00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2461 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3722 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.166 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.166 The recommended git tool is: git 00:00:00.167 using credential 00000000-0000-0000-0000-000000000002 00:00:00.168 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.201 Fetching changes from the remote Git repository 00:00:00.203 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.250 Using shallow fetch with depth 1 00:00:00.250 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.250 > git --version # timeout=10 00:00:00.295 > git --version # 'git version 2.39.2' 00:00:00.295 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.325 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.325 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.258 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.270 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.281 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.281 > git config core.sparsecheckout # timeout=10 00:00:05.292 > git read-tree -mu HEAD # timeout=10 00:00:05.308 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.326 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.326 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.434 [Pipeline] Start of Pipeline 00:00:05.448 [Pipeline] library 00:00:05.450 Loading library shm_lib@master 00:00:05.450 Library shm_lib@master is cached. Copying from home. 00:00:05.464 [Pipeline] node 00:00:05.476 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.478 [Pipeline] { 00:00:05.486 [Pipeline] catchError 00:00:05.487 [Pipeline] { 00:00:05.496 [Pipeline] wrap 00:00:05.503 [Pipeline] { 00:00:05.508 [Pipeline] stage 00:00:05.509 [Pipeline] { (Prologue) 00:00:05.522 [Pipeline] echo 00:00:05.523 Node: VM-host-SM38 00:00:05.528 [Pipeline] cleanWs 00:00:05.537 [WS-CLEANUP] Deleting project workspace... 00:00:05.537 [WS-CLEANUP] Deferred wipeout is used... 
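The checkout at the top of this log is a shallow, pinned fetch: Jenkins configures the remote, fetches refs/heads/master at depth 1, and force-checks-out the exact revision it resolved from FETCH_HEAD. A minimal bash sketch of the same sequence, for reproducing the jbp workspace by hand (the URL and revision are copied from the log above; the target directory name is our own):

    repo=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    rev=db4637e8b949f278f369ec13f70585206ccd9507   # the FETCH_HEAD resolved above
    git init jbp && cd jbp
    git config remote.origin.url "$repo"
    # --depth=1 keeps only the tip commit; the later `git rev-list --no-walk`
    # is safe on such a clone because it never traverses history
    git fetch --tags --force --progress --depth=1 -- "$repo" refs/heads/master
    git checkout -f "$rev"   # detached HEAD at the pinned revision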
00:00:05.543 [WS-CLEANUP] done 00:00:05.787 [Pipeline] setCustomBuildProperty 00:00:05.883 [Pipeline] httpRequest 00:00:06.468 [Pipeline] echo 00:00:06.470 Sorcerer 10.211.164.20 is alive 00:00:06.479 [Pipeline] retry 00:00:06.480 [Pipeline] { 00:00:06.494 [Pipeline] httpRequest 00:00:06.499 HttpMethod: GET 00:00:06.500 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.501 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.510 Response Code: HTTP/1.1 200 OK 00:00:06.511 Success: Status code 200 is in the accepted range: 200,404 00:00:06.511 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.229 [Pipeline] } 00:00:11.247 [Pipeline] // retry 00:00:11.254 [Pipeline] sh 00:00:11.541 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.559 [Pipeline] httpRequest 00:00:11.928 [Pipeline] echo 00:00:11.929 Sorcerer 10.211.164.20 is alive 00:00:11.937 [Pipeline] retry 00:00:11.939 [Pipeline] { 00:00:11.951 [Pipeline] httpRequest 00:00:11.955 HttpMethod: GET 00:00:11.956 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:11.956 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:11.970 Response Code: HTTP/1.1 200 OK 00:00:11.970 Success: Status code 200 is in the accepted range: 200,404 00:00:11.971 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:00.294 [Pipeline] } 00:01:00.312 [Pipeline] // retry 00:01:00.319 [Pipeline] sh 00:01:00.610 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:03.165 [Pipeline] sh 00:01:03.450 + git -C spdk log --oneline -n5 00:01:03.450 c13c99a5e test: Various fixes for Fedora40 00:01:03.450 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:03.450 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:03.450 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:03.450 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:03.470 [Pipeline] writeFile 00:01:03.486 [Pipeline] sh 00:01:03.772 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:03.786 [Pipeline] sh 00:01:04.072 + cat autorun-spdk.conf 00:01:04.072 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.072 SPDK_TEST_NVME=1 00:01:04.072 SPDK_TEST_FTL=1 00:01:04.072 SPDK_TEST_ISAL=1 00:01:04.072 SPDK_RUN_ASAN=1 00:01:04.072 SPDK_RUN_UBSAN=1 00:01:04.072 SPDK_TEST_XNVME=1 00:01:04.072 SPDK_TEST_NVME_FDP=1 00:01:04.072 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:04.081 RUN_NIGHTLY=1 00:01:04.083 [Pipeline] } 00:01:04.097 [Pipeline] // stage 00:01:04.112 [Pipeline] stage 00:01:04.114 [Pipeline] { (Run VM) 00:01:04.127 [Pipeline] sh 00:01:04.413 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:04.414 + echo 'Start stage prepare_nvme.sh' 00:01:04.414 Start stage prepare_nvme.sh 00:01:04.414 + [[ -n 8 ]] 00:01:04.414 + disk_prefix=ex8 00:01:04.414 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:04.414 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:04.414 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:04.414 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.414 ++ SPDK_TEST_NVME=1 00:01:04.414 ++ SPDK_TEST_FTL=1 00:01:04.414 ++ SPDK_TEST_ISAL=1 00:01:04.414 ++ 
SPDK_RUN_ASAN=1 00:01:04.414 ++ SPDK_RUN_UBSAN=1 00:01:04.414 ++ SPDK_TEST_XNVME=1 00:01:04.414 ++ SPDK_TEST_NVME_FDP=1 00:01:04.414 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:04.414 ++ RUN_NIGHTLY=1 00:01:04.414 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:04.414 + nvme_files=() 00:01:04.414 + declare -A nvme_files 00:01:04.414 + backend_dir=/var/lib/libvirt/images/backends 00:01:04.414 + nvme_files['nvme.img']=5G 00:01:04.414 + nvme_files['nvme-cmb.img']=5G 00:01:04.414 + nvme_files['nvme-multi0.img']=4G 00:01:04.414 + nvme_files['nvme-multi1.img']=4G 00:01:04.414 + nvme_files['nvme-multi2.img']=4G 00:01:04.414 + nvme_files['nvme-openstack.img']=8G 00:01:04.414 + nvme_files['nvme-zns.img']=5G 00:01:04.414 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:04.414 + (( SPDK_TEST_FTL == 1 )) 00:01:04.414 + nvme_files["nvme-ftl.img"]=6G 00:01:04.414 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:04.414 + nvme_files["nvme-fdp.img"]=1G 00:01:04.414 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:04.414 + for nvme in "${!nvme_files[@]}" 00:01:04.414 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G 00:01:04.414 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:04.414 + for nvme in "${!nvme_files[@]}" 00:01:04.414 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-ftl.img -s 6G 00:01:04.414 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:04.414 + for nvme in "${!nvme_files[@]}" 00:01:04.414 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G 00:01:04.414 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:04.414 + for nvme in "${!nvme_files[@]}" 00:01:04.414 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G 00:01:04.414 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:04.414 + for nvme in "${!nvme_files[@]}" 00:01:04.414 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G 00:01:04.988 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:05.249 + for nvme in "${!nvme_files[@]}" 00:01:05.249 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G 00:01:05.249 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:05.249 + for nvme in "${!nvme_files[@]}" 00:01:05.249 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G 00:01:05.249 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:05.249 + for nvme in "${!nvme_files[@]}" 00:01:05.249 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-fdp.img -s 1G 00:01:05.249 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:05.249 + for nvme in "${!nvme_files[@]}" 00:01:05.249 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G 00:01:05.820 Formatting 
'/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:05.820 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu 00:01:05.820 + echo 'End stage prepare_nvme.sh' 00:01:05.820 End stage prepare_nvme.sh 00:01:05.831 [Pipeline] sh 00:01:06.112 + DISTRO=fedora39 00:01:06.112 + CPUS=10 00:01:06.112 + RAM=12288 00:01:06.112 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:06.112 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex8-nvme.img -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex8-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:06.112 00:01:06.112 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:06.113 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:06.113 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:06.113 HELP=0 00:01:06.113 DRY_RUN=0 00:01:06.113 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,/var/lib/libvirt/images/backends/ex8-nvme-fdp.img, 00:01:06.113 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:06.113 NVME_AUTO_CREATE=0 00:01:06.113 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,, 00:01:06.113 NVME_CMB=,,,, 00:01:06.113 NVME_PMR=,,,, 00:01:06.113 NVME_ZNS=,,,, 00:01:06.113 NVME_MS=true,,,, 00:01:06.113 NVME_FDP=,,,on, 00:01:06.113 SPDK_VAGRANT_DISTRO=fedora39 00:01:06.113 SPDK_VAGRANT_VMCPU=10 00:01:06.113 SPDK_VAGRANT_VMRAM=12288 00:01:06.113 SPDK_VAGRANT_PROVIDER=libvirt 00:01:06.113 SPDK_VAGRANT_HTTP_PROXY= 00:01:06.113 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:06.113 SPDK_OPENSTACK_NETWORK=0 00:01:06.113 VAGRANT_PACKAGE_BOX=0 00:01:06.113 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:06.113 FORCE_DISTRO=true 00:01:06.113 VAGRANT_BOX_VERSION= 00:01:06.113 EXTRA_VAGRANTFILES= 00:01:06.113 NIC_MODEL=e1000 00:01:06.113 00:01:06.113 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:06.113 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:08.663 Bringing machine 'default' up with 'libvirt' provider... 00:01:08.925 ==> default: Creating image (snapshot of base box volume). 00:01:09.188 ==> default: Creating domain with the following settings... 
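The comma-separated option strings in the Setup dump above are positional: field N applies to NVMe controller N, so NVME_MS=true,,,, requests metadata (the ms=64 namespace seen below) on controller 0 only, and NVME_FDP=,,,on, enables Flexible Data Placement on controller 3 only. The domain settings that follow show how vagrant_create_vm.sh expands those specs into QEMU arguments; for reference, a hand-assembled equivalent of just the FDP controller (a sketch: the device/drive values are copied verbatim from the command-line args printed below, while the stand-alone invocation itself is our illustration and assumes the pinned v8.0.0 QEMU build, which is recent enough to emulate FDP):

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

Attaching the namespace through an explicit nvme-subsys device is what lets QEMU expose the FDP geometry (reclaim unit size fdp.runs=96M, fdp.nrg=2 reclaim groups, fdp.nruh=8 reclaim unit handles) that the SPDK_TEST_NVME_FDP=1 tests exercise.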
00:01:09.188 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734132879_7f6dbfa6b9971ed1f28c 00:01:09.188 ==> default: -- Domain type: kvm 00:01:09.188 ==> default: -- Cpus: 10 00:01:09.188 ==> default: -- Feature: acpi 00:01:09.188 ==> default: -- Feature: apic 00:01:09.188 ==> default: -- Feature: pae 00:01:09.188 ==> default: -- Memory: 12288M 00:01:09.188 ==> default: -- Memory Backing: hugepages: 00:01:09.188 ==> default: -- Management MAC: 00:01:09.188 ==> default: -- Loader: 00:01:09.188 ==> default: -- Nvram: 00:01:09.188 ==> default: -- Base box: spdk/fedora39 00:01:09.188 ==> default: -- Storage pool: default 00:01:09.188 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734132879_7f6dbfa6b9971ed1f28c.img (20G) 00:01:09.188 ==> default: -- Volume Cache: default 00:01:09.188 ==> default: -- Kernel: 00:01:09.188 ==> default: -- Initrd: 00:01:09.188 ==> default: -- Graphics Type: vnc 00:01:09.188 ==> default: -- Graphics Port: -1 00:01:09.188 ==> default: -- Graphics IP: 127.0.0.1 00:01:09.188 ==> default: -- Graphics Password: Not defined 00:01:09.188 ==> default: -- Video Type: cirrus 00:01:09.188 ==> default: -- Video VRAM: 9216 00:01:09.188 ==> default: -- Sound Type: 00:01:09.188 ==> default: -- Keymap: en-us 00:01:09.188 ==> default: -- TPM Path: 00:01:09.188 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:09.188 ==> default: -- Command line args: 00:01:09.188 ==> default: -> value=-device, 00:01:09.188 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:09.188 ==> default: -> value=-drive, 00:01:09.188 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:09.188 ==> default: -> value=-device, 00:01:09.188 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:09.188 ==> default: -> value=-device, 00:01:09.188 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:09.188 ==> default: -> value=-drive, 00:01:09.188 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-1-drive0, 00:01:09.188 ==> default: -> value=-device, 00:01:09.188 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:09.188 ==> default: -> value=-device, 00:01:09.188 ==> default: -> value=nvme,id=nvme-2,serial=12342, 00:01:09.188 ==> default: -> value=-drive, 00:01:09.188 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:09.188 ==> default: -> value=-device, 00:01:09.188 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:09.188 ==> default: -> value=-drive, 00:01:09.188 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:09.188 ==> default: -> value=-device, 00:01:09.188 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:09.188 ==> default: -> value=-drive, 00:01:09.188 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:09.188 ==> default: -> value=-device, 00:01:09.188 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:09.188 ==> default: -> value=-device, 00:01:09.188 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:09.188 ==> default: -> value=-device, 00:01:09.188 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3, 00:01:09.188 ==> default: -> value=-drive, 00:01:09.188 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:09.188 ==> default: -> value=-device, 00:01:09.188 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:09.188 ==> default: Creating shared folders metadata... 00:01:09.449 ==> default: Starting domain. 00:01:11.368 ==> default: Waiting for domain to get an IP address... 00:01:26.279 ==> default: Waiting for SSH to become available... 00:01:27.658 ==> default: Configuring and enabling network interfaces... 00:01:31.860 default: SSH address: 192.168.121.252:22 00:01:31.860 default: SSH username: vagrant 00:01:31.860 default: SSH auth method: private key 00:01:33.771 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:41.906 ==> default: Mounting SSHFS shared folder... 00:01:43.291 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:43.291 ==> default: Checking Mount.. 00:01:44.677 ==> default: Folder Successfully Mounted! 00:01:44.677 00:01:44.677 SUCCESS! 00:01:44.677 00:01:44.677 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:44.677 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:44.677 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:44.677 00:01:44.688 [Pipeline] } 00:01:44.702 [Pipeline] // stage 00:01:44.710 [Pipeline] dir 00:01:44.711 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:44.712 [Pipeline] { 00:01:44.724 [Pipeline] catchError 00:01:44.726 [Pipeline] { 00:01:44.737 [Pipeline] sh 00:01:45.023 + vagrant ssh-config --host vagrant 00:01:45.023 + sed -ne '/^Host/,$p' 00:01:45.023 + tee ssh_conf 00:01:47.628 Host vagrant 00:01:47.628 HostName 192.168.121.252 00:01:47.628 User vagrant 00:01:47.628 Port 22 00:01:47.628 UserKnownHostsFile /dev/null 00:01:47.628 StrictHostKeyChecking no 00:01:47.628 PasswordAuthentication no 00:01:47.628 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:47.628 IdentitiesOnly yes 00:01:47.628 LogLevel FATAL 00:01:47.628 ForwardAgent yes 00:01:47.628 ForwardX11 yes 00:01:47.628 00:01:47.643 [Pipeline] withEnv 00:01:47.645 [Pipeline] { 00:01:47.658 [Pipeline] sh 00:01:47.943 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:47.943 source /etc/os-release 00:01:47.943 [[ -e /image.version ]] && img=$(< /image.version) 00:01:47.943 # Minimal, systemd-like check. 
00:01:47.943 if [[ -e /.dockerenv ]]; then 00:01:47.943 # Clear garbage from the node'\''s name: 00:01:47.943 # agt-er_autotest_547-896 -> autotest_547-896 00:01:47.943 # $HOSTNAME is the actual container id 00:01:47.943 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:47.943 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:47.943 # We can assume this is a mount from a host where container is running, 00:01:47.943 # so fetch its hostname to easily identify the target swarm worker. 00:01:47.943 container="$(< /etc/hostname) ($agent)" 00:01:47.943 else 00:01:47.943 # Fallback 00:01:47.943 container=$agent 00:01:47.943 fi 00:01:47.943 fi 00:01:47.943 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:47.943 ' 00:01:48.218 [Pipeline] } 00:01:48.233 [Pipeline] // withEnv 00:01:48.242 [Pipeline] setCustomBuildProperty 00:01:48.256 [Pipeline] stage 00:01:48.258 [Pipeline] { (Tests) 00:01:48.274 [Pipeline] sh 00:01:48.559 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:48.835 [Pipeline] sh 00:01:49.120 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:49.397 [Pipeline] timeout 00:01:49.397 Timeout set to expire in 50 min 00:01:49.399 [Pipeline] { 00:01:49.413 [Pipeline] sh 00:01:49.698 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:01:50.270 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:01:50.284 [Pipeline] sh 00:01:50.570 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:01:50.847 [Pipeline] sh 00:01:51.134 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:51.413 [Pipeline] sh 00:01:51.749 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:01:51.749 ++ readlink -f spdk_repo 00:01:51.749 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:51.749 + [[ -n /home/vagrant/spdk_repo ]] 00:01:51.749 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:51.749 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:51.749 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:51.749 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:51.749 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:51.749 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:51.749 + cd /home/vagrant/spdk_repo 00:01:51.749 + source /etc/os-release 00:01:51.749 ++ NAME='Fedora Linux' 00:01:51.749 ++ VERSION='39 (Cloud Edition)' 00:01:51.749 ++ ID=fedora 00:01:51.749 ++ VERSION_ID=39 00:01:51.749 ++ VERSION_CODENAME= 00:01:51.749 ++ PLATFORM_ID=platform:f39 00:01:51.749 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:51.749 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:51.749 ++ LOGO=fedora-logo-icon 00:01:51.749 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:51.749 ++ HOME_URL=https://fedoraproject.org/ 00:01:51.749 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:51.749 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:51.749 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:51.749 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:51.749 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:51.749 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:51.749 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:51.749 ++ SUPPORT_END=2024-11-12 00:01:51.749 ++ VARIANT='Cloud Edition' 00:01:51.749 ++ VARIANT_ID=cloud 00:01:51.749 + uname -a 00:01:51.749 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:51.749 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:52.011 Hugepages 00:01:52.011 node hugesize free / total 00:01:52.011 node0 1048576kB 0 / 0 00:01:52.011 node0 2048kB 0 / 0 00:01:52.011 00:01:52.011 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:52.011 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:52.011 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:52.011 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:52.011 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:01:52.011 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:52.011 + rm -f /tmp/spdk-ld-path 00:01:52.011 + source autorun-spdk.conf 00:01:52.011 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:52.011 ++ SPDK_TEST_NVME=1 00:01:52.011 ++ SPDK_TEST_FTL=1 00:01:52.011 ++ SPDK_TEST_ISAL=1 00:01:52.011 ++ SPDK_RUN_ASAN=1 00:01:52.011 ++ SPDK_RUN_UBSAN=1 00:01:52.011 ++ SPDK_TEST_XNVME=1 00:01:52.011 ++ SPDK_TEST_NVME_FDP=1 00:01:52.011 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:52.011 ++ RUN_NIGHTLY=1 00:01:52.011 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:52.011 + [[ -n '' ]] 00:01:52.011 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:52.273 + for M in /var/spdk/build-*-manifest.txt 00:01:52.273 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:52.273 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:52.273 + for M in /var/spdk/build-*-manifest.txt 00:01:52.273 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:52.273 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:52.273 + for M in /var/spdk/build-*-manifest.txt 00:01:52.273 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:52.273 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:52.273 ++ uname 00:01:52.273 + [[ Linux == \L\i\n\u\x ]] 00:01:52.273 + sudo dmesg -T 00:01:52.273 + sudo dmesg --clear 00:01:52.273 + dmesg_pid=4980 00:01:52.273 + [[ Fedora Linux == FreeBSD ]] 00:01:52.273 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:52.273 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:52.273 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:52.273 + [[ -x /usr/src/fio-static/fio ]] 00:01:52.273 + sudo dmesg -Tw 00:01:52.273 + export FIO_BIN=/usr/src/fio-static/fio 00:01:52.273 + FIO_BIN=/usr/src/fio-static/fio 00:01:52.273 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:52.273 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:52.273 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:52.273 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:52.273 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:52.273 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:52.273 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:52.273 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:52.273 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:52.273 Test configuration: 00:01:52.273 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:52.273 SPDK_TEST_NVME=1 00:01:52.273 SPDK_TEST_FTL=1 00:01:52.273 SPDK_TEST_ISAL=1 00:01:52.273 SPDK_RUN_ASAN=1 00:01:52.273 SPDK_RUN_UBSAN=1 00:01:52.273 SPDK_TEST_XNVME=1 00:01:52.273 SPDK_TEST_NVME_FDP=1 00:01:52.273 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:52.273 RUN_NIGHTLY=1 23:35:22 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:01:52.273 23:35:22 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:52.273 23:35:22 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:52.273 23:35:22 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:52.273 23:35:22 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:52.273 23:35:22 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:52.273 23:35:22 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:52.273 23:35:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:52.273 23:35:22 -- paths/export.sh@5 -- $ export PATH 00:01:52.273 23:35:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:52.273 23:35:22 -- common/autobuild_common.sh@439 -- $ 
out=/home/vagrant/spdk_repo/spdk/../output 00:01:52.273 23:35:22 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:52.273 23:35:22 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734132922.XXXXXX 00:01:52.273 23:35:22 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734132922.qFawFj 00:01:52.273 23:35:22 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:52.273 23:35:22 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:01:52.273 23:35:22 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:52.273 23:35:22 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:52.273 23:35:22 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:52.273 23:35:22 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:52.273 23:35:22 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:01:52.273 23:35:22 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.273 23:35:22 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:52.273 23:35:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:52.273 23:35:22 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:52.273 23:35:22 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:52.273 23:35:22 -- spdk/autobuild.sh@16 -- $ date -u 00:01:52.273 Fri Dec 13 11:35:23 PM UTC 2024 00:01:52.273 23:35:23 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:52.536 LTS-67-gc13c99a5e 00:01:52.536 23:35:23 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:52.536 23:35:23 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:52.536 23:35:23 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:52.536 23:35:23 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:52.536 23:35:23 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.536 ************************************ 00:01:52.536 START TEST asan 00:01:52.536 ************************************ 00:01:52.536 using asan 00:01:52.536 23:35:23 -- common/autotest_common.sh@1114 -- $ echo 'using asan' 00:01:52.536 00:01:52.536 real 0m0.000s 00:01:52.536 user 0m0.000s 00:01:52.536 sys 0m0.000s 00:01:52.536 23:35:23 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:52.536 ************************************ 00:01:52.536 END TEST asan 00:01:52.536 ************************************ 00:01:52.536 23:35:23 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.536 23:35:23 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:52.536 23:35:23 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:52.536 23:35:23 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:52.536 23:35:23 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:52.536 23:35:23 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.536 ************************************ 00:01:52.536 START TEST ubsan 00:01:52.536 ************************************ 00:01:52.536 using ubsan 00:01:52.536 23:35:23 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:01:52.536 00:01:52.536 real 0m0.000s 00:01:52.536 user 0m0.000s 00:01:52.536 
sys 0m0.000s 00:01:52.536 ************************************ 00:01:52.536 END TEST ubsan 00:01:52.536 ************************************ 00:01:52.536 23:35:23 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:52.536 23:35:23 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.536 23:35:23 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:52.536 23:35:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:52.536 23:35:23 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:52.536 23:35:23 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:52.536 23:35:23 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:52.536 23:35:23 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:52.536 23:35:23 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:52.536 23:35:23 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:52.536 23:35:23 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:52.536 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:52.536 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:53.107 Using 'verbs' RDMA provider 00:02:05.916 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:18.180 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:18.180 Creating mk/config.mk...done. 00:02:18.180 Creating mk/cc.flags.mk...done. 00:02:18.180 Type 'make' to build. 00:02:18.180 23:35:47 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:18.180 23:35:47 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:18.180 23:35:47 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:18.180 23:35:47 -- common/autotest_common.sh@10 -- $ set +x 00:02:18.180 ************************************ 00:02:18.180 START TEST make 00:02:18.180 ************************************ 00:02:18.180 23:35:47 -- common/autotest_common.sh@1114 -- $ make -j10 00:02:18.180 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:18.180 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:18.180 meson setup builddir \ 00:02:18.180 -Dwith-libaio=enabled \ 00:02:18.180 -Dwith-liburing=enabled \ 00:02:18.180 -Dwith-libvfn=disabled \ 00:02:18.180 -Dwith-spdk=false && \ 00:02:18.180 meson compile -C builddir && \ 00:02:18.180 cd -) 00:02:18.180 make[1]: Nothing to be done for 'all'. 
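The -D flags passed to `meson setup` in the make recipe above are xnvme's project options, and Meson echoes them back under "User defined options" in the output that follows. A quick way to audit what an existing builddir was configured with (a sketch, assuming it runs from the xnvme source tree after the setup step has completed):

    cd /home/vagrant/spdk_repo/spdk/xnvme
    # `meson configure` with no -D arguments prints the current option values
    meson configure builddir | grep -E 'with-(libaio|liburing|libvfn|spdk)'
    # changing a backend later is a reconfigure, not a fresh checkout:
    #   meson setup --reconfigure builddir -Dwith-libvfn=enabled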
00:02:19.566 The Meson build system 00:02:19.566 Version: 1.5.0 00:02:19.566 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:19.566 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:19.566 Build type: native build 00:02:19.566 Project name: xnvme 00:02:19.566 Project version: 0.7.3 00:02:19.566 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:19.566 C linker for the host machine: cc ld.bfd 2.40-14 00:02:19.566 Host machine cpu family: x86_64 00:02:19.566 Host machine cpu: x86_64 00:02:19.566 Message: host_machine.system: linux 00:02:19.566 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:19.566 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:19.566 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:19.566 Run-time dependency threads found: YES 00:02:19.566 Has header "setupapi.h" : NO 00:02:19.566 Has header "linux/blkzoned.h" : YES 00:02:19.566 Has header "linux/blkzoned.h" : YES (cached) 00:02:19.566 Has header "libaio.h" : YES 00:02:19.566 Library aio found: YES 00:02:19.566 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:19.566 Run-time dependency liburing found: YES 2.2 00:02:19.566 Dependency libvfn skipped: feature with-libvfn disabled 00:02:19.566 Run-time dependency appleframeworks found: NO (tried framework) 00:02:19.566 Run-time dependency appleframeworks found: NO (tried framework) 00:02:19.566 Configuring xnvme_config.h using configuration 00:02:19.566 Configuring xnvme.spec using configuration 00:02:19.566 Run-time dependency bash-completion found: YES 2.11 00:02:19.566 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:19.566 Program cp found: YES (/usr/bin/cp) 00:02:19.566 Has header "winsock2.h" : NO 00:02:19.566 Has header "dbghelp.h" : NO 00:02:19.566 Library rpcrt4 found: NO 00:02:19.566 Library rt found: YES 00:02:19.566 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:19.566 Found CMake: /usr/bin/cmake (3.27.7) 00:02:19.566 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:02:19.566 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:02:19.566 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:02:19.566 Build targets in project: 32 00:02:19.566 00:02:19.566 xnvme 0.7.3 00:02:19.566 00:02:19.566 User defined options 00:02:19.566 with-libaio : enabled 00:02:19.566 with-liburing: enabled 00:02:19.566 with-libvfn : disabled 00:02:19.566 with-spdk : false 00:02:19.566 00:02:19.566 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:19.828 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:19.828 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:02:20.090 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:02:20.090 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:02:20.090 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:02:20.090 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:02:20.090 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:02:20.090 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:02:20.090 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:02:20.090 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:02:20.090 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 
00:02:20.090 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:02:20.090 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:02:20.090 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:02:20.090 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:02:20.090 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:02:20.090 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:02:20.090 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:02:20.351 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:02:20.351 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:02:20.351 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:02:20.351 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:02:20.351 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:02:20.351 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:02:20.351 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:02:20.351 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:02:20.351 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:02:20.351 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:02:20.351 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:02:20.351 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:02:20.351 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:02:20.351 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:02:20.351 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:02:20.351 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:02:20.351 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:02:20.351 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:02:20.351 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:02:20.351 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:02:20.351 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:02:20.351 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:02:20.351 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:02:20.351 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:02:20.351 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:02:20.351 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:02:20.351 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:02:20.351 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:02:20.351 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:02:20.351 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:02:20.351 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:02:20.351 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:02:20.351 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:02:20.351 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:02:20.351 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:02:20.351 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:02:20.351 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:02:20.613 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:02:20.613 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:02:20.613 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:02:20.613 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:02:20.613 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:02:20.613 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:02:20.613 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:02:20.613 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:02:20.613 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:02:20.613 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:02:20.613 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:02:20.613 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:02:20.613 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:02:20.613 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:02:20.613 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:02:20.613 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:02:20.613 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:02:20.613 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:02:20.613 [73/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:02:20.613 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:02:20.613 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:02:20.613 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:02:20.613 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:02:20.613 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:02:20.613 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:02:20.875 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:02:20.875 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:02:20.875 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:02:20.875 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:02:20.875 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:02:20.875 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:02:20.875 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:02:20.875 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:02:20.875 [88/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:02:20.875 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:02:20.875 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:02:20.875 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:02:20.875 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:02:20.875 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:02:20.875 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:02:20.875 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:02:20.875 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:02:20.875 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:02:20.875 [98/203] Compiling C 
object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:02:20.875 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:02:20.875 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:02:20.875 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:02:20.875 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:02:20.875 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:02:20.875 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:02:20.875 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:02:20.875 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:02:20.875 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:02:20.875 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:02:20.875 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:02:20.875 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:02:21.136 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:02:21.136 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:02:21.136 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:02:21.136 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:02:21.136 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:02:21.136 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:02:21.136 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:02:21.136 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:02:21.136 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:02:21.136 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:02:21.136 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:02:21.136 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:02:21.136 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:02:21.136 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:02:21.136 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:02:21.136 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:02:21.136 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:02:21.136 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:02:21.136 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:02:21.136 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:02:21.136 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:02:21.136 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:02:21.136 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:02:21.136 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:02:21.136 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:02:21.136 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:02:21.136 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:02:21.136 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:02:21.136 [139/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:02:21.397 [140/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:02:21.397 [141/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:02:21.397 [142/203] Compiling C object 
lib/libxnvme.so.p/xnvme_spec.c.o 00:02:21.397 [143/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:02:21.397 [144/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:02:21.397 [145/203] Linking target lib/libxnvme.so 00:02:21.397 [146/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:02:21.397 [147/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:02:21.397 [148/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:02:21.397 [149/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:02:21.397 [150/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:02:21.397 [151/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:02:21.397 [152/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:02:21.398 [153/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:02:21.398 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:02:21.398 [155/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:02:21.398 [156/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:02:21.398 [157/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:02:21.659 [158/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:02:21.659 [159/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:02:21.659 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:02:21.659 [161/203] Compiling C object tools/xdd.p/xdd.c.o 00:02:21.659 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:02:21.659 [163/203] Compiling C object tools/kvs.p/kvs.c.o 00:02:21.659 [164/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:02:21.659 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:02:21.659 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:02:21.659 [167/203] Compiling C object tools/lblk.p/lblk.c.o 00:02:21.659 [168/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:02:21.659 [169/203] Compiling C object tools/zoned.p/zoned.c.o 00:02:21.659 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:02:21.659 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:02:21.920 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:02:21.920 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:02:21.920 [174/203] Linking static target lib/libxnvme.a 00:02:21.920 [175/203] Linking target tests/xnvme_tests_async_intf 00:02:21.920 [176/203] Linking target tests/xnvme_tests_cli 00:02:21.920 [177/203] Linking target tests/xnvme_tests_lblk 00:02:21.920 [178/203] Linking target tests/xnvme_tests_scc 00:02:21.920 [179/203] Linking target tests/xnvme_tests_enum 00:02:21.920 [180/203] Linking target tests/xnvme_tests_znd_explicit_open 00:02:21.920 [181/203] Linking target tests/xnvme_tests_ioworker 00:02:21.920 [182/203] Linking target tests/xnvme_tests_znd_append 00:02:21.920 [183/203] Linking target tests/xnvme_tests_buf 00:02:21.920 [184/203] Linking target tests/xnvme_tests_xnvme_cli 00:02:21.920 [185/203] Linking target tests/xnvme_tests_znd_state 00:02:21.920 [186/203] Linking target tests/xnvme_tests_map 00:02:21.920 [187/203] Linking target tests/xnvme_tests_xnvme_file 00:02:21.920 [188/203] Linking target tools/lblk 00:02:21.920 [189/203] Linking target tests/xnvme_tests_znd_zrwa 00:02:21.920 [190/203] Linking target tools/xdd 00:02:21.920 [191/203] Linking target 
tools/xnvme 00:02:21.920 [192/203] Linking target tools/zoned 00:02:21.920 [193/203] Linking target tools/xnvme_file 00:02:21.920 [194/203] Linking target tools/kvs 00:02:21.920 [195/203] Linking target tests/xnvme_tests_kvs 00:02:21.920 [196/203] Linking target examples/xnvme_dev 00:02:21.920 [197/203] Linking target examples/xnvme_hello 00:02:21.920 [198/203] Linking target examples/xnvme_enum 00:02:21.920 [199/203] Linking target examples/xnvme_io_async 00:02:21.920 [200/203] Linking target examples/zoned_io_async 00:02:21.920 [201/203] Linking target examples/zoned_io_sync 00:02:21.920 [202/203] Linking target examples/xnvme_single_sync 00:02:21.920 [203/203] Linking target examples/xnvme_single_async 00:02:21.920 INFO: autodetecting backend as ninja 00:02:21.920 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:21.920 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:25.216 The Meson build system 00:02:25.216 Version: 1.5.0 00:02:25.216 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:25.216 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:25.216 Build type: native build 00:02:25.216 Program cat found: YES (/usr/bin/cat) 00:02:25.216 Project name: DPDK 00:02:25.216 Project version: 23.11.0 00:02:25.216 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:25.216 C linker for the host machine: cc ld.bfd 2.40-14 00:02:25.216 Host machine cpu family: x86_64 00:02:25.216 Host machine cpu: x86_64 00:02:25.216 Message: ## Building in Developer Mode ## 00:02:25.216 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:25.216 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:25.216 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:25.216 Program python3 found: YES (/usr/bin/python3) 00:02:25.216 Program cat found: YES (/usr/bin/cat) 00:02:25.216 Compiler for C supports arguments -march=native: YES 00:02:25.216 Checking for size of "void *" : 8 00:02:25.216 Checking for size of "void *" : 8 (cached) 00:02:25.216 Library m found: YES 00:02:25.216 Library numa found: YES 00:02:25.216 Has header "numaif.h" : YES 00:02:25.216 Library fdt found: NO 00:02:25.216 Library execinfo found: NO 00:02:25.216 Has header "execinfo.h" : YES 00:02:25.216 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:25.216 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:25.216 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:25.216 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:25.216 Run-time dependency openssl found: YES 3.1.1 00:02:25.216 Run-time dependency libpcap found: YES 1.10.4 00:02:25.216 Has header "pcap.h" with dependency libpcap: YES 00:02:25.216 Compiler for C supports arguments -Wcast-qual: YES 00:02:25.216 Compiler for C supports arguments -Wdeprecated: YES 00:02:25.216 Compiler for C supports arguments -Wformat: YES 00:02:25.216 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:25.216 Compiler for C supports arguments -Wformat-security: NO 00:02:25.216 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:25.216 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:25.216 Compiler for C supports arguments -Wnested-externs: YES 00:02:25.216 Compiler for C supports arguments -Wold-style-definition: YES 00:02:25.216 Compiler for C supports arguments -Wpointer-arith: 
YES 00:02:25.216 Compiler for C supports arguments -Wsign-compare: YES 00:02:25.216 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:25.216 Compiler for C supports arguments -Wundef: YES 00:02:25.216 Compiler for C supports arguments -Wwrite-strings: YES 00:02:25.216 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:25.216 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:25.216 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:25.216 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:25.216 Program objdump found: YES (/usr/bin/objdump) 00:02:25.216 Compiler for C supports arguments -mavx512f: YES 00:02:25.216 Checking if "AVX512 checking" compiles: YES 00:02:25.216 Fetching value of define "__SSE4_2__" : 1 00:02:25.216 Fetching value of define "__AES__" : 1 00:02:25.216 Fetching value of define "__AVX__" : 1 00:02:25.216 Fetching value of define "__AVX2__" : 1 00:02:25.216 Fetching value of define "__AVX512BW__" : 1 00:02:25.216 Fetching value of define "__AVX512CD__" : 1 00:02:25.216 Fetching value of define "__AVX512DQ__" : 1 00:02:25.216 Fetching value of define "__AVX512F__" : 1 00:02:25.216 Fetching value of define "__AVX512VL__" : 1 00:02:25.216 Fetching value of define "__PCLMUL__" : 1 00:02:25.216 Fetching value of define "__RDRND__" : 1 00:02:25.216 Fetching value of define "__RDSEED__" : 1 00:02:25.216 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:25.216 Fetching value of define "__znver1__" : (undefined) 00:02:25.216 Fetching value of define "__znver2__" : (undefined) 00:02:25.216 Fetching value of define "__znver3__" : (undefined) 00:02:25.216 Fetching value of define "__znver4__" : (undefined) 00:02:25.216 Library asan found: YES 00:02:25.216 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:25.216 Message: lib/log: Defining dependency "log" 00:02:25.216 Message: lib/kvargs: Defining dependency "kvargs" 00:02:25.216 Message: lib/telemetry: Defining dependency "telemetry" 00:02:25.216 Library rt found: YES 00:02:25.216 Checking for function "getentropy" : NO 00:02:25.216 Message: lib/eal: Defining dependency "eal" 00:02:25.216 Message: lib/ring: Defining dependency "ring" 00:02:25.216 Message: lib/rcu: Defining dependency "rcu" 00:02:25.216 Message: lib/mempool: Defining dependency "mempool" 00:02:25.216 Message: lib/mbuf: Defining dependency "mbuf" 00:02:25.216 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:25.216 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:25.216 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:25.216 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:25.216 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:25.216 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:25.216 Compiler for C supports arguments -mpclmul: YES 00:02:25.216 Compiler for C supports arguments -maes: YES 00:02:25.216 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:25.216 Compiler for C supports arguments -mavx512bw: YES 00:02:25.216 Compiler for C supports arguments -mavx512dq: YES 00:02:25.216 Compiler for C supports arguments -mavx512vl: YES 00:02:25.216 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:25.216 Compiler for C supports arguments -mavx2: YES 00:02:25.216 Compiler for C supports arguments -mavx: YES 00:02:25.216 Message: lib/net: Defining dependency "net" 00:02:25.216 Message: lib/meter: Defining dependency "meter" 00:02:25.216 Message: lib/ethdev: Defining 
dependency "ethdev" 00:02:25.216 Message: lib/pci: Defining dependency "pci" 00:02:25.216 Message: lib/cmdline: Defining dependency "cmdline" 00:02:25.216 Message: lib/hash: Defining dependency "hash" 00:02:25.216 Message: lib/timer: Defining dependency "timer" 00:02:25.216 Message: lib/compressdev: Defining dependency "compressdev" 00:02:25.216 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:25.216 Message: lib/dmadev: Defining dependency "dmadev" 00:02:25.216 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:25.216 Message: lib/power: Defining dependency "power" 00:02:25.216 Message: lib/reorder: Defining dependency "reorder" 00:02:25.216 Message: lib/security: Defining dependency "security" 00:02:25.216 Has header "linux/userfaultfd.h" : YES 00:02:25.216 Has header "linux/vduse.h" : YES 00:02:25.216 Message: lib/vhost: Defining dependency "vhost" 00:02:25.216 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:25.216 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:25.216 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:25.216 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:25.216 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:25.216 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:25.216 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:25.216 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:25.216 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:25.216 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:25.216 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:25.216 Configuring doxy-api-html.conf using configuration 00:02:25.216 Configuring doxy-api-man.conf using configuration 00:02:25.216 Program mandb found: YES (/usr/bin/mandb) 00:02:25.216 Program sphinx-build found: NO 00:02:25.216 Configuring rte_build_config.h using configuration 00:02:25.216 Message: 00:02:25.216 ================= 00:02:25.216 Applications Enabled 00:02:25.216 ================= 00:02:25.216 00:02:25.216 apps: 00:02:25.216 00:02:25.216 00:02:25.216 Message: 00:02:25.216 ================= 00:02:25.216 Libraries Enabled 00:02:25.216 ================= 00:02:25.216 00:02:25.216 libs: 00:02:25.216 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:25.216 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:25.216 cryptodev, dmadev, power, reorder, security, vhost, 00:02:25.216 00:02:25.216 Message: 00:02:25.216 =============== 00:02:25.216 Drivers Enabled 00:02:25.216 =============== 00:02:25.216 00:02:25.216 common: 00:02:25.216 00:02:25.216 bus: 00:02:25.216 pci, vdev, 00:02:25.216 mempool: 00:02:25.216 ring, 00:02:25.216 dma: 00:02:25.216 00:02:25.216 net: 00:02:25.216 00:02:25.216 crypto: 00:02:25.216 00:02:25.216 compress: 00:02:25.216 00:02:25.216 vdpa: 00:02:25.216 00:02:25.216 00:02:25.216 Message: 00:02:25.216 ================= 00:02:25.216 Content Skipped 00:02:25.216 ================= 00:02:25.216 00:02:25.216 apps: 00:02:25.216 dumpcap: explicitly disabled via build config 00:02:25.216 graph: explicitly disabled via build config 00:02:25.216 pdump: explicitly disabled via build config 00:02:25.216 proc-info: explicitly disabled via build config 00:02:25.216 test-acl: explicitly disabled via build config 00:02:25.216 test-bbdev: explicitly disabled via build config 00:02:25.216 test-cmdline: explicitly 
disabled via build config 00:02:25.216 test-compress-perf: explicitly disabled via build config 00:02:25.216 test-crypto-perf: explicitly disabled via build config 00:02:25.217 test-dma-perf: explicitly disabled via build config 00:02:25.217 test-eventdev: explicitly disabled via build config 00:02:25.217 test-fib: explicitly disabled via build config 00:02:25.217 test-flow-perf: explicitly disabled via build config 00:02:25.217 test-gpudev: explicitly disabled via build config 00:02:25.217 test-mldev: explicitly disabled via build config 00:02:25.217 test-pipeline: explicitly disabled via build config 00:02:25.217 test-pmd: explicitly disabled via build config 00:02:25.217 test-regex: explicitly disabled via build config 00:02:25.217 test-sad: explicitly disabled via build config 00:02:25.217 test-security-perf: explicitly disabled via build config 00:02:25.217 00:02:25.217 libs: 00:02:25.217 metrics: explicitly disabled via build config 00:02:25.217 acl: explicitly disabled via build config 00:02:25.217 bbdev: explicitly disabled via build config 00:02:25.217 bitratestats: explicitly disabled via build config 00:02:25.217 bpf: explicitly disabled via build config 00:02:25.217 cfgfile: explicitly disabled via build config 00:02:25.217 distributor: explicitly disabled via build config 00:02:25.217 efd: explicitly disabled via build config 00:02:25.217 eventdev: explicitly disabled via build config 00:02:25.217 dispatcher: explicitly disabled via build config 00:02:25.217 gpudev: explicitly disabled via build config 00:02:25.217 gro: explicitly disabled via build config 00:02:25.217 gso: explicitly disabled via build config 00:02:25.217 ip_frag: explicitly disabled via build config 00:02:25.217 jobstats: explicitly disabled via build config 00:02:25.217 latencystats: explicitly disabled via build config 00:02:25.217 lpm: explicitly disabled via build config 00:02:25.217 member: explicitly disabled via build config 00:02:25.217 pcapng: explicitly disabled via build config 00:02:25.217 rawdev: explicitly disabled via build config 00:02:25.217 regexdev: explicitly disabled via build config 00:02:25.217 mldev: explicitly disabled via build config 00:02:25.217 rib: explicitly disabled via build config 00:02:25.217 sched: explicitly disabled via build config 00:02:25.217 stack: explicitly disabled via build config 00:02:25.217 ipsec: explicitly disabled via build config 00:02:25.217 pdcp: explicitly disabled via build config 00:02:25.217 fib: explicitly disabled via build config 00:02:25.217 port: explicitly disabled via build config 00:02:25.217 pdump: explicitly disabled via build config 00:02:25.217 table: explicitly disabled via build config 00:02:25.217 pipeline: explicitly disabled via build config 00:02:25.217 graph: explicitly disabled via build config 00:02:25.217 node: explicitly disabled via build config 00:02:25.217 00:02:25.217 drivers: 00:02:25.217 common/cpt: not in enabled drivers build config 00:02:25.217 common/dpaax: not in enabled drivers build config 00:02:25.217 common/iavf: not in enabled drivers build config 00:02:25.217 common/idpf: not in enabled drivers build config 00:02:25.217 common/mvep: not in enabled drivers build config 00:02:25.217 common/octeontx: not in enabled drivers build config 00:02:25.217 bus/auxiliary: not in enabled drivers build config 00:02:25.217 bus/cdx: not in enabled drivers build config 00:02:25.217 bus/dpaa: not in enabled drivers build config 00:02:25.217 bus/fslmc: not in enabled drivers build config 00:02:25.217 bus/ifpga: not in enabled 
drivers build config 00:02:25.217 bus/platform: not in enabled drivers build config 00:02:25.217 bus/vmbus: not in enabled drivers build config 00:02:25.217 common/cnxk: not in enabled drivers build config 00:02:25.217 common/mlx5: not in enabled drivers build config 00:02:25.217 common/nfp: not in enabled drivers build config 00:02:25.217 common/qat: not in enabled drivers build config 00:02:25.217 common/sfc_efx: not in enabled drivers build config 00:02:25.217 mempool/bucket: not in enabled drivers build config 00:02:25.217 mempool/cnxk: not in enabled drivers build config 00:02:25.217 mempool/dpaa: not in enabled drivers build config 00:02:25.217 mempool/dpaa2: not in enabled drivers build config 00:02:25.217 mempool/octeontx: not in enabled drivers build config 00:02:25.217 mempool/stack: not in enabled drivers build config 00:02:25.217 dma/cnxk: not in enabled drivers build config 00:02:25.217 dma/dpaa: not in enabled drivers build config 00:02:25.217 dma/dpaa2: not in enabled drivers build config 00:02:25.217 dma/hisilicon: not in enabled drivers build config 00:02:25.217 dma/idxd: not in enabled drivers build config 00:02:25.217 dma/ioat: not in enabled drivers build config 00:02:25.217 dma/skeleton: not in enabled drivers build config 00:02:25.217 net/af_packet: not in enabled drivers build config 00:02:25.217 net/af_xdp: not in enabled drivers build config 00:02:25.217 net/ark: not in enabled drivers build config 00:02:25.217 net/atlantic: not in enabled drivers build config 00:02:25.217 net/avp: not in enabled drivers build config 00:02:25.217 net/axgbe: not in enabled drivers build config 00:02:25.217 net/bnx2x: not in enabled drivers build config 00:02:25.217 net/bnxt: not in enabled drivers build config 00:02:25.217 net/bonding: not in enabled drivers build config 00:02:25.217 net/cnxk: not in enabled drivers build config 00:02:25.217 net/cpfl: not in enabled drivers build config 00:02:25.217 net/cxgbe: not in enabled drivers build config 00:02:25.217 net/dpaa: not in enabled drivers build config 00:02:25.217 net/dpaa2: not in enabled drivers build config 00:02:25.217 net/e1000: not in enabled drivers build config 00:02:25.217 net/ena: not in enabled drivers build config 00:02:25.217 net/enetc: not in enabled drivers build config 00:02:25.217 net/enetfec: not in enabled drivers build config 00:02:25.217 net/enic: not in enabled drivers build config 00:02:25.217 net/failsafe: not in enabled drivers build config 00:02:25.217 net/fm10k: not in enabled drivers build config 00:02:25.217 net/gve: not in enabled drivers build config 00:02:25.217 net/hinic: not in enabled drivers build config 00:02:25.217 net/hns3: not in enabled drivers build config 00:02:25.217 net/i40e: not in enabled drivers build config 00:02:25.217 net/iavf: not in enabled drivers build config 00:02:25.217 net/ice: not in enabled drivers build config 00:02:25.217 net/idpf: not in enabled drivers build config 00:02:25.217 net/igc: not in enabled drivers build config 00:02:25.217 net/ionic: not in enabled drivers build config 00:02:25.217 net/ipn3ke: not in enabled drivers build config 00:02:25.217 net/ixgbe: not in enabled drivers build config 00:02:25.217 net/mana: not in enabled drivers build config 00:02:25.217 net/memif: not in enabled drivers build config 00:02:25.217 net/mlx4: not in enabled drivers build config 00:02:25.217 net/mlx5: not in enabled drivers build config 00:02:25.217 net/mvneta: not in enabled drivers build config 00:02:25.217 net/mvpp2: not in enabled drivers build config 00:02:25.217 
net/netvsc: not in enabled drivers build config 00:02:25.217 net/nfb: not in enabled drivers build config 00:02:25.217 net/nfp: not in enabled drivers build config 00:02:25.217 net/ngbe: not in enabled drivers build config 00:02:25.217 net/null: not in enabled drivers build config 00:02:25.217 net/octeontx: not in enabled drivers build config 00:02:25.217 net/octeon_ep: not in enabled drivers build config 00:02:25.217 net/pcap: not in enabled drivers build config 00:02:25.217 net/pfe: not in enabled drivers build config 00:02:25.217 net/qede: not in enabled drivers build config 00:02:25.217 net/ring: not in enabled drivers build config 00:02:25.217 net/sfc: not in enabled drivers build config 00:02:25.217 net/softnic: not in enabled drivers build config 00:02:25.217 net/tap: not in enabled drivers build config 00:02:25.217 net/thunderx: not in enabled drivers build config 00:02:25.217 net/txgbe: not in enabled drivers build config 00:02:25.217 net/vdev_netvsc: not in enabled drivers build config 00:02:25.217 net/vhost: not in enabled drivers build config 00:02:25.217 net/virtio: not in enabled drivers build config 00:02:25.217 net/vmxnet3: not in enabled drivers build config 00:02:25.217 raw/*: missing internal dependency, "rawdev" 00:02:25.217 crypto/armv8: not in enabled drivers build config 00:02:25.217 crypto/bcmfs: not in enabled drivers build config 00:02:25.217 crypto/caam_jr: not in enabled drivers build config 00:02:25.217 crypto/ccp: not in enabled drivers build config 00:02:25.217 crypto/cnxk: not in enabled drivers build config 00:02:25.217 crypto/dpaa_sec: not in enabled drivers build config 00:02:25.217 crypto/dpaa2_sec: not in enabled drivers build config 00:02:25.217 crypto/ipsec_mb: not in enabled drivers build config 00:02:25.217 crypto/mlx5: not in enabled drivers build config 00:02:25.217 crypto/mvsam: not in enabled drivers build config 00:02:25.217 crypto/nitrox: not in enabled drivers build config 00:02:25.217 crypto/null: not in enabled drivers build config 00:02:25.217 crypto/octeontx: not in enabled drivers build config 00:02:25.217 crypto/openssl: not in enabled drivers build config 00:02:25.217 crypto/scheduler: not in enabled drivers build config 00:02:25.217 crypto/uadk: not in enabled drivers build config 00:02:25.217 crypto/virtio: not in enabled drivers build config 00:02:25.217 compress/isal: not in enabled drivers build config 00:02:25.217 compress/mlx5: not in enabled drivers build config 00:02:25.217 compress/octeontx: not in enabled drivers build config 00:02:25.217 compress/zlib: not in enabled drivers build config 00:02:25.217 regex/*: missing internal dependency, "regexdev" 00:02:25.217 ml/*: missing internal dependency, "mldev" 00:02:25.217 vdpa/ifc: not in enabled drivers build config 00:02:25.217 vdpa/mlx5: not in enabled drivers build config 00:02:25.217 vdpa/nfp: not in enabled drivers build config 00:02:25.217 vdpa/sfc: not in enabled drivers build config 00:02:25.217 event/*: missing internal dependency, "eventdev" 00:02:25.217 baseband/*: missing internal dependency, "bbdev" 00:02:25.217 gpu/*: missing internal dependency, "gpudev" 00:02:25.217 00:02:25.217 00:02:25.479 Build targets in project: 84 00:02:25.479 00:02:25.479 DPDK 23.11.0 00:02:25.479 00:02:25.479 User defined options 00:02:25.479 buildtype : debug 00:02:25.479 default_library : shared 00:02:25.479 libdir : lib 00:02:25.479 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:25.479 b_sanitize : address 00:02:25.479 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon 
-Wno-stringop-overread -Wno-array-bounds 00:02:25.479 c_link_args : 00:02:25.479 cpu_instruction_set: native 00:02:25.479 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:25.479 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:25.479 enable_docs : false 00:02:25.479 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:25.479 enable_kmods : false 00:02:25.479 tests : false 00:02:25.479 00:02:25.479 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:26.051 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:26.051 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:26.051 [2/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:26.051 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:26.051 [4/264] Linking static target lib/librte_kvargs.a 00:02:26.051 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:26.051 [6/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:26.051 [7/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:26.051 [8/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:26.051 [9/264] Linking static target lib/librte_log.a 00:02:26.051 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:26.312 [11/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:26.312 [12/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.312 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:26.312 [14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:26.571 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:26.571 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:26.571 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:26.571 [18/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:26.571 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:26.571 [20/264] Linking static target lib/librte_telemetry.a 00:02:26.571 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:26.831 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:26.831 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:26.832 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:26.832 [25/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.832 [26/264] Linking target lib/librte_log.so.24.0 00:02:26.832 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:26.832 [28/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:27.099 [29/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:27.099 
[30/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:27.099 [31/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:27.099 [32/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:27.099 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:27.099 [34/264] Linking target lib/librte_kvargs.so.24.0 00:02:27.099 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:27.357 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:27.357 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:27.357 [38/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:27.357 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:27.357 [40/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.357 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:27.357 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:27.357 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:27.357 [44/264] Linking target lib/librte_telemetry.so.24.0 00:02:27.357 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:27.357 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:27.357 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:27.615 [48/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:27.615 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:27.615 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:27.615 [51/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:27.873 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:27.873 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:27.873 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:27.873 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:27.873 [56/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:27.873 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:27.873 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:27.873 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:27.873 [60/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:27.873 [61/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:28.131 [62/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:28.131 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:28.131 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:28.131 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:28.131 [66/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:28.131 [67/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:28.131 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:28.390 [69/264] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:28.390 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:28.390 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:28.390 [72/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:28.390 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:28.390 [74/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:28.390 [75/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:28.390 [76/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:28.390 [77/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:28.648 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:28.648 [79/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:28.648 [80/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:28.648 [81/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:28.906 [82/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:28.906 [83/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:28.906 [84/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:28.906 [85/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:28.906 [86/264] Linking static target lib/librte_ring.a 00:02:28.906 [87/264] Linking static target lib/librte_eal.a 00:02:28.906 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:29.165 [89/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:29.165 [90/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:29.165 [91/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:29.165 [92/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:29.165 [93/264] Linking static target lib/librte_mempool.a 00:02:29.423 [94/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:29.423 [95/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:29.423 [96/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.423 [97/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:29.423 [98/264] Linking static target lib/librte_rcu.a 00:02:29.681 [99/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:29.681 [100/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:29.681 [101/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:29.681 [102/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:29.681 [103/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:29.939 [104/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.939 [105/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:29.939 [106/264] Linking static target lib/librte_net.a 00:02:29.939 [107/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:29.939 [108/264] Linking static target lib/librte_mbuf.a 00:02:29.939 [109/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:29.939 [110/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:30.198 [111/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:30.198 
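The compile steps running here were set up by the "User defined options" summary printed above. A minimal sketch of re-creating that configuration by hand, with option values copied from the summary (the job actually drives this through SPDK's build scripts, whose exact command is not shown in this log; disable_apps, disable_libs and c_args are omitted for brevity, their full values appear in the summary):

    # Configure DPDK the way this job did; everything not listed stays at defaults.
    cd /home/vagrant/spdk_repo/spdk/dpdk
    meson setup build-tmp \
        --buildtype=debug --default-library=shared --libdir=lib \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Db_sanitize=address \
        -Dcpu_instruction_set=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false -Dtests=false

    # The bracketed [n/264] progress markers are ninja's; the build can be
    # resumed with the backend command meson reports later in this log:
    ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10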
[112/264] Linking static target lib/librte_meter.a 00:02:30.198 [113/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:30.198 [114/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.198 [115/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.198 [116/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:30.456 [117/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.714 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:30.714 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:30.973 [120/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.973 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:30.973 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:30.973 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:31.230 [124/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:31.230 [125/264] Linking static target lib/librte_pci.a 00:02:31.230 [126/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:31.230 [127/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:31.230 [128/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:31.230 [129/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:31.230 [130/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:31.230 [131/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:31.489 [132/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:31.489 [133/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:31.489 [134/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:31.489 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:31.489 [136/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.489 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:31.489 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:31.489 [139/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:31.489 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:31.489 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:31.489 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:31.747 [143/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:31.747 [144/264] Linking static target lib/librte_cmdline.a 00:02:31.747 [145/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:32.005 [146/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:32.005 [147/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:32.005 [148/264] Linking static target lib/librte_timer.a 00:02:32.005 [149/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:32.263 [150/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:32.263 [151/264] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:32.521 [152/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:32.521 [153/264] Linking static target lib/librte_compressdev.a 00:02:32.521 [154/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.521 [155/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:32.521 [156/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:32.521 [157/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:32.779 [158/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:32.779 [159/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:32.779 [160/264] Linking static target lib/librte_hash.a 00:02:32.779 [161/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:32.779 [162/264] Linking static target lib/librte_dmadev.a 00:02:32.779 [163/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:33.036 [164/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:33.036 [165/264] Linking static target lib/librte_ethdev.a 00:02:33.036 [166/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:33.036 [167/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:33.036 [168/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:33.036 [169/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.036 [170/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.295 [171/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:33.295 [172/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:33.295 [173/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:33.295 [174/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.295 [175/264] Linking static target lib/librte_cryptodev.a 00:02:33.553 [176/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:33.553 [177/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:33.553 [178/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:33.553 [179/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:33.553 [180/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.811 [181/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:33.811 [182/264] Linking static target lib/librte_power.a 00:02:33.811 [183/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:33.811 [184/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:33.811 [185/264] Linking static target lib/librte_reorder.a 00:02:33.811 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:34.070 [187/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:34.070 [188/264] Linking static target lib/librte_security.a 00:02:34.070 [189/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:34.328 [190/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.328 [191/264] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:34.586 [192/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.586 [193/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:34.586 [194/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:34.586 [195/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:34.843 [196/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.843 [197/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:34.843 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:34.843 [199/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:35.101 [200/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:35.101 [201/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:35.359 [202/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:35.359 [203/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:35.359 [204/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:35.359 [205/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:35.359 [206/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.359 [207/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:35.359 [208/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:35.359 [209/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:35.617 [210/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:35.617 [211/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:35.617 [212/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:35.617 [213/264] Linking static target drivers/librte_bus_vdev.a 00:02:35.617 [214/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:35.617 [215/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:35.617 [216/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:35.617 [217/264] Linking static target drivers/librte_bus_pci.a 00:02:35.617 [218/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:35.617 [219/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:35.617 [220/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:35.617 [221/264] Linking static target drivers/librte_mempool_ring.a 00:02:35.875 [222/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.875 [223/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.441 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:37.374 [225/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.374 [226/264] Linking target lib/librte_eal.so.24.0 00:02:37.374 [227/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:37.374 [228/264] Linking target 
lib/librte_timer.so.24.0 00:02:37.374 [229/264] Linking target lib/librte_ring.so.24.0 00:02:37.374 [230/264] Linking target lib/librte_meter.so.24.0 00:02:37.374 [231/264] Linking target lib/librte_dmadev.so.24.0 00:02:37.374 [232/264] Linking target lib/librte_pci.so.24.0 00:02:37.374 [233/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:37.632 [234/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:37.632 [235/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:37.632 [236/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:37.632 [237/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:37.632 [238/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:37.632 [239/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:37.632 [240/264] Linking target lib/librte_mempool.so.24.0 00:02:37.632 [241/264] Linking target lib/librte_rcu.so.24.0 00:02:37.632 [242/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:37.632 [243/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:37.632 [244/264] Linking target lib/librte_mbuf.so.24.0 00:02:37.632 [245/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:37.891 [246/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:37.891 [247/264] Linking target lib/librte_cryptodev.so.24.0 00:02:37.891 [248/264] Linking target lib/librte_reorder.so.24.0 00:02:37.891 [249/264] Linking target lib/librte_compressdev.so.24.0 00:02:37.891 [250/264] Linking target lib/librte_net.so.24.0 00:02:38.149 [251/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:38.149 [252/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:38.149 [253/264] Linking target lib/librte_hash.so.24.0 00:02:38.149 [254/264] Linking target lib/librte_cmdline.so.24.0 00:02:38.149 [255/264] Linking target lib/librte_security.so.24.0 00:02:38.149 [256/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:38.716 [257/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.716 [258/264] Linking target lib/librte_ethdev.so.24.0 00:02:38.974 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:38.974 [260/264] Linking target lib/librte_power.so.24.0 00:02:39.232 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:39.232 [262/264] Linking static target lib/librte_vhost.a 00:02:40.651 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.651 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:40.651 INFO: autodetecting backend as ninja 00:02:40.651 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:41.584 CC lib/log/log.o 00:02:41.584 CC lib/log/log_flags.o 00:02:41.584 CC lib/ut/ut.o 00:02:41.584 CC lib/log/log_deprecated.o 00:02:41.584 CC lib/ut_mock/mock.o 00:02:41.584 LIB libspdk_ut_mock.a 00:02:41.584 LIB libspdk_ut.a 00:02:41.584 LIB libspdk_log.a 00:02:41.584 SO libspdk_ut_mock.so.5.0 00:02:41.584 SO libspdk_ut.so.1.0 00:02:41.842 SO libspdk_log.so.6.1 00:02:41.842 SYMLINK libspdk_ut_mock.so 00:02:41.842 SYMLINK libspdk_ut.so 00:02:41.842 
SYMLINK libspdk_log.so 00:02:41.842 CXX lib/trace_parser/trace.o 00:02:41.842 CC lib/ioat/ioat.o 00:02:41.842 CC lib/dma/dma.o 00:02:41.842 CC lib/util/base64.o 00:02:41.842 CC lib/util/bit_array.o 00:02:41.842 CC lib/util/cpuset.o 00:02:41.842 CC lib/util/crc32.o 00:02:41.842 CC lib/util/crc32c.o 00:02:41.842 CC lib/util/crc16.o 00:02:41.842 CC lib/vfio_user/host/vfio_user_pci.o 00:02:42.101 CC lib/util/crc32_ieee.o 00:02:42.101 CC lib/util/crc64.o 00:02:42.101 CC lib/util/dif.o 00:02:42.101 CC lib/util/fd.o 00:02:42.101 LIB libspdk_dma.a 00:02:42.101 SO libspdk_dma.so.3.0 00:02:42.101 CC lib/util/file.o 00:02:42.101 CC lib/util/hexlify.o 00:02:42.101 CC lib/util/iov.o 00:02:42.101 CC lib/util/math.o 00:02:42.101 SYMLINK libspdk_dma.so 00:02:42.101 CC lib/util/pipe.o 00:02:42.101 CC lib/util/strerror_tls.o 00:02:42.101 LIB libspdk_ioat.a 00:02:42.101 CC lib/vfio_user/host/vfio_user.o 00:02:42.101 SO libspdk_ioat.so.6.0 00:02:42.101 CC lib/util/string.o 00:02:42.101 CC lib/util/uuid.o 00:02:42.101 CC lib/util/fd_group.o 00:02:42.101 SYMLINK libspdk_ioat.so 00:02:42.101 CC lib/util/xor.o 00:02:42.359 CC lib/util/zipf.o 00:02:42.359 LIB libspdk_vfio_user.a 00:02:42.359 SO libspdk_vfio_user.so.4.0 00:02:42.359 SYMLINK libspdk_vfio_user.so 00:02:42.617 LIB libspdk_util.a 00:02:42.617 LIB libspdk_trace_parser.a 00:02:42.617 SO libspdk_trace_parser.so.4.0 00:02:42.617 SO libspdk_util.so.8.0 00:02:42.617 SYMLINK libspdk_trace_parser.so 00:02:42.876 SYMLINK libspdk_util.so 00:02:42.876 CC lib/vmd/vmd.o 00:02:42.876 CC lib/rdma/common.o 00:02:42.876 CC lib/rdma/rdma_verbs.o 00:02:42.876 CC lib/vmd/led.o 00:02:42.876 CC lib/env_dpdk/env.o 00:02:42.876 CC lib/env_dpdk/pci.o 00:02:42.876 CC lib/env_dpdk/memory.o 00:02:42.876 CC lib/conf/conf.o 00:02:42.876 CC lib/json/json_parse.o 00:02:42.876 CC lib/idxd/idxd.o 00:02:42.876 CC lib/idxd/idxd_user.o 00:02:43.134 CC lib/idxd/idxd_kernel.o 00:02:43.134 LIB libspdk_conf.a 00:02:43.134 CC lib/json/json_util.o 00:02:43.134 SO libspdk_conf.so.5.0 00:02:43.134 LIB libspdk_rdma.a 00:02:43.134 CC lib/json/json_write.o 00:02:43.134 SYMLINK libspdk_conf.so 00:02:43.134 CC lib/env_dpdk/init.o 00:02:43.134 SO libspdk_rdma.so.5.0 00:02:43.134 CC lib/env_dpdk/threads.o 00:02:43.134 SYMLINK libspdk_rdma.so 00:02:43.134 CC lib/env_dpdk/pci_ioat.o 00:02:43.134 CC lib/env_dpdk/pci_virtio.o 00:02:43.392 CC lib/env_dpdk/pci_vmd.o 00:02:43.392 CC lib/env_dpdk/pci_idxd.o 00:02:43.392 CC lib/env_dpdk/pci_event.o 00:02:43.392 CC lib/env_dpdk/sigbus_handler.o 00:02:43.392 CC lib/env_dpdk/pci_dpdk.o 00:02:43.392 LIB libspdk_json.a 00:02:43.392 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:43.392 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:43.392 SO libspdk_json.so.5.1 00:02:43.392 LIB libspdk_idxd.a 00:02:43.392 SYMLINK libspdk_json.so 00:02:43.392 SO libspdk_idxd.so.11.0 00:02:43.392 LIB libspdk_vmd.a 00:02:43.650 SYMLINK libspdk_idxd.so 00:02:43.650 SO libspdk_vmd.so.5.0 00:02:43.650 CC lib/jsonrpc/jsonrpc_server.o 00:02:43.650 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:43.650 CC lib/jsonrpc/jsonrpc_client.o 00:02:43.650 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:43.650 SYMLINK libspdk_vmd.so 00:02:43.909 LIB libspdk_jsonrpc.a 00:02:43.909 SO libspdk_jsonrpc.so.5.1 00:02:43.909 SYMLINK libspdk_jsonrpc.so 00:02:43.909 LIB libspdk_env_dpdk.a 00:02:44.167 CC lib/rpc/rpc.o 00:02:44.167 SO libspdk_env_dpdk.so.13.0 00:02:44.167 SYMLINK libspdk_env_dpdk.so 00:02:44.167 LIB libspdk_rpc.a 00:02:44.426 SO libspdk_rpc.so.5.0 00:02:44.426 SYMLINK libspdk_rpc.so 00:02:44.426 CC lib/notify/notify.o 
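From this point the output is SPDK's own make, which logs one short tag per action (CC, CXX, LIB, SO, SYMLINK, LINK) instead of full command lines. A rough expansion of the tags seen just above, using names taken from the log but illustrative flags only (the real invocations add the configured include paths, warning flags, and the address sanitizer):

    cc -Iinclude -c lib/log/log.c -o lib/log/log.o     # CC: compile one object
    ar crs libspdk_log.a lib/log/log.o                 # LIB: archive the static library
    cc -shared -o libspdk_log.so.6.1 lib/log/log.o     # SO: link the versioned shared object
    ln -sf libspdk_log.so.6.1 libspdk_log.so           # SYMLINK: unversioned alias for the .so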
00:02:44.426 CC lib/notify/notify_rpc.o 00:02:44.426 CC lib/trace/trace_flags.o 00:02:44.426 CC lib/trace/trace.o 00:02:44.426 CC lib/trace/trace_rpc.o 00:02:44.426 CC lib/sock/sock.o 00:02:44.426 CC lib/sock/sock_rpc.o 00:02:44.684 LIB libspdk_notify.a 00:02:44.684 SO libspdk_notify.so.5.0 00:02:44.684 SYMLINK libspdk_notify.so 00:02:44.684 LIB libspdk_trace.a 00:02:44.684 SO libspdk_trace.so.9.0 00:02:44.684 SYMLINK libspdk_trace.so 00:02:44.684 LIB libspdk_sock.a 00:02:44.941 SO libspdk_sock.so.8.0 00:02:44.941 SYMLINK libspdk_sock.so 00:02:44.941 CC lib/thread/thread.o 00:02:44.941 CC lib/thread/iobuf.o 00:02:44.941 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:44.941 CC lib/nvme/nvme_ctrlr.o 00:02:44.941 CC lib/nvme/nvme_fabric.o 00:02:44.941 CC lib/nvme/nvme_qpair.o 00:02:44.941 CC lib/nvme/nvme_ns_cmd.o 00:02:44.941 CC lib/nvme/nvme_ns.o 00:02:44.941 CC lib/nvme/nvme_pcie_common.o 00:02:44.941 CC lib/nvme/nvme_pcie.o 00:02:45.199 CC lib/nvme/nvme.o 00:02:45.458 CC lib/nvme/nvme_quirks.o 00:02:45.716 CC lib/nvme/nvme_transport.o 00:02:45.716 CC lib/nvme/nvme_discovery.o 00:02:45.716 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:45.716 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:45.975 CC lib/nvme/nvme_tcp.o 00:02:45.975 CC lib/nvme/nvme_opal.o 00:02:45.975 CC lib/nvme/nvme_io_msg.o 00:02:45.975 CC lib/nvme/nvme_poll_group.o 00:02:46.234 LIB libspdk_thread.a 00:02:46.234 SO libspdk_thread.so.9.0 00:02:46.234 CC lib/nvme/nvme_zns.o 00:02:46.234 CC lib/nvme/nvme_cuse.o 00:02:46.234 SYMLINK libspdk_thread.so 00:02:46.234 CC lib/nvme/nvme_vfio_user.o 00:02:46.234 CC lib/nvme/nvme_rdma.o 00:02:46.492 CC lib/accel/accel.o 00:02:46.492 CC lib/accel/accel_rpc.o 00:02:46.492 CC lib/accel/accel_sw.o 00:02:46.492 CC lib/blob/blobstore.o 00:02:46.751 CC lib/init/json_config.o 00:02:46.751 CC lib/blob/request.o 00:02:46.751 CC lib/blob/zeroes.o 00:02:46.751 CC lib/init/subsystem.o 00:02:46.751 CC lib/init/subsystem_rpc.o 00:02:47.010 CC lib/blob/blob_bs_dev.o 00:02:47.010 CC lib/init/rpc.o 00:02:47.010 CC lib/virtio/virtio_pci.o 00:02:47.010 CC lib/virtio/virtio.o 00:02:47.010 CC lib/virtio/virtio_vhost_user.o 00:02:47.010 CC lib/virtio/virtio_vfio_user.o 00:02:47.010 LIB libspdk_init.a 00:02:47.010 SO libspdk_init.so.4.0 00:02:47.269 SYMLINK libspdk_init.so 00:02:47.269 LIB libspdk_accel.a 00:02:47.269 CC lib/event/reactor.o 00:02:47.269 CC lib/event/app.o 00:02:47.269 SO libspdk_accel.so.14.0 00:02:47.269 CC lib/event/log_rpc.o 00:02:47.269 CC lib/event/app_rpc.o 00:02:47.269 CC lib/event/scheduler_static.o 00:02:47.269 SYMLINK libspdk_accel.so 00:02:47.269 LIB libspdk_virtio.a 00:02:47.269 SO libspdk_virtio.so.6.0 00:02:47.528 LIB libspdk_nvme.a 00:02:47.528 CC lib/bdev/bdev.o 00:02:47.528 CC lib/bdev/bdev_rpc.o 00:02:47.528 CC lib/bdev/part.o 00:02:47.528 CC lib/bdev/bdev_zone.o 00:02:47.528 SYMLINK libspdk_virtio.so 00:02:47.528 CC lib/bdev/scsi_nvme.o 00:02:47.528 SO libspdk_nvme.so.12.0 00:02:47.787 LIB libspdk_event.a 00:02:47.787 SYMLINK libspdk_nvme.so 00:02:47.787 SO libspdk_event.so.12.0 00:02:47.787 SYMLINK libspdk_event.so 00:02:49.162 LIB libspdk_blob.a 00:02:49.162 SO libspdk_blob.so.10.1 00:02:49.162 SYMLINK libspdk_blob.so 00:02:49.420 CC lib/blobfs/blobfs.o 00:02:49.420 CC lib/blobfs/tree.o 00:02:49.420 CC lib/lvol/lvol.o 00:02:49.986 LIB libspdk_bdev.a 00:02:49.986 SO libspdk_bdev.so.14.0 00:02:49.986 SYMLINK libspdk_bdev.so 00:02:50.244 LIB libspdk_blobfs.a 00:02:50.244 SO libspdk_blobfs.so.9.0 00:02:50.244 CC lib/nvmf/ctrlr.o 00:02:50.244 CC lib/nvmf/ctrlr_discovery.o 00:02:50.244 CC 
lib/nvmf/ctrlr_bdev.o 00:02:50.244 CC lib/ftl/ftl_core.o 00:02:50.244 CC lib/ublk/ublk.o 00:02:50.244 CC lib/nbd/nbd.o 00:02:50.244 CC lib/ublk/ublk_rpc.o 00:02:50.244 CC lib/scsi/dev.o 00:02:50.244 SYMLINK libspdk_blobfs.so 00:02:50.244 CC lib/nvmf/subsystem.o 00:02:50.244 LIB libspdk_lvol.a 00:02:50.244 SO libspdk_lvol.so.9.1 00:02:50.244 CC lib/ftl/ftl_init.o 00:02:50.244 SYMLINK libspdk_lvol.so 00:02:50.244 CC lib/ftl/ftl_layout.o 00:02:50.501 CC lib/scsi/lun.o 00:02:50.501 CC lib/scsi/port.o 00:02:50.501 CC lib/ftl/ftl_debug.o 00:02:50.501 CC lib/ftl/ftl_io.o 00:02:50.501 CC lib/nbd/nbd_rpc.o 00:02:50.501 CC lib/nvmf/nvmf.o 00:02:50.501 CC lib/nvmf/nvmf_rpc.o 00:02:50.501 CC lib/nvmf/transport.o 00:02:50.758 CC lib/scsi/scsi.o 00:02:50.758 LIB libspdk_nbd.a 00:02:50.758 LIB libspdk_ublk.a 00:02:50.758 SO libspdk_nbd.so.6.0 00:02:50.758 CC lib/ftl/ftl_sb.o 00:02:50.758 SO libspdk_ublk.so.2.0 00:02:50.758 SYMLINK libspdk_nbd.so 00:02:50.758 CC lib/ftl/ftl_l2p.o 00:02:50.758 SYMLINK libspdk_ublk.so 00:02:50.758 CC lib/scsi/scsi_bdev.o 00:02:50.758 CC lib/scsi/scsi_pr.o 00:02:50.758 CC lib/scsi/scsi_rpc.o 00:02:50.758 CC lib/scsi/task.o 00:02:51.015 CC lib/ftl/ftl_l2p_flat.o 00:02:51.015 CC lib/nvmf/tcp.o 00:02:51.015 CC lib/nvmf/rdma.o 00:02:51.015 CC lib/ftl/ftl_nv_cache.o 00:02:51.016 CC lib/ftl/ftl_band.o 00:02:51.274 LIB libspdk_scsi.a 00:02:51.274 CC lib/ftl/ftl_band_ops.o 00:02:51.274 CC lib/ftl/ftl_writer.o 00:02:51.274 SO libspdk_scsi.so.8.0 00:02:51.274 CC lib/ftl/ftl_rq.o 00:02:51.274 SYMLINK libspdk_scsi.so 00:02:51.532 CC lib/ftl/ftl_reloc.o 00:02:51.532 CC lib/iscsi/conn.o 00:02:51.532 CC lib/ftl/ftl_l2p_cache.o 00:02:51.532 CC lib/ftl/ftl_p2l.o 00:02:51.532 CC lib/ftl/mngt/ftl_mngt.o 00:02:51.532 CC lib/vhost/vhost.o 00:02:51.532 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:51.790 CC lib/iscsi/init_grp.o 00:02:51.790 CC lib/iscsi/iscsi.o 00:02:51.790 CC lib/vhost/vhost_rpc.o 00:02:51.790 CC lib/vhost/vhost_scsi.o 00:02:51.790 CC lib/vhost/vhost_blk.o 00:02:51.790 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:52.047 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:52.047 CC lib/vhost/rte_vhost_user.o 00:02:52.047 CC lib/iscsi/md5.o 00:02:52.047 CC lib/iscsi/param.o 00:02:52.047 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:52.047 CC lib/iscsi/portal_grp.o 00:02:52.048 CC lib/iscsi/tgt_node.o 00:02:52.305 CC lib/iscsi/iscsi_subsystem.o 00:02:52.305 CC lib/iscsi/iscsi_rpc.o 00:02:52.305 CC lib/iscsi/task.o 00:02:52.563 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:52.563 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:52.563 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:52.563 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:52.563 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:52.563 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:52.563 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:52.563 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:52.563 CC lib/ftl/utils/ftl_conf.o 00:02:52.563 CC lib/ftl/utils/ftl_md.o 00:02:52.563 LIB libspdk_nvmf.a 00:02:52.563 CC lib/ftl/utils/ftl_mempool.o 00:02:52.563 CC lib/ftl/utils/ftl_bitmap.o 00:02:52.822 CC lib/ftl/utils/ftl_property.o 00:02:52.822 LIB libspdk_vhost.a 00:02:52.822 SO libspdk_nvmf.so.17.0 00:02:52.822 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:52.822 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:52.822 SO libspdk_vhost.so.7.1 00:02:52.822 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:52.822 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:52.822 SYMLINK libspdk_vhost.so 00:02:52.822 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:52.822 SYMLINK libspdk_nvmf.so 00:02:52.822 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:52.822 
CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:52.822 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:52.822 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:52.822 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:53.081 CC lib/ftl/base/ftl_base_dev.o 00:02:53.081 CC lib/ftl/base/ftl_base_bdev.o 00:02:53.081 CC lib/ftl/ftl_trace.o 00:02:53.081 LIB libspdk_ftl.a 00:02:53.081 LIB libspdk_iscsi.a 00:02:53.339 SO libspdk_iscsi.so.7.0 00:02:53.339 SO libspdk_ftl.so.8.0 00:02:53.339 SYMLINK libspdk_iscsi.so 00:02:53.597 SYMLINK libspdk_ftl.so 00:02:53.597 CC module/env_dpdk/env_dpdk_rpc.o 00:02:53.597 CC module/accel/dsa/accel_dsa.o 00:02:53.597 CC module/blob/bdev/blob_bdev.o 00:02:53.855 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:53.855 CC module/sock/posix/posix.o 00:02:53.855 CC module/accel/iaa/accel_iaa.o 00:02:53.855 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:53.855 CC module/accel/error/accel_error.o 00:02:53.855 CC module/scheduler/gscheduler/gscheduler.o 00:02:53.855 CC module/accel/ioat/accel_ioat.o 00:02:53.855 LIB libspdk_env_dpdk_rpc.a 00:02:53.855 SO libspdk_env_dpdk_rpc.so.5.0 00:02:53.855 LIB libspdk_scheduler_dpdk_governor.a 00:02:53.855 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:53.855 SYMLINK libspdk_env_dpdk_rpc.so 00:02:53.855 LIB libspdk_scheduler_gscheduler.a 00:02:53.855 CC module/accel/iaa/accel_iaa_rpc.o 00:02:53.855 CC module/accel/error/accel_error_rpc.o 00:02:53.856 CC module/accel/ioat/accel_ioat_rpc.o 00:02:53.856 SO libspdk_scheduler_gscheduler.so.3.0 00:02:53.856 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:53.856 LIB libspdk_scheduler_dynamic.a 00:02:53.856 CC module/accel/dsa/accel_dsa_rpc.o 00:02:53.856 LIB libspdk_blob_bdev.a 00:02:53.856 SO libspdk_scheduler_dynamic.so.3.0 00:02:53.856 SO libspdk_blob_bdev.so.10.1 00:02:53.856 SYMLINK libspdk_scheduler_gscheduler.so 00:02:53.856 SYMLINK libspdk_scheduler_dynamic.so 00:02:54.114 SYMLINK libspdk_blob_bdev.so 00:02:54.114 LIB libspdk_accel_ioat.a 00:02:54.114 LIB libspdk_accel_iaa.a 00:02:54.114 LIB libspdk_accel_dsa.a 00:02:54.114 LIB libspdk_accel_error.a 00:02:54.114 SO libspdk_accel_ioat.so.5.0 00:02:54.114 SO libspdk_accel_iaa.so.2.0 00:02:54.114 SO libspdk_accel_error.so.1.0 00:02:54.114 SO libspdk_accel_dsa.so.4.0 00:02:54.114 SYMLINK libspdk_accel_ioat.so 00:02:54.114 SYMLINK libspdk_accel_iaa.so 00:02:54.114 SYMLINK libspdk_accel_dsa.so 00:02:54.114 SYMLINK libspdk_accel_error.so 00:02:54.114 CC module/bdev/gpt/gpt.o 00:02:54.114 CC module/bdev/error/vbdev_error.o 00:02:54.114 CC module/blobfs/bdev/blobfs_bdev.o 00:02:54.114 CC module/bdev/delay/vbdev_delay.o 00:02:54.114 CC module/bdev/null/bdev_null.o 00:02:54.114 CC module/bdev/malloc/bdev_malloc.o 00:02:54.114 CC module/bdev/lvol/vbdev_lvol.o 00:02:54.114 CC module/bdev/nvme/bdev_nvme.o 00:02:54.114 CC module/bdev/passthru/vbdev_passthru.o 00:02:54.372 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:54.372 CC module/bdev/gpt/vbdev_gpt.o 00:02:54.372 CC module/bdev/null/bdev_null_rpc.o 00:02:54.372 LIB libspdk_blobfs_bdev.a 00:02:54.372 CC module/bdev/error/vbdev_error_rpc.o 00:02:54.372 SO libspdk_blobfs_bdev.so.5.0 00:02:54.372 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:54.372 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:54.372 LIB libspdk_sock_posix.a 00:02:54.372 SYMLINK libspdk_blobfs_bdev.so 00:02:54.372 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:54.372 LIB libspdk_bdev_null.a 00:02:54.372 SO libspdk_sock_posix.so.5.0 00:02:54.372 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:54.372 SO libspdk_bdev_null.so.5.0 00:02:54.629 LIB libspdk_bdev_error.a 
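The module/bdev/* objects being compiled here (gpt, error, delay, null, malloc, lvol, nvme, passthru, with an xnvme bdev just below) are selected when the tree is configured. A sketch of a configure line consistent with what this log builds, with flags inferred from the output (debug DPDK with b_sanitize=address above, an xnvme module below) rather than quoted from the job:

    # Assumed flag set; the job's actual configure invocation is not shown in this log.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-asan --with-xnvme
    make -j10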
00:02:54.629 SO libspdk_bdev_error.so.5.0 00:02:54.629 LIB libspdk_bdev_passthru.a 00:02:54.629 SYMLINK libspdk_bdev_null.so 00:02:54.629 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:54.629 SYMLINK libspdk_sock_posix.so 00:02:54.629 LIB libspdk_bdev_gpt.a 00:02:54.629 SO libspdk_bdev_passthru.so.5.0 00:02:54.629 LIB libspdk_bdev_malloc.a 00:02:54.629 SYMLINK libspdk_bdev_error.so 00:02:54.629 SO libspdk_bdev_gpt.so.5.0 00:02:54.629 CC module/bdev/nvme/nvme_rpc.o 00:02:54.629 SO libspdk_bdev_malloc.so.5.0 00:02:54.629 SYMLINK libspdk_bdev_passthru.so 00:02:54.629 CC module/bdev/nvme/bdev_mdns_client.o 00:02:54.629 LIB libspdk_bdev_delay.a 00:02:54.629 CC module/bdev/raid/bdev_raid.o 00:02:54.629 SYMLINK libspdk_bdev_gpt.so 00:02:54.629 SYMLINK libspdk_bdev_malloc.so 00:02:54.629 SO libspdk_bdev_delay.so.5.0 00:02:54.629 SYMLINK libspdk_bdev_delay.so 00:02:54.629 LIB libspdk_bdev_lvol.a 00:02:54.629 CC module/bdev/split/vbdev_split.o 00:02:54.629 CC module/bdev/split/vbdev_split_rpc.o 00:02:54.629 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:54.629 SO libspdk_bdev_lvol.so.5.0 00:02:54.898 CC module/bdev/xnvme/bdev_xnvme.o 00:02:54.898 SYMLINK libspdk_bdev_lvol.so 00:02:54.898 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:02:54.898 CC module/bdev/aio/bdev_aio.o 00:02:54.898 CC module/bdev/aio/bdev_aio_rpc.o 00:02:54.898 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:54.898 LIB libspdk_bdev_split.a 00:02:54.898 SO libspdk_bdev_split.so.5.0 00:02:54.898 CC module/bdev/nvme/vbdev_opal.o 00:02:54.898 SYMLINK libspdk_bdev_split.so 00:02:54.898 CC module/bdev/raid/bdev_raid_rpc.o 00:02:54.898 CC module/bdev/raid/bdev_raid_sb.o 00:02:54.898 LIB libspdk_bdev_xnvme.a 00:02:54.898 SO libspdk_bdev_xnvme.so.2.0 00:02:55.172 CC module/bdev/ftl/bdev_ftl.o 00:02:55.172 SYMLINK libspdk_bdev_xnvme.so 00:02:55.172 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:55.172 LIB libspdk_bdev_zone_block.a 00:02:55.172 SO libspdk_bdev_zone_block.so.5.0 00:02:55.172 LIB libspdk_bdev_aio.a 00:02:55.172 SO libspdk_bdev_aio.so.5.0 00:02:55.172 SYMLINK libspdk_bdev_zone_block.so 00:02:55.172 CC module/bdev/raid/raid0.o 00:02:55.172 SYMLINK libspdk_bdev_aio.so 00:02:55.172 CC module/bdev/raid/raid1.o 00:02:55.172 CC module/bdev/raid/concat.o 00:02:55.172 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:55.172 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:55.172 CC module/bdev/iscsi/bdev_iscsi.o 00:02:55.172 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:55.172 LIB libspdk_bdev_ftl.a 00:02:55.172 SO libspdk_bdev_ftl.so.5.0 00:02:55.432 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:55.432 SYMLINK libspdk_bdev_ftl.so 00:02:55.432 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:55.432 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:55.432 LIB libspdk_bdev_raid.a 00:02:55.432 SO libspdk_bdev_raid.so.5.0 00:02:55.432 SYMLINK libspdk_bdev_raid.so 00:02:55.690 LIB libspdk_bdev_iscsi.a 00:02:55.690 SO libspdk_bdev_iscsi.so.5.0 00:02:55.690 SYMLINK libspdk_bdev_iscsi.so 00:02:55.690 LIB libspdk_bdev_virtio.a 00:02:55.690 SO libspdk_bdev_virtio.so.5.0 00:02:55.948 SYMLINK libspdk_bdev_virtio.so 00:02:56.513 LIB libspdk_bdev_nvme.a 00:02:56.513 SO libspdk_bdev_nvme.so.6.0 00:02:56.513 SYMLINK libspdk_bdev_nvme.so 00:02:56.771 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:56.771 CC module/event/subsystems/sock/sock.o 00:02:56.771 CC module/event/subsystems/vmd/vmd.o 00:02:56.771 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:56.771 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:56.771 CC module/event/subsystems/iobuf/iobuf.o 00:02:56.771 
CC module/event/subsystems/scheduler/scheduler.o 00:02:57.030 LIB libspdk_event_sock.a 00:02:57.030 LIB libspdk_event_vhost_blk.a 00:02:57.030 SO libspdk_event_sock.so.4.0 00:02:57.030 SO libspdk_event_vhost_blk.so.2.0 00:02:57.030 LIB libspdk_event_scheduler.a 00:02:57.030 LIB libspdk_event_iobuf.a 00:02:57.030 SO libspdk_event_scheduler.so.3.0 00:02:57.030 SO libspdk_event_iobuf.so.2.0 00:02:57.030 SYMLINK libspdk_event_sock.so 00:02:57.030 SYMLINK libspdk_event_vhost_blk.so 00:02:57.030 LIB libspdk_event_vmd.a 00:02:57.030 SO libspdk_event_vmd.so.5.0 00:02:57.030 SYMLINK libspdk_event_scheduler.so 00:02:57.030 SYMLINK libspdk_event_iobuf.so 00:02:57.030 SYMLINK libspdk_event_vmd.so 00:02:57.288 CC module/event/subsystems/accel/accel.o 00:02:57.288 LIB libspdk_event_accel.a 00:02:57.288 SO libspdk_event_accel.so.5.0 00:02:57.546 SYMLINK libspdk_event_accel.so 00:02:57.546 CC module/event/subsystems/bdev/bdev.o 00:02:57.804 LIB libspdk_event_bdev.a 00:02:57.804 SO libspdk_event_bdev.so.5.0 00:02:57.804 SYMLINK libspdk_event_bdev.so 00:02:57.804 CC module/event/subsystems/nbd/nbd.o 00:02:57.804 CC module/event/subsystems/scsi/scsi.o 00:02:57.804 CC module/event/subsystems/ublk/ublk.o 00:02:58.062 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:58.062 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:58.062 LIB libspdk_event_nbd.a 00:02:58.062 LIB libspdk_event_ublk.a 00:02:58.062 LIB libspdk_event_scsi.a 00:02:58.062 SO libspdk_event_nbd.so.5.0 00:02:58.062 SO libspdk_event_ublk.so.2.0 00:02:58.062 SO libspdk_event_scsi.so.5.0 00:02:58.062 SYMLINK libspdk_event_nbd.so 00:02:58.062 SYMLINK libspdk_event_ublk.so 00:02:58.062 SYMLINK libspdk_event_scsi.so 00:02:58.062 LIB libspdk_event_nvmf.a 00:02:58.062 SO libspdk_event_nvmf.so.5.0 00:02:58.320 SYMLINK libspdk_event_nvmf.so 00:02:58.320 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:58.320 CC module/event/subsystems/iscsi/iscsi.o 00:02:58.320 LIB libspdk_event_vhost_scsi.a 00:02:58.320 SO libspdk_event_vhost_scsi.so.2.0 00:02:58.320 LIB libspdk_event_iscsi.a 00:02:58.320 SYMLINK libspdk_event_vhost_scsi.so 00:02:58.320 SO libspdk_event_iscsi.so.5.0 00:02:58.577 SYMLINK libspdk_event_iscsi.so 00:02:58.577 SO libspdk.so.5.0 00:02:58.577 SYMLINK libspdk.so 00:02:58.577 CXX app/trace/trace.o 00:02:58.577 CC app/trace_record/trace_record.o 00:02:58.835 CC app/nvmf_tgt/nvmf_main.o 00:02:58.835 CC examples/ioat/perf/perf.o 00:02:58.835 CC examples/accel/perf/accel_perf.o 00:02:58.835 CC test/accel/dif/dif.o 00:02:58.835 CC test/bdev/bdevio/bdevio.o 00:02:58.835 CC examples/blob/hello_world/hello_blob.o 00:02:58.835 CC examples/bdev/hello_world/hello_bdev.o 00:02:58.835 CC test/app/bdev_svc/bdev_svc.o 00:02:58.835 LINK spdk_trace_record 00:02:58.835 LINK ioat_perf 00:02:58.835 LINK bdev_svc 00:02:58.835 LINK hello_blob 00:02:59.093 LINK nvmf_tgt 00:02:59.093 LINK hello_bdev 00:02:59.093 LINK spdk_trace 00:02:59.093 LINK dif 00:02:59.093 CC examples/ioat/verify/verify.o 00:02:59.093 LINK bdevio 00:02:59.093 CC test/blobfs/mkfs/mkfs.o 00:02:59.093 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:59.093 CC examples/blob/cli/blobcli.o 00:02:59.093 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:59.093 LINK accel_perf 00:02:59.351 TEST_HEADER include/spdk/accel.h 00:02:59.351 TEST_HEADER include/spdk/accel_module.h 00:02:59.351 TEST_HEADER include/spdk/assert.h 00:02:59.351 TEST_HEADER include/spdk/barrier.h 00:02:59.351 CC examples/bdev/bdevperf/bdevperf.o 00:02:59.351 CC app/iscsi_tgt/iscsi_tgt.o 00:02:59.351 TEST_HEADER include/spdk/base64.h 
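The TEST_HEADER lines that begin just above and continue below enumerate every public header under include/spdk. Judging by the matching "CXX test/cpp_headers/*.o" compiles further down, each header gets a stub C++ translation unit that must build on its own, which catches headers that are not self-contained or not C++-clean. For a single header the check amounts to something like the following (hypothetical commands, not quoted from the build system):

    # Generate and compile a standalone C++ TU for one public header.
    printf '#include "spdk/accel.h"\n' > test/cpp_headers/accel.cpp
    c++ -Iinclude -c test/cpp_headers/accel.cpp -o test/cpp_headers/accel.o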
00:02:59.351 TEST_HEADER include/spdk/bdev.h 00:02:59.351 TEST_HEADER include/spdk/bdev_module.h 00:02:59.351 TEST_HEADER include/spdk/bdev_zone.h 00:02:59.351 TEST_HEADER include/spdk/bit_array.h 00:02:59.351 TEST_HEADER include/spdk/bit_pool.h 00:02:59.351 TEST_HEADER include/spdk/blob_bdev.h 00:02:59.351 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:59.351 TEST_HEADER include/spdk/blobfs.h 00:02:59.351 TEST_HEADER include/spdk/blob.h 00:02:59.351 TEST_HEADER include/spdk/conf.h 00:02:59.351 TEST_HEADER include/spdk/config.h 00:02:59.351 TEST_HEADER include/spdk/cpuset.h 00:02:59.351 TEST_HEADER include/spdk/crc16.h 00:02:59.351 TEST_HEADER include/spdk/crc32.h 00:02:59.351 TEST_HEADER include/spdk/crc64.h 00:02:59.351 TEST_HEADER include/spdk/dif.h 00:02:59.351 LINK verify 00:02:59.351 TEST_HEADER include/spdk/dma.h 00:02:59.351 TEST_HEADER include/spdk/endian.h 00:02:59.351 TEST_HEADER include/spdk/env_dpdk.h 00:02:59.351 TEST_HEADER include/spdk/env.h 00:02:59.351 TEST_HEADER include/spdk/event.h 00:02:59.351 TEST_HEADER include/spdk/fd_group.h 00:02:59.351 TEST_HEADER include/spdk/fd.h 00:02:59.351 TEST_HEADER include/spdk/file.h 00:02:59.351 TEST_HEADER include/spdk/ftl.h 00:02:59.351 TEST_HEADER include/spdk/gpt_spec.h 00:02:59.351 TEST_HEADER include/spdk/hexlify.h 00:02:59.351 TEST_HEADER include/spdk/histogram_data.h 00:02:59.351 TEST_HEADER include/spdk/idxd.h 00:02:59.351 TEST_HEADER include/spdk/idxd_spec.h 00:02:59.351 TEST_HEADER include/spdk/init.h 00:02:59.351 TEST_HEADER include/spdk/ioat.h 00:02:59.351 TEST_HEADER include/spdk/ioat_spec.h 00:02:59.351 TEST_HEADER include/spdk/iscsi_spec.h 00:02:59.351 TEST_HEADER include/spdk/json.h 00:02:59.351 TEST_HEADER include/spdk/jsonrpc.h 00:02:59.351 TEST_HEADER include/spdk/likely.h 00:02:59.351 TEST_HEADER include/spdk/log.h 00:02:59.351 TEST_HEADER include/spdk/lvol.h 00:02:59.351 TEST_HEADER include/spdk/memory.h 00:02:59.351 LINK mkfs 00:02:59.351 TEST_HEADER include/spdk/mmio.h 00:02:59.351 TEST_HEADER include/spdk/nbd.h 00:02:59.351 TEST_HEADER include/spdk/notify.h 00:02:59.351 TEST_HEADER include/spdk/nvme.h 00:02:59.351 TEST_HEADER include/spdk/nvme_intel.h 00:02:59.351 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:59.351 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:59.351 TEST_HEADER include/spdk/nvme_spec.h 00:02:59.351 TEST_HEADER include/spdk/nvme_zns.h 00:02:59.351 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:59.351 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:59.351 TEST_HEADER include/spdk/nvmf.h 00:02:59.351 TEST_HEADER include/spdk/nvmf_spec.h 00:02:59.351 TEST_HEADER include/spdk/nvmf_transport.h 00:02:59.351 TEST_HEADER include/spdk/opal.h 00:02:59.351 TEST_HEADER include/spdk/opal_spec.h 00:02:59.351 TEST_HEADER include/spdk/pci_ids.h 00:02:59.351 TEST_HEADER include/spdk/pipe.h 00:02:59.351 TEST_HEADER include/spdk/queue.h 00:02:59.351 TEST_HEADER include/spdk/reduce.h 00:02:59.351 TEST_HEADER include/spdk/rpc.h 00:02:59.351 TEST_HEADER include/spdk/scheduler.h 00:02:59.351 TEST_HEADER include/spdk/scsi.h 00:02:59.351 TEST_HEADER include/spdk/scsi_spec.h 00:02:59.351 TEST_HEADER include/spdk/sock.h 00:02:59.351 TEST_HEADER include/spdk/stdinc.h 00:02:59.351 TEST_HEADER include/spdk/string.h 00:02:59.351 TEST_HEADER include/spdk/thread.h 00:02:59.351 TEST_HEADER include/spdk/trace.h 00:02:59.351 TEST_HEADER include/spdk/trace_parser.h 00:02:59.351 TEST_HEADER include/spdk/tree.h 00:02:59.351 TEST_HEADER include/spdk/ublk.h 00:02:59.351 TEST_HEADER include/spdk/util.h 00:02:59.351 TEST_HEADER 
include/spdk/uuid.h 00:02:59.351 TEST_HEADER include/spdk/version.h 00:02:59.351 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:59.351 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:59.351 TEST_HEADER include/spdk/vhost.h 00:02:59.351 TEST_HEADER include/spdk/vmd.h 00:02:59.351 TEST_HEADER include/spdk/xor.h 00:02:59.351 TEST_HEADER include/spdk/zipf.h 00:02:59.351 CXX test/cpp_headers/accel.o 00:02:59.351 CC test/dma/test_dma/test_dma.o 00:02:59.351 LINK iscsi_tgt 00:02:59.351 CC test/event/event_perf/event_perf.o 00:02:59.351 CC test/env/mem_callbacks/mem_callbacks.o 00:02:59.610 CC test/env/vtophys/vtophys.o 00:02:59.610 CXX test/cpp_headers/accel_module.o 00:02:59.610 LINK nvme_fuzz 00:02:59.610 LINK event_perf 00:02:59.610 LINK vtophys 00:02:59.610 LINK blobcli 00:02:59.610 CC app/spdk_tgt/spdk_tgt.o 00:02:59.610 CXX test/cpp_headers/assert.o 00:02:59.610 CC test/event/reactor/reactor.o 00:02:59.610 CC app/spdk_lspci/spdk_lspci.o 00:02:59.610 LINK test_dma 00:02:59.868 CC app/spdk_nvme_perf/perf.o 00:02:59.868 CXX test/cpp_headers/barrier.o 00:02:59.868 LINK spdk_tgt 00:02:59.868 LINK reactor 00:02:59.868 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:59.868 LINK spdk_lspci 00:02:59.868 CXX test/cpp_headers/base64.o 00:02:59.868 CXX test/cpp_headers/bdev.o 00:02:59.868 CXX test/cpp_headers/bdev_module.o 00:02:59.868 LINK env_dpdk_post_init 00:02:59.868 LINK mem_callbacks 00:03:00.126 CC test/event/reactor_perf/reactor_perf.o 00:03:00.126 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:00.126 LINK bdevperf 00:03:00.126 CC test/app/histogram_perf/histogram_perf.o 00:03:00.126 CXX test/cpp_headers/bdev_zone.o 00:03:00.126 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:00.126 CC test/app/jsoncat/jsoncat.o 00:03:00.126 LINK reactor_perf 00:03:00.126 CC test/env/memory/memory_ut.o 00:03:00.126 CC test/lvol/esnap/esnap.o 00:03:00.126 LINK histogram_perf 00:03:00.126 LINK jsoncat 00:03:00.384 CC examples/nvme/hello_world/hello_world.o 00:03:00.384 CXX test/cpp_headers/bit_array.o 00:03:00.384 CC test/event/app_repeat/app_repeat.o 00:03:00.384 CC test/app/stub/stub.o 00:03:00.384 LINK spdk_nvme_perf 00:03:00.384 CXX test/cpp_headers/bit_pool.o 00:03:00.384 CC examples/nvme/reconnect/reconnect.o 00:03:00.384 LINK app_repeat 00:03:00.384 LINK hello_world 00:03:00.642 LINK stub 00:03:00.642 LINK vhost_fuzz 00:03:00.642 CXX test/cpp_headers/blob_bdev.o 00:03:00.642 CC app/spdk_nvme_identify/identify.o 00:03:00.642 LINK iscsi_fuzz 00:03:00.642 CC app/spdk_nvme_discover/discovery_aer.o 00:03:00.642 CXX test/cpp_headers/blobfs_bdev.o 00:03:00.643 CC test/event/scheduler/scheduler.o 00:03:00.643 LINK reconnect 00:03:00.643 CC test/env/pci/pci_ut.o 00:03:00.901 CXX test/cpp_headers/blobfs.o 00:03:00.901 LINK memory_ut 00:03:00.901 CC examples/sock/hello_world/hello_sock.o 00:03:00.901 LINK scheduler 00:03:00.901 LINK spdk_nvme_discover 00:03:00.901 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:00.901 CC examples/nvme/arbitration/arbitration.o 00:03:00.901 CXX test/cpp_headers/blob.o 00:03:00.901 CC examples/nvme/hotplug/hotplug.o 00:03:00.901 LINK pci_ut 00:03:00.901 CXX test/cpp_headers/conf.o 00:03:00.901 CC app/spdk_top/spdk_top.o 00:03:01.158 LINK hello_sock 00:03:01.158 CC test/nvme/aer/aer.o 00:03:01.158 LINK hotplug 00:03:01.158 CXX test/cpp_headers/config.o 00:03:01.158 LINK arbitration 00:03:01.158 CXX test/cpp_headers/cpuset.o 00:03:01.158 LINK nvme_manage 00:03:01.417 CC test/rpc_client/rpc_client_test.o 00:03:01.417 CXX test/cpp_headers/crc16.o 00:03:01.417 CC 
test/thread/poller_perf/poller_perf.o 00:03:01.417 LINK aer 00:03:01.417 CC app/vhost/vhost.o 00:03:01.417 CC examples/vmd/lsvmd/lsvmd.o 00:03:01.417 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:01.417 LINK spdk_nvme_identify 00:03:01.417 CXX test/cpp_headers/crc32.o 00:03:01.417 LINK rpc_client_test 00:03:01.417 LINK poller_perf 00:03:01.417 LINK vhost 00:03:01.417 LINK lsvmd 00:03:01.685 CXX test/cpp_headers/crc64.o 00:03:01.685 LINK cmb_copy 00:03:01.685 CC test/nvme/reset/reset.o 00:03:01.685 CC examples/util/zipf/zipf.o 00:03:01.685 CXX test/cpp_headers/dif.o 00:03:01.685 CC examples/vmd/led/led.o 00:03:01.685 CC examples/nvmf/nvmf/nvmf.o 00:03:01.685 CC examples/thread/thread/thread_ex.o 00:03:01.685 CC examples/nvme/abort/abort.o 00:03:01.685 CC examples/idxd/perf/perf.o 00:03:01.685 LINK led 00:03:01.952 LINK zipf 00:03:01.952 CXX test/cpp_headers/dma.o 00:03:01.952 LINK reset 00:03:01.952 CXX test/cpp_headers/endian.o 00:03:01.952 CXX test/cpp_headers/env_dpdk.o 00:03:01.952 LINK nvmf 00:03:01.952 LINK spdk_top 00:03:01.952 LINK thread 00:03:01.952 CC test/nvme/sgl/sgl.o 00:03:01.952 CXX test/cpp_headers/env.o 00:03:01.952 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:02.210 CXX test/cpp_headers/event.o 00:03:02.210 CXX test/cpp_headers/fd_group.o 00:03:02.210 CXX test/cpp_headers/fd.o 00:03:02.210 LINK idxd_perf 00:03:02.210 CXX test/cpp_headers/file.o 00:03:02.210 LINK abort 00:03:02.210 CC app/spdk_dd/spdk_dd.o 00:03:02.210 LINK interrupt_tgt 00:03:02.211 CXX test/cpp_headers/ftl.o 00:03:02.211 CXX test/cpp_headers/gpt_spec.o 00:03:02.211 LINK sgl 00:03:02.211 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:02.211 CXX test/cpp_headers/hexlify.o 00:03:02.211 CC test/nvme/e2edp/nvme_dp.o 00:03:02.470 CXX test/cpp_headers/histogram_data.o 00:03:02.470 CC app/fio/nvme/fio_plugin.o 00:03:02.470 CXX test/cpp_headers/idxd.o 00:03:02.470 LINK pmr_persistence 00:03:02.470 CXX test/cpp_headers/idxd_spec.o 00:03:02.470 CC test/nvme/overhead/overhead.o 00:03:02.470 CC app/fio/bdev/fio_plugin.o 00:03:02.470 LINK spdk_dd 00:03:02.470 CXX test/cpp_headers/init.o 00:03:02.470 CC test/nvme/err_injection/err_injection.o 00:03:02.470 LINK nvme_dp 00:03:02.470 CC test/nvme/startup/startup.o 00:03:02.470 CXX test/cpp_headers/ioat.o 00:03:02.731 LINK startup 00:03:02.731 CXX test/cpp_headers/ioat_spec.o 00:03:02.731 CC test/nvme/reserve/reserve.o 00:03:02.731 CXX test/cpp_headers/iscsi_spec.o 00:03:02.731 LINK err_injection 00:03:02.731 LINK overhead 00:03:02.731 CC test/nvme/simple_copy/simple_copy.o 00:03:02.731 CC test/nvme/connect_stress/connect_stress.o 00:03:02.731 CXX test/cpp_headers/json.o 00:03:02.990 LINK spdk_nvme 00:03:02.990 CC test/nvme/boot_partition/boot_partition.o 00:03:02.990 LINK reserve 00:03:02.990 CC test/nvme/compliance/nvme_compliance.o 00:03:02.990 CC test/nvme/fused_ordering/fused_ordering.o 00:03:02.990 LINK simple_copy 00:03:02.990 CXX test/cpp_headers/jsonrpc.o 00:03:02.990 LINK spdk_bdev 00:03:02.990 LINK connect_stress 00:03:02.990 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:02.990 LINK boot_partition 00:03:02.990 CXX test/cpp_headers/likely.o 00:03:02.990 CC test/nvme/fdp/fdp.o 00:03:02.990 CXX test/cpp_headers/log.o 00:03:02.990 CC test/nvme/cuse/cuse.o 00:03:02.990 LINK fused_ordering 00:03:02.990 CXX test/cpp_headers/lvol.o 00:03:03.249 CXX test/cpp_headers/memory.o 00:03:03.249 LINK doorbell_aers 00:03:03.249 LINK nvme_compliance 00:03:03.249 CXX test/cpp_headers/mmio.o 00:03:03.249 CXX test/cpp_headers/nbd.o 00:03:03.249 CXX 
test/cpp_headers/notify.o 00:03:03.249 CXX test/cpp_headers/nvme.o 00:03:03.249 CXX test/cpp_headers/nvme_intel.o 00:03:03.249 CXX test/cpp_headers/nvme_ocssd.o 00:03:03.249 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:03.249 LINK fdp 00:03:03.249 CXX test/cpp_headers/nvme_spec.o 00:03:03.249 CXX test/cpp_headers/nvme_zns.o 00:03:03.249 CXX test/cpp_headers/nvmf_cmd.o 00:03:03.249 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:03.249 CXX test/cpp_headers/nvmf.o 00:03:03.507 CXX test/cpp_headers/nvmf_spec.o 00:03:03.507 CXX test/cpp_headers/nvmf_transport.o 00:03:03.507 CXX test/cpp_headers/opal.o 00:03:03.507 CXX test/cpp_headers/opal_spec.o 00:03:03.507 CXX test/cpp_headers/pci_ids.o 00:03:03.507 CXX test/cpp_headers/pipe.o 00:03:03.507 CXX test/cpp_headers/queue.o 00:03:03.507 CXX test/cpp_headers/reduce.o 00:03:03.507 CXX test/cpp_headers/rpc.o 00:03:03.507 CXX test/cpp_headers/scheduler.o 00:03:03.507 CXX test/cpp_headers/scsi.o 00:03:03.507 CXX test/cpp_headers/scsi_spec.o 00:03:03.507 CXX test/cpp_headers/sock.o 00:03:03.507 CXX test/cpp_headers/stdinc.o 00:03:03.507 CXX test/cpp_headers/string.o 00:03:03.507 CXX test/cpp_headers/thread.o 00:03:03.507 CXX test/cpp_headers/trace.o 00:03:03.765 CXX test/cpp_headers/trace_parser.o 00:03:03.765 CXX test/cpp_headers/tree.o 00:03:03.765 CXX test/cpp_headers/ublk.o 00:03:03.765 CXX test/cpp_headers/util.o 00:03:03.765 CXX test/cpp_headers/uuid.o 00:03:03.765 CXX test/cpp_headers/version.o 00:03:03.765 CXX test/cpp_headers/vfio_user_pci.o 00:03:03.765 CXX test/cpp_headers/vfio_user_spec.o 00:03:03.765 CXX test/cpp_headers/vhost.o 00:03:03.765 CXX test/cpp_headers/vmd.o 00:03:03.765 CXX test/cpp_headers/xor.o 00:03:03.765 CXX test/cpp_headers/zipf.o 00:03:04.023 LINK cuse 00:03:05.395 LINK esnap 00:03:05.395 ************************************ 00:03:05.395 END TEST make 00:03:05.395 ************************************ 00:03:05.396 00:03:05.396 real 0m48.774s 00:03:05.396 user 4m50.458s 00:03:05.396 sys 1m5.017s 00:03:05.396 23:36:35 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:05.396 23:36:35 -- common/autotest_common.sh@10 -- $ set +x 00:03:05.396 23:36:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:05.396 23:36:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:05.396 23:36:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:05.654 23:36:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:05.654 23:36:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:05.654 23:36:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:05.654 23:36:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:05.654 23:36:36 -- scripts/common.sh@335 -- # IFS=.-: 00:03:05.654 23:36:36 -- scripts/common.sh@335 -- # read -ra ver1 00:03:05.654 23:36:36 -- scripts/common.sh@336 -- # IFS=.-: 00:03:05.654 23:36:36 -- scripts/common.sh@336 -- # read -ra ver2 00:03:05.654 23:36:36 -- scripts/common.sh@337 -- # local 'op=<' 00:03:05.654 23:36:36 -- scripts/common.sh@339 -- # ver1_l=2 00:03:05.654 23:36:36 -- scripts/common.sh@340 -- # ver2_l=1 00:03:05.654 23:36:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:05.654 23:36:36 -- scripts/common.sh@343 -- # case "$op" in 00:03:05.654 23:36:36 -- scripts/common.sh@344 -- # : 1 00:03:05.654 23:36:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:05.654 23:36:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:05.654 23:36:36 -- scripts/common.sh@364 -- # decimal 1 00:03:05.654 23:36:36 -- scripts/common.sh@352 -- # local d=1 00:03:05.654 23:36:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:05.654 23:36:36 -- scripts/common.sh@354 -- # echo 1 00:03:05.654 23:36:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:05.654 23:36:36 -- scripts/common.sh@365 -- # decimal 2 00:03:05.654 23:36:36 -- scripts/common.sh@352 -- # local d=2 00:03:05.654 23:36:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:05.654 23:36:36 -- scripts/common.sh@354 -- # echo 2 00:03:05.654 23:36:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:05.654 23:36:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:05.654 23:36:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:05.654 23:36:36 -- scripts/common.sh@367 -- # return 0 00:03:05.654 23:36:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:05.654 23:36:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:05.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:05.654 --rc genhtml_branch_coverage=1 00:03:05.654 --rc genhtml_function_coverage=1 00:03:05.654 --rc genhtml_legend=1 00:03:05.654 --rc geninfo_all_blocks=1 00:03:05.654 --rc geninfo_unexecuted_blocks=1 00:03:05.654 00:03:05.654 ' 00:03:05.654 23:36:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:05.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:05.654 --rc genhtml_branch_coverage=1 00:03:05.654 --rc genhtml_function_coverage=1 00:03:05.654 --rc genhtml_legend=1 00:03:05.654 --rc geninfo_all_blocks=1 00:03:05.654 --rc geninfo_unexecuted_blocks=1 00:03:05.654 00:03:05.654 ' 00:03:05.654 23:36:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:05.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:05.654 --rc genhtml_branch_coverage=1 00:03:05.654 --rc genhtml_function_coverage=1 00:03:05.654 --rc genhtml_legend=1 00:03:05.654 --rc geninfo_all_blocks=1 00:03:05.654 --rc geninfo_unexecuted_blocks=1 00:03:05.654 00:03:05.654 ' 00:03:05.654 23:36:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:05.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:05.654 --rc genhtml_branch_coverage=1 00:03:05.654 --rc genhtml_function_coverage=1 00:03:05.654 --rc genhtml_legend=1 00:03:05.654 --rc geninfo_all_blocks=1 00:03:05.654 --rc geninfo_unexecuted_blocks=1 00:03:05.654 00:03:05.654 ' 00:03:05.654 23:36:36 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:05.654 23:36:36 -- nvmf/common.sh@7 -- # uname -s 00:03:05.654 23:36:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:05.654 23:36:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:05.654 23:36:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:05.654 23:36:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:05.654 23:36:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:05.655 23:36:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:05.655 23:36:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:05.655 23:36:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:05.655 23:36:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:05.655 23:36:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:05.655 23:36:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf00b051-453e-4584-8b01-f6b84500e948 00:03:05.655 
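The xtrace just above is scripts/common.sh deciding which lcov flags to use: "lt 1.15 2" asks whether the installed lcov (1.15) predates 2.x, and since it does, the legacy --rc lcov_branch_coverage / --rc lcov_function_coverage options are exported. A minimal bash sketch of that comparison, reconstructed from the trace; the real helper also routes each field through a decimal() normalizer, which the ":-0" defaults below merely approximate:

    # Hedged reconstruction of cmp_versions from the xtrace above; this is a
    # simplification of the traced scripts/common.sh logic, not its exact body.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v a b
        IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a == b )) && continue
            if (( a > b )); then [[ $op == '>' ]]; else [[ $op == '<' ]]; fi
            return   # propagate the status of the [[ ]] test above
        done
        [[ $op == '==' ]]   # every field matched
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo "pre-2.x lcov: keep the legacy --rc coverage flags"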
23:36:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=bf00b051-453e-4584-8b01-f6b84500e948 00:03:05.655 23:36:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:05.655 23:36:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:05.655 23:36:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:05.655 23:36:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:05.655 23:36:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:05.655 23:36:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:05.655 23:36:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:05.655 23:36:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.655 23:36:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.655 23:36:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.655 23:36:36 -- paths/export.sh@5 -- # export PATH 00:03:05.655 23:36:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.655 23:36:36 -- nvmf/common.sh@46 -- # : 0 00:03:05.655 23:36:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:05.655 23:36:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:05.655 23:36:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:05.655 23:36:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:05.655 23:36:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:05.655 23:36:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:05.655 23:36:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:05.655 23:36:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:05.655 23:36:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:05.655 23:36:36 -- spdk/autotest.sh@32 -- # uname -s 00:03:05.655 23:36:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:05.655 23:36:36 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:05.655 23:36:36 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:05.655 23:36:36 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:05.655 23:36:36 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:05.655 23:36:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:05.655 23:36:36 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:05.655 23:36:36 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:05.655 23:36:36 -- spdk/autotest.sh@48 
-- # udevadm_pid=48156 00:03:05.655 23:36:36 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:05.655 23:36:36 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:05.655 23:36:36 -- spdk/autotest.sh@54 -- # echo 48160 00:03:05.655 23:36:36 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:05.655 23:36:36 -- spdk/autotest.sh@56 -- # echo 48161 00:03:05.655 23:36:36 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:05.655 23:36:36 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:05.655 23:36:36 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:05.655 23:36:36 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:05.655 23:36:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:05.655 23:36:36 -- common/autotest_common.sh@10 -- # set +x 00:03:05.655 23:36:36 -- spdk/autotest.sh@70 -- # create_test_list 00:03:05.655 23:36:36 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:05.655 23:36:36 -- common/autotest_common.sh@10 -- # set +x 00:03:05.655 23:36:36 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:05.655 23:36:36 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:05.655 23:36:36 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:05.655 23:36:36 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:05.655 23:36:36 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:05.655 23:36:36 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:05.655 23:36:36 -- common/autotest_common.sh@1450 -- # uname 00:03:05.655 23:36:36 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:03:05.655 23:36:36 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:05.655 23:36:36 -- common/autotest_common.sh@1470 -- # uname 00:03:05.655 23:36:36 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:03:05.655 23:36:36 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:03:05.655 23:36:36 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:05.655 lcov: LCOV version 1.15 00:03:05.655 23:36:36 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:13.767 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:13.767 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:13.767 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:13.767 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:13.767 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:13.767 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:35.729 23:37:03 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:03:35.729 23:37:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:35.729 23:37:03 -- common/autotest_common.sh@10 -- # set +x 00:03:35.729 23:37:03 -- spdk/autotest.sh@89 -- # rm -f 00:03:35.729 23:37:03 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:35.729 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:35.729 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:03:35.729 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:03:35.729 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:03:35.729 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:03:35.729 23:37:05 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:03:35.729 23:37:05 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:35.729 23:37:05 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:35.729 23:37:05 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:35.729 23:37:05 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:35.729 23:37:05 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:35.729 23:37:05 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:35.729 23:37:05 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:35.729 23:37:05 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:35.729 23:37:05 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:35.729 23:37:05 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:35.729 23:37:05 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:35.729 23:37:05 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:35.729 23:37:05 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:35.729 23:37:05 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:35.729 23:37:05 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:03:35.729 23:37:05 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:03:35.729 23:37:05 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:35.729 23:37:05 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:35.729 23:37:05 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:35.729 23:37:05 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n2 00:03:35.729 23:37:05 -- common/autotest_common.sh@1657 -- # local device=nvme2n2 00:03:35.729 23:37:05 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:35.729 23:37:05 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:35.729 23:37:05 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:35.729 23:37:05 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n3 00:03:35.729 23:37:05 -- common/autotest_common.sh@1657 -- # local device=nvme2n3 00:03:35.729 23:37:05 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:35.729 23:37:05 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:35.729 23:37:05 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:35.729 23:37:05 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:03:35.729 23:37:05 -- 
common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:03:35.729 23:37:05 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:35.729 23:37:05 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:35.729 23:37:05 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:35.729 23:37:05 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:03:35.729 23:37:05 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:03:35.729 23:37:05 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:35.729 23:37:05 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:35.729 23:37:05 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:03:35.729 23:37:05 -- spdk/autotest.sh@108 -- # grep -v p 00:03:35.729 23:37:05 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme2n2 /dev/nvme2n3 /dev/nvme3n1 00:03:35.729 23:37:05 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:35.729 23:37:05 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:35.729 23:37:05 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:03:35.729 23:37:05 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:35.729 23:37:05 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:35.729 No valid GPT data, bailing 00:03:35.729 23:37:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:35.729 23:37:05 -- scripts/common.sh@393 -- # pt= 00:03:35.729 23:37:05 -- scripts/common.sh@394 -- # return 1 00:03:35.729 23:37:05 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:35.729 1+0 records in 00:03:35.729 1+0 records out 00:03:35.729 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0344606 s, 30.4 MB/s 00:03:35.729 23:37:05 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:35.729 23:37:05 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:35.729 23:37:05 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:03:35.729 23:37:05 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:03:35.729 23:37:05 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:35.729 No valid GPT data, bailing 00:03:35.729 23:37:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:35.729 23:37:05 -- scripts/common.sh@393 -- # pt= 00:03:35.729 23:37:05 -- scripts/common.sh@394 -- # return 1 00:03:35.729 23:37:05 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:35.729 1+0 records in 00:03:35.729 1+0 records out 00:03:35.729 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00666541 s, 157 MB/s 00:03:35.729 23:37:05 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:35.729 23:37:05 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:35.729 23:37:05 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n1 00:03:35.729 23:37:05 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:03:35.729 23:37:05 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:35.729 No valid GPT data, bailing 00:03:35.729 23:37:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:35.729 23:37:05 -- scripts/common.sh@393 -- # pt= 00:03:35.729 23:37:05 -- scripts/common.sh@394 -- # return 1 00:03:35.729 23:37:05 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:35.729 1+0 
records in 00:03:35.729 1+0 records out 00:03:35.729 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00563028 s, 186 MB/s 00:03:35.729 23:37:05 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:35.729 23:37:05 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:35.730 23:37:05 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n2 00:03:35.730 23:37:05 -- scripts/common.sh@380 -- # local block=/dev/nvme2n2 pt 00:03:35.730 23:37:05 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:35.730 No valid GPT data, bailing 00:03:35.730 23:37:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:35.730 23:37:05 -- scripts/common.sh@393 -- # pt= 00:03:35.730 23:37:05 -- scripts/common.sh@394 -- # return 1 00:03:35.730 23:37:05 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:35.730 1+0 records in 00:03:35.730 1+0 records out 00:03:35.730 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00626594 s, 167 MB/s 00:03:35.730 23:37:05 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:35.730 23:37:05 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:35.730 23:37:05 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n3 00:03:35.730 23:37:05 -- scripts/common.sh@380 -- # local block=/dev/nvme2n3 pt 00:03:35.730 23:37:05 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:35.730 No valid GPT data, bailing 00:03:35.730 23:37:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:35.730 23:37:05 -- scripts/common.sh@393 -- # pt= 00:03:35.730 23:37:05 -- scripts/common.sh@394 -- # return 1 00:03:35.730 23:37:05 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:35.730 1+0 records in 00:03:35.730 1+0 records out 00:03:35.730 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0046316 s, 226 MB/s 00:03:35.730 23:37:05 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:35.730 23:37:05 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:35.730 23:37:05 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme3n1 00:03:35.730 23:37:05 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:03:35.730 23:37:05 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:35.730 No valid GPT data, bailing 00:03:35.730 23:37:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:35.730 23:37:05 -- scripts/common.sh@393 -- # pt= 00:03:35.730 23:37:05 -- scripts/common.sh@394 -- # return 1 00:03:35.730 23:37:05 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:35.730 1+0 records in 00:03:35.730 1+0 records out 00:03:35.730 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00571388 s, 184 MB/s 00:03:35.730 23:37:05 -- spdk/autotest.sh@116 -- # sync 00:03:35.730 23:37:05 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:35.730 23:37:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:35.730 23:37:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:37.116 23:37:07 -- spdk/autotest.sh@122 -- # uname -s 00:03:37.117 23:37:07 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:03:37.117 23:37:07 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:37.117 23:37:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:37.117 23:37:07 -- 
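The pre_cleanup pass traced above does two things per NVMe namespace: skip anything get_zoned_devs flagged as zoned (every queue/zoned attribute here reads "none", so nothing is skipped), then probe for a partition table and zero the first MiB of any label-free disk so stale metadata cannot leak into the tests. A condensed sketch, with helper names taken from the trace; block_in_use is reduced to the blkid probe the trace shows, whereas the real helper also consults scripts/spdk-gpt.py:

    # Sketch of the autotest.sh wipe loop traced above (a reconstruction, not
    # the script verbatim).
    is_block_zoned() {   # zoned when queue/zoned reports anything but "none"
        [[ -e /sys/block/$1/queue/zoned && $(</sys/block/$1/queue/zoned) != none ]]
    }
    block_in_use() {     # reduced: returned 1 (free) for all six namespaces above
        [[ -n $(blkid -s PTTYPE -o value "$1") ]]
    }
    for dev in $(ls /dev/nvme*n* | grep -v p || true); do   # grep -v p skips partitions
        is_block_zoned "${dev##*/}" && continue             # never dd a zoned namespace
        if ! block_in_use "$dev"; then
            dd if=/dev/zero of="$dev" bs=1M count=1         # the 1 MiB wipes logged above
        fi
    done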
common/autotest_common.sh@1093 -- # xtrace_disable 00:03:37.117 23:37:07 -- common/autotest_common.sh@10 -- # set +x 00:03:37.117 ************************************ 00:03:37.117 START TEST setup.sh 00:03:37.117 ************************************ 00:03:37.117 23:37:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:37.117 * Looking for test storage... 00:03:37.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:37.117 23:37:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:37.117 23:37:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:37.117 23:37:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:37.117 23:37:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:37.117 23:37:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:37.117 23:37:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:37.117 23:37:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:37.117 23:37:07 -- scripts/common.sh@335 -- # IFS=.-: 00:03:37.117 23:37:07 -- scripts/common.sh@335 -- # read -ra ver1 00:03:37.117 23:37:07 -- scripts/common.sh@336 -- # IFS=.-: 00:03:37.117 23:37:07 -- scripts/common.sh@336 -- # read -ra ver2 00:03:37.117 23:37:07 -- scripts/common.sh@337 -- # local 'op=<' 00:03:37.117 23:37:07 -- scripts/common.sh@339 -- # ver1_l=2 00:03:37.117 23:37:07 -- scripts/common.sh@340 -- # ver2_l=1 00:03:37.117 23:37:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:37.117 23:37:07 -- scripts/common.sh@343 -- # case "$op" in 00:03:37.117 23:37:07 -- scripts/common.sh@344 -- # : 1 00:03:37.117 23:37:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:37.117 23:37:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:37.117 23:37:07 -- scripts/common.sh@364 -- # decimal 1 00:03:37.117 23:37:07 -- scripts/common.sh@352 -- # local d=1 00:03:37.117 23:37:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:37.117 23:37:07 -- scripts/common.sh@354 -- # echo 1 00:03:37.117 23:37:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:37.117 23:37:07 -- scripts/common.sh@365 -- # decimal 2 00:03:37.117 23:37:07 -- scripts/common.sh@352 -- # local d=2 00:03:37.117 23:37:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:37.117 23:37:07 -- scripts/common.sh@354 -- # echo 2 00:03:37.117 23:37:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:37.117 23:37:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:37.117 23:37:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:37.117 23:37:07 -- scripts/common.sh@367 -- # return 0 00:03:37.117 23:37:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:37.117 23:37:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:37.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.117 --rc genhtml_branch_coverage=1 00:03:37.117 --rc genhtml_function_coverage=1 00:03:37.117 --rc genhtml_legend=1 00:03:37.117 --rc geninfo_all_blocks=1 00:03:37.117 --rc geninfo_unexecuted_blocks=1 00:03:37.117 00:03:37.117 ' 00:03:37.117 23:37:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:37.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.117 --rc genhtml_branch_coverage=1 00:03:37.117 --rc genhtml_function_coverage=1 00:03:37.117 --rc genhtml_legend=1 00:03:37.117 --rc geninfo_all_blocks=1 00:03:37.117 --rc geninfo_unexecuted_blocks=1 00:03:37.117 00:03:37.117 ' 00:03:37.117 23:37:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:37.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.117 --rc genhtml_branch_coverage=1 00:03:37.117 --rc genhtml_function_coverage=1 00:03:37.117 --rc genhtml_legend=1 00:03:37.117 --rc geninfo_all_blocks=1 00:03:37.117 --rc geninfo_unexecuted_blocks=1 00:03:37.117 00:03:37.117 ' 00:03:37.117 23:37:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:37.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.117 --rc genhtml_branch_coverage=1 00:03:37.117 --rc genhtml_function_coverage=1 00:03:37.117 --rc genhtml_legend=1 00:03:37.117 --rc geninfo_all_blocks=1 00:03:37.117 --rc geninfo_unexecuted_blocks=1 00:03:37.117 00:03:37.117 ' 00:03:37.117 23:37:07 -- setup/test-setup.sh@10 -- # uname -s 00:03:37.117 23:37:07 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:37.117 23:37:07 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:37.117 23:37:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:37.117 23:37:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:37.117 23:37:07 -- common/autotest_common.sh@10 -- # set +x 00:03:37.117 ************************************ 00:03:37.117 START TEST acl 00:03:37.117 ************************************ 00:03:37.117 23:37:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:37.117 * Looking for test storage... 
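Every suite in this log runs through the same wrapper, visible here as the argument-count check ('[' 2 -le 1 ']'), the START banner, and the real/user/sys totals printed before each END banner. A hedged sketch of that pattern; the actual run_test in common/autotest_common.sh does more bookkeeping, so treat this as a reconstruction of the visible behavior only:

    # Approximation of the run_test wrapper whose banners frame every suite here.
    run_test() {
        [ "$#" -le 1 ] && return 1   # needs a name plus a command, as traced
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                    # produces the real/user/sys lines in the log
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh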
00:03:37.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:37.117 23:37:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:37.117 23:37:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:37.117 23:37:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:37.117 23:37:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:37.117 23:37:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:37.117 23:37:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:37.117 23:37:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:37.117 23:37:07 -- scripts/common.sh@335 -- # IFS=.-: 00:03:37.117 23:37:07 -- scripts/common.sh@335 -- # read -ra ver1 00:03:37.117 23:37:07 -- scripts/common.sh@336 -- # IFS=.-: 00:03:37.117 23:37:07 -- scripts/common.sh@336 -- # read -ra ver2 00:03:37.117 23:37:07 -- scripts/common.sh@337 -- # local 'op=<' 00:03:37.117 23:37:07 -- scripts/common.sh@339 -- # ver1_l=2 00:03:37.117 23:37:07 -- scripts/common.sh@340 -- # ver2_l=1 00:03:37.117 23:37:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:37.117 23:37:07 -- scripts/common.sh@343 -- # case "$op" in 00:03:37.117 23:37:07 -- scripts/common.sh@344 -- # : 1 00:03:37.117 23:37:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:37.117 23:37:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:37.117 23:37:07 -- scripts/common.sh@364 -- # decimal 1 00:03:37.117 23:37:07 -- scripts/common.sh@352 -- # local d=1 00:03:37.117 23:37:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:37.117 23:37:07 -- scripts/common.sh@354 -- # echo 1 00:03:37.117 23:37:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:37.117 23:37:07 -- scripts/common.sh@365 -- # decimal 2 00:03:37.117 23:37:07 -- scripts/common.sh@352 -- # local d=2 00:03:37.117 23:37:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:37.117 23:37:07 -- scripts/common.sh@354 -- # echo 2 00:03:37.117 23:37:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:37.117 23:37:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:37.117 23:37:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:37.117 23:37:07 -- scripts/common.sh@367 -- # return 0 00:03:37.117 23:37:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:37.117 23:37:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:37.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.117 --rc genhtml_branch_coverage=1 00:03:37.117 --rc genhtml_function_coverage=1 00:03:37.117 --rc genhtml_legend=1 00:03:37.117 --rc geninfo_all_blocks=1 00:03:37.117 --rc geninfo_unexecuted_blocks=1 00:03:37.117 00:03:37.117 ' 00:03:37.117 23:37:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:37.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.117 --rc genhtml_branch_coverage=1 00:03:37.117 --rc genhtml_function_coverage=1 00:03:37.117 --rc genhtml_legend=1 00:03:37.117 --rc geninfo_all_blocks=1 00:03:37.117 --rc geninfo_unexecuted_blocks=1 00:03:37.117 00:03:37.117 ' 00:03:37.117 23:37:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:37.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.117 --rc genhtml_branch_coverage=1 00:03:37.117 --rc genhtml_function_coverage=1 00:03:37.117 --rc genhtml_legend=1 00:03:37.117 --rc geninfo_all_blocks=1 00:03:37.117 --rc geninfo_unexecuted_blocks=1 00:03:37.117 00:03:37.117 ' 00:03:37.117 23:37:07 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:37.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.117 --rc genhtml_branch_coverage=1 00:03:37.117 --rc genhtml_function_coverage=1 00:03:37.117 --rc genhtml_legend=1 00:03:37.117 --rc geninfo_all_blocks=1 00:03:37.117 --rc geninfo_unexecuted_blocks=1 00:03:37.117 00:03:37.117 ' 00:03:37.117 23:37:07 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:37.117 23:37:07 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:37.117 23:37:07 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:37.117 23:37:07 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:37.117 23:37:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:37.117 23:37:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:37.117 23:37:07 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:37.117 23:37:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:37.117 23:37:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:37.117 23:37:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:37.117 23:37:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:37.117 23:37:07 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:37.117 23:37:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:37.117 23:37:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:37.117 23:37:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:37.117 23:37:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:03:37.117 23:37:07 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:03:37.117 23:37:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:37.117 23:37:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:37.117 23:37:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:37.117 23:37:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n2 00:03:37.117 23:37:07 -- common/autotest_common.sh@1657 -- # local device=nvme2n2 00:03:37.117 23:37:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:37.117 23:37:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:37.117 23:37:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:37.117 23:37:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n3 00:03:37.117 23:37:07 -- common/autotest_common.sh@1657 -- # local device=nvme2n3 00:03:37.117 23:37:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:37.117 23:37:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:37.117 23:37:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:37.117 23:37:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:03:37.117 23:37:07 -- common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:03:37.117 23:37:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:37.117 23:37:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:37.117 23:37:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:37.117 23:37:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:03:37.117 23:37:07 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:03:37.117 
23:37:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:37.117 23:37:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:37.117 23:37:07 -- setup/acl.sh@12 -- # devs=() 00:03:37.117 23:37:07 -- setup/acl.sh@12 -- # declare -a devs 00:03:37.117 23:37:07 -- setup/acl.sh@13 -- # drivers=() 00:03:37.117 23:37:07 -- setup/acl.sh@13 -- # declare -A drivers 00:03:37.117 23:37:07 -- setup/acl.sh@51 -- # setup reset 00:03:37.117 23:37:07 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:37.117 23:37:07 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:38.504 23:37:08 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:38.504 23:37:08 -- setup/acl.sh@16 -- # local dev driver 00:03:38.504 23:37:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:38.504 23:37:08 -- setup/acl.sh@15 -- # setup output status 00:03:38.504 23:37:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.504 23:37:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:38.504 Hugepages 00:03:38.504 node hugesize free / total 00:03:38.504 23:37:09 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:38.504 23:37:09 -- setup/acl.sh@19 -- # continue 00:03:38.504 23:37:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:38.504 00:03:38.504 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:38.504 23:37:09 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:38.504 23:37:09 -- setup/acl.sh@19 -- # continue 00:03:38.504 23:37:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:38.504 23:37:09 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:38.504 23:37:09 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:38.504 23:37:09 -- setup/acl.sh@20 -- # continue 00:03:38.504 23:37:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:38.504 23:37:09 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:03:38.504 23:37:09 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:38.504 23:37:09 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:38.504 23:37:09 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:38.504 23:37:09 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:38.504 23:37:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:38.766 23:37:09 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:03:38.766 23:37:09 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:38.766 23:37:09 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:38.766 23:37:09 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:38.766 23:37:09 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:38.766 23:37:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:38.766 23:37:09 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:03:38.766 23:37:09 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:38.766 23:37:09 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:38.766 23:37:09 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:38.766 23:37:09 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:38.766 23:37:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:38.766 23:37:09 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:03:38.766 23:37:09 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:38.766 23:37:09 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:03:38.766 23:37:09 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:38.766 23:37:09 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 
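The scan above is collect_setup_devs parsing the setup.sh status table: hugepage rows fail the BDF glob, the virtio-pci controller at 0000:00:03.0 fails the driver check, and the four nvme controllers (00:06.0 through 00:09.0) are collected because none of them matches the (empty) PCI_BLOCKED list. A sketch using the exact read pattern and tests the trace shows; feeding it directly from setup.sh status is an assumption, since the trace routes the call through a setup() helper:

    # Reconstruction of the device scan traced above.
    collect_setup_devs() {
        local dev driver
        devs=()
        declare -gA drivers=()
        while read -r _ dev _ _ _ driver _; do
            [[ $dev == *:*:*.* ]] || continue            # drop hugepage/header rows
            [[ $driver == nvme ]] || continue            # 00:03.0 is virtio-pci, skipped
            [[ $PCI_BLOCKED == *"$dev"* ]] && continue   # honour the block list (empty here)
            devs+=("$dev")
            drivers["$dev"]=$driver
        done < <(/home/vagrant/spdk_repo/spdk/scripts/setup.sh status)
    }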
00:03:38.766 23:37:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:38.766 23:37:09 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:03:38.766 23:37:09 -- setup/acl.sh@54 -- # run_test denied denied 00:03:38.766 23:37:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:38.766 23:37:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:38.766 23:37:09 -- common/autotest_common.sh@10 -- # set +x 00:03:38.766 ************************************ 00:03:38.766 START TEST denied 00:03:38.766 ************************************ 00:03:38.766 23:37:09 -- common/autotest_common.sh@1114 -- # denied 00:03:38.766 23:37:09 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:03:38.766 23:37:09 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:03:38.766 23:37:09 -- setup/acl.sh@38 -- # setup output config 00:03:38.766 23:37:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.766 23:37:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:40.153 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:03:40.153 23:37:10 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:03:40.153 23:37:10 -- setup/acl.sh@28 -- # local dev driver 00:03:40.153 23:37:10 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:40.153 23:37:10 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:03:40.153 23:37:10 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:03:40.153 23:37:10 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:40.154 23:37:10 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:40.154 23:37:10 -- setup/acl.sh@41 -- # setup reset 00:03:40.154 23:37:10 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:40.154 23:37:10 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:46.740 00:03:46.740 real 0m7.218s 00:03:46.740 user 0m0.759s 00:03:46.740 sys 0m1.264s 00:03:46.740 23:37:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:46.740 23:37:16 -- common/autotest_common.sh@10 -- # set +x 00:03:46.740 ************************************ 00:03:46.740 END TEST denied 00:03:46.740 ************************************ 00:03:46.740 23:37:16 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:46.740 23:37:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:46.740 23:37:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:46.740 23:37:16 -- common/autotest_common.sh@10 -- # set +x 00:03:46.740 ************************************ 00:03:46.740 START TEST allowed 00:03:46.740 ************************************ 00:03:46.740 23:37:16 -- common/autotest_common.sh@1114 -- # allowed 00:03:46.740 23:37:16 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:03:46.740 23:37:16 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:03:46.740 23:37:16 -- setup/acl.sh@45 -- # setup output config 00:03:46.740 23:37:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.740 23:37:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:47.312 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:47.312 23:37:17 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:03:47.312 23:37:17 -- setup/acl.sh@28 -- # local dev driver 00:03:47.312 23:37:17 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:47.312 23:37:17 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:03:47.312 23:37:17 -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:00:07.0/driver 00:03:47.312 23:37:17 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:47.312 23:37:17 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:47.312 23:37:17 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:47.312 23:37:17 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:08.0 ]] 00:03:47.312 23:37:17 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver 00:03:47.312 23:37:17 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:47.312 23:37:17 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:47.312 23:37:17 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:47.312 23:37:17 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]] 00:03:47.312 23:37:17 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver 00:03:47.312 23:37:17 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:47.312 23:37:17 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:47.312 23:37:17 -- setup/acl.sh@48 -- # setup reset 00:03:47.312 23:37:17 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:47.312 23:37:17 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:48.699 ************************************ 00:03:48.699 END TEST allowed 00:03:48.699 ************************************ 00:03:48.699 00:03:48.699 real 0m2.328s 00:03:48.699 user 0m0.864s 00:03:48.699 sys 0m1.142s 00:03:48.699 23:37:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:48.699 23:37:19 -- common/autotest_common.sh@10 -- # set +x 00:03:48.699 00:03:48.699 real 0m11.455s 00:03:48.699 user 0m2.372s 00:03:48.699 sys 0m3.393s 00:03:48.699 23:37:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:48.699 ************************************ 00:03:48.699 END TEST acl 00:03:48.699 ************************************ 00:03:48.699 23:37:19 -- common/autotest_common.sh@10 -- # set +x 00:03:48.699 23:37:19 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:48.699 23:37:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:48.699 23:37:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:48.699 23:37:19 -- common/autotest_common.sh@10 -- # set +x 00:03:48.699 ************************************ 00:03:48.699 START TEST hugepages 00:03:48.699 ************************************ 00:03:48.699 23:37:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:48.699 * Looking for test storage... 
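The acl suite that just finished exercises both directions of device filtering: denied exports PCI_BLOCKED=' 0000:00:06.0' and greps setup.sh config for the "Skipping denied controller" line, while allowed sets PCI_ALLOWED=0000:00:06.0, expects the "nvme -> uio_pci_generic" rebind for that controller, and then verifies the three untouched controllers are still bound to the nvme driver. The verify helper, reconstructed from the trace:

    # verify, as traced in setup/acl.sh: each BDF must exist in sysfs and its
    # driver symlink must resolve to the nvme driver.
    verify() {
        local dev driver
        for dev in "$@"; do
            [[ -e /sys/bus/pci/devices/$dev ]] || return 1
            driver=$(readlink -f "/sys/bus/pci/devices/$dev/driver")
            [[ ${driver##*/} == nvme ]] || return 1   # e.g. /sys/bus/pci/drivers/nvme
        done
    }
    verify 0000:00:07.0 0000:00:08.0 0000:00:09.0   # the call the allowed test makes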
00:03:48.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:48.699 23:37:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:48.699 23:37:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:48.699 23:37:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:48.699 23:37:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:48.699 23:37:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:48.699 23:37:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:48.699 23:37:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:48.699 23:37:19 -- scripts/common.sh@335 -- # IFS=.-: 00:03:48.699 23:37:19 -- scripts/common.sh@335 -- # read -ra ver1 00:03:48.699 23:37:19 -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.699 23:37:19 -- scripts/common.sh@336 -- # read -ra ver2 00:03:48.699 23:37:19 -- scripts/common.sh@337 -- # local 'op=<' 00:03:48.699 23:37:19 -- scripts/common.sh@339 -- # ver1_l=2 00:03:48.699 23:37:19 -- scripts/common.sh@340 -- # ver2_l=1 00:03:48.699 23:37:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:48.699 23:37:19 -- scripts/common.sh@343 -- # case "$op" in 00:03:48.699 23:37:19 -- scripts/common.sh@344 -- # : 1 00:03:48.699 23:37:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:48.699 23:37:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:48.699 23:37:19 -- scripts/common.sh@364 -- # decimal 1 00:03:48.699 23:37:19 -- scripts/common.sh@352 -- # local d=1 00:03:48.699 23:37:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.699 23:37:19 -- scripts/common.sh@354 -- # echo 1 00:03:48.699 23:37:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:48.699 23:37:19 -- scripts/common.sh@365 -- # decimal 2 00:03:48.699 23:37:19 -- scripts/common.sh@352 -- # local d=2 00:03:48.699 23:37:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.699 23:37:19 -- scripts/common.sh@354 -- # echo 2 00:03:48.699 23:37:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:48.699 23:37:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:48.699 23:37:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:48.699 23:37:19 -- scripts/common.sh@367 -- # return 0 00:03:48.699 23:37:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.699 23:37:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:48.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.699 --rc genhtml_branch_coverage=1 00:03:48.699 --rc genhtml_function_coverage=1 00:03:48.700 --rc genhtml_legend=1 00:03:48.700 --rc geninfo_all_blocks=1 00:03:48.700 --rc geninfo_unexecuted_blocks=1 00:03:48.700 00:03:48.700 ' 00:03:48.700 23:37:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:48.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.700 --rc genhtml_branch_coverage=1 00:03:48.700 --rc genhtml_function_coverage=1 00:03:48.700 --rc genhtml_legend=1 00:03:48.700 --rc geninfo_all_blocks=1 00:03:48.700 --rc geninfo_unexecuted_blocks=1 00:03:48.700 00:03:48.700 ' 00:03:48.700 23:37:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:48.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.700 --rc genhtml_branch_coverage=1 00:03:48.700 --rc genhtml_function_coverage=1 00:03:48.700 --rc genhtml_legend=1 00:03:48.700 --rc geninfo_all_blocks=1 00:03:48.700 --rc geninfo_unexecuted_blocks=1 00:03:48.700 00:03:48.700 ' 00:03:48.700 23:37:19 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:48.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.700 --rc genhtml_branch_coverage=1 00:03:48.700 --rc genhtml_function_coverage=1 00:03:48.700 --rc genhtml_legend=1 00:03:48.700 --rc geninfo_all_blocks=1 00:03:48.700 --rc geninfo_unexecuted_blocks=1 00:03:48.700 00:03:48.700 ' 00:03:48.700 23:37:19 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:48.700 23:37:19 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:48.700 23:37:19 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:48.700 23:37:19 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:48.700 23:37:19 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:48.700 23:37:19 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:48.700 23:37:19 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:48.700 23:37:19 -- setup/common.sh@18 -- # local node= 00:03:48.700 23:37:19 -- setup/common.sh@19 -- # local var val 00:03:48.700 23:37:19 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.700 23:37:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.700 23:37:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.700 23:37:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.700 23:37:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.700 23:37:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 5798724 kB' 'MemAvailable: 7355292 kB' 'Buffers: 2684 kB' 'Cached: 1769420 kB' 'SwapCached: 0 kB' 'Active: 465780 kB' 'Inactive: 1422368 kB' 'Active(anon): 126572 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 117748 kB' 'Mapped: 51016 kB' 'Shmem: 10528 kB' 'KReclaimable: 63840 kB' 'Slab: 163600 kB' 'SReclaimable: 63840 kB' 'SUnreclaim: 99760 kB' 'KernelStack: 6584 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12410000 kB' 'Committed_AS: 320892 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55608 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB' 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- 
setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.700 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.700 23:37:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # continue 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.701 23:37:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.701 23:37:19 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.701 23:37:19 -- setup/common.sh@33 -- # echo 2048 00:03:48.701 23:37:19 -- setup/common.sh@33 -- # return 0 00:03:48.701 23:37:19 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:48.701 23:37:19 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:48.701 23:37:19 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:48.701 23:37:19 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:48.701 23:37:19 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:48.701 23:37:19 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:48.701 23:37:19 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:48.701 23:37:19 -- setup/hugepages.sh@207 -- # get_nodes 00:03:48.701 23:37:19 -- setup/hugepages.sh@27 -- # local node 00:03:48.701 23:37:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.701 23:37:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:48.701 23:37:19 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:48.701 23:37:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.701 23:37:19 -- setup/hugepages.sh@208 -- # clear_hp 00:03:48.701 23:37:19 -- setup/hugepages.sh@37 -- # local node hp 00:03:48.701 23:37:19 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:48.701 23:37:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.701 23:37:19 -- setup/hugepages.sh@41 -- # echo 0 00:03:48.701 23:37:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.701 23:37:19 -- setup/hugepages.sh@41 -- # echo 0 00:03:48.701 23:37:19 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:48.701 23:37:19 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:48.701 23:37:19 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:48.701 23:37:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:48.701 23:37:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:48.701 23:37:19 -- common/autotest_common.sh@10 -- # set +x 00:03:48.701 ************************************ 00:03:48.701 START TEST default_setup 00:03:48.701 ************************************ 00:03:48.701 23:37:19 -- common/autotest_common.sh@1114 -- # default_setup 00:03:48.701 23:37:19 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:48.701 23:37:19 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:48.701 23:37:19 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:48.701 23:37:19 -- setup/hugepages.sh@51 -- # shift 00:03:48.701 23:37:19 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:48.701 23:37:19 -- setup/hugepages.sh@52 -- # local node_ids 00:03:48.701 23:37:19 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.701 23:37:19 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:48.701 23:37:19 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:48.701 23:37:19 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:48.701 23:37:19 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.701 23:37:19 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:48.701 23:37:19 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:48.701 23:37:19 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.701 23:37:19 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.701 23:37:19 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:48.701 23:37:19 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:48.701 23:37:19 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:48.701 23:37:19 -- setup/hugepages.sh@73 -- # return 0 00:03:48.701 23:37:19 -- setup/hugepages.sh@137 -- # setup output 00:03:48.701 23:37:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.701 23:37:19 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:49.647 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:49.909 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:03:49.909 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:49.909 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:03:49.909 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:03:49.909 23:37:20 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:49.909 23:37:20 -- setup/hugepages.sh@89 -- # local node 00:03:49.909 23:37:20 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:49.909 23:37:20 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:49.909 23:37:20 -- setup/hugepages.sh@92 -- # local surp 00:03:49.909 23:37:20 -- setup/hugepages.sh@93 -- # local resv 00:03:49.909 23:37:20 -- setup/hugepages.sh@94 -- # local anon 00:03:49.909 23:37:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:49.909 23:37:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:49.909 23:37:20 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:49.909 23:37:20 -- setup/common.sh@18 -- # local node= 00:03:49.909 23:37:20 -- setup/common.sh@19 -- # local var val 00:03:49.909 23:37:20 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.909 23:37:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.909 23:37:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.909 23:37:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.909 23:37:20 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.909 23:37:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.909 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.909 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7915364 kB' 'MemAvailable: 9471708 kB' 'Buffers: 2684 kB' 'Cached: 1769376 kB' 'SwapCached: 0 kB' 'Active: 468612 kB' 'Inactive: 1422356 kB' 'Active(anon): 129404 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422356 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120532 kB' 'Mapped: 50872 kB' 'Shmem: 10492 kB' 'KReclaimable: 63412 kB' 'Slab: 163364 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 99952 kB' 'KernelStack: 6592 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 333988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB' 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- 
setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # 
[[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 
00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.910 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.910 23:37:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.911 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.911 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.911 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.911 23:37:20 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.911 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.911 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.911 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.911 23:37:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.911 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.911 23:37:20 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:49.911 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.911 23:37:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.911 23:37:20 -- setup/common.sh@33 -- # echo 0 00:03:49.911 23:37:20 -- setup/common.sh@33 -- # return 0 00:03:49.911 23:37:20 -- setup/hugepages.sh@97 -- # anon=0 00:03:49.911 23:37:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:49.911 23:37:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.911 23:37:20 -- setup/common.sh@18 -- # local node= 00:03:49.911 23:37:20 -- setup/common.sh@19 -- # local var val 00:03:49.911 23:37:20 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.911 23:37:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.911 23:37:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.911 23:37:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.911 23:37:20 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.911 23:37:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.911 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.911 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.911 23:37:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7915112 kB' 'MemAvailable: 9471456 kB' 'Buffers: 2684 kB' 'Cached: 1769376 kB' 'SwapCached: 0 kB' 'Active: 468640 kB' 'Inactive: 1422356 kB' 'Active(anon): 129432 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422356 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120504 kB' 'Mapped: 50820 kB' 'Shmem: 10492 kB' 'KReclaimable: 63412 kB' 'Slab: 163372 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 99960 kB' 'KernelStack: 6560 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 333988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55640 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB' 00:03:49.911 23:37:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.911 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.911 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.911 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.911 23:37:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.911 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.911 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.911 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.911 23:37:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.911 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.911 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.911 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.911 23:37:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.911 23:37:20 -- setup/common.sh@32 -- # continue 00:03:49.911 23:37:20 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.175 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.175 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # 
continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.176 23:37:20 -- setup/common.sh@33 -- # echo 0 00:03:50.176 23:37:20 -- setup/common.sh@33 -- # return 0 00:03:50.176 23:37:20 -- setup/hugepages.sh@99 -- # surp=0 00:03:50.176 23:37:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:50.176 23:37:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:50.176 23:37:20 -- setup/common.sh@18 -- # local node= 00:03:50.176 23:37:20 -- setup/common.sh@19 -- # local var val 00:03:50.176 23:37:20 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.176 23:37:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.176 23:37:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.176 23:37:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.176 23:37:20 -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:50.176 23:37:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7915112 kB' 'MemAvailable: 9471456 kB' 'Buffers: 2684 kB' 'Cached: 1769376 kB' 'SwapCached: 0 kB' 'Active: 468524 kB' 'Inactive: 1422356 kB' 'Active(anon): 129316 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422356 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120432 kB' 'Mapped: 50692 kB' 'Shmem: 10492 kB' 'KReclaimable: 63412 kB' 'Slab: 163372 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 99960 kB' 'KernelStack: 6576 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 333988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55624 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB' 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.176 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.176 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.177 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.177 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.177 23:37:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.177 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.177 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.177 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.177 23:37:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.177 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.177 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.177 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.177 23:37:20 -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.177 23:37:20 -- setup/common.sh@32 -- # continue [... xtrace loop elided: Active(anon) through FilePmdMapped are each tested against HugePages_Rsvd and skipped with 'continue' ...] 00:03:50.177 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.177 23:37:20 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:50.177 23:37:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.177 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.177 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.177 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.177 23:37:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.178 23:37:20 -- setup/common.sh@33 -- # echo 0 00:03:50.178 23:37:20 -- setup/common.sh@33 -- # return 0 00:03:50.178 23:37:20 -- setup/hugepages.sh@100 -- # resv=0 00:03:50.178 23:37:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:50.178 nr_hugepages=1024 00:03:50.178 resv_hugepages=0 00:03:50.178 23:37:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:50.178 surplus_hugepages=0 00:03:50.178 23:37:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:50.178 23:37:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:50.178 anon_hugepages=0 00:03:50.178 23:37:20 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.178 23:37:20 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:50.178 23:37:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:50.178 23:37:20 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:50.178 23:37:20 -- setup/common.sh@18 -- # local node= 00:03:50.178 23:37:20 -- setup/common.sh@19 -- # local var val 00:03:50.178 23:37:20 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.178 23:37:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.178 23:37:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.178 23:37:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.178 23:37:20 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.178 23:37:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.178 23:37:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7915112 kB' 'MemAvailable: 9471456 kB' 'Buffers: 2684 kB' 'Cached: 1769376 kB' 'SwapCached: 0 kB' 'Active: 468492 kB' 'Inactive: 1422356 kB' 'Active(anon): 129284 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422356 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120404 kB' 'Mapped: 50692 kB' 'Shmem: 10492 kB' 'KReclaimable: 63412 kB' 'Slab: 163372 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 99960 kB' 'KernelStack: 6560 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 333988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55640 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB' 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.178 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.178 23:37:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.178 
23:37:20 -- setup/common.sh@32 -- # continue [... xtrace loop elided: Active(file) through CmaFree are each tested against HugePages_Total and skipped with 'continue' ...] 00:03:50.179 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.179 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.179 
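[Editor's note: at this point the test holds surp=0 and resv=0 and has echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0; the scan in progress is re-reading HugePages_Total to satisfy the identity asserted at hugepages.sh@107, (( HugePages_Total == nr_hugepages + surp + resv )). In isolation the check amounts to this (values taken from the snapshots in this log):

    nr_hugepages=1024 surp=0 resv=0
    total=$(awk '$1 == "HugePages_Total:" { print $2 }' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch: total=$total"

With HugePages_Total: 1024 at Hugepagesize: 2048 kB this is the full 2 GiB reservation (Hugetlb: 2097152 kB) that default_setup expects.]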
23:37:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.179 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.179 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.179 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.179 23:37:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.179 23:37:20 -- setup/common.sh@33 -- # echo 1024 00:03:50.179 23:37:20 -- setup/common.sh@33 -- # return 0 00:03:50.179 23:37:20 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.179 23:37:20 -- setup/hugepages.sh@112 -- # get_nodes 00:03:50.179 23:37:20 -- setup/hugepages.sh@27 -- # local node 00:03:50.179 23:37:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.179 23:37:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:50.179 23:37:20 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:50.179 23:37:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:50.179 23:37:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.179 23:37:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.179 23:37:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:50.179 23:37:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.179 23:37:20 -- setup/common.sh@18 -- # local node=0 00:03:50.179 23:37:20 -- setup/common.sh@19 -- # local var val 00:03:50.179 23:37:20 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.179 23:37:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.179 23:37:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:50.179 23:37:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:50.179 23:37:20 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.179 23:37:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.179 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.179 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.179 23:37:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7915112 kB' 'MemUsed: 4321988 kB' 'SwapCached: 0 kB' 'Active: 468300 kB' 'Inactive: 1422356 kB' 'Active(anon): 129092 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422356 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'FilePages: 1772060 kB' 'Mapped: 50692 kB' 'AnonPages: 120220 kB' 'Shmem: 10492 kB' 'KernelStack: 6592 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63412 kB' 'Slab: 163380 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 99968 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:50.179 23:37:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.179 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.179 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.179 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.179 23:37:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.179 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.179 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.179 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.179 23:37:20 -- setup/common.sh@32 -- # [[ 
MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.179 23:37:20 -- setup/common.sh@32 -- # continue [... xtrace loop elided: SwapCached through SReclaimable are each tested against HugePages_Surp and skipped with 'continue' ...] 00:03:50.180 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.180 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.180 23:37:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.180 23:37:20 -- setup/common.sh@32 -- # 
continue 00:03:50.180 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.180 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.180 23:37:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.180 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.180 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.180 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.180 23:37:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.180 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.180 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.180 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.180 23:37:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.180 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.180 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.180 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.180 23:37:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.180 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.180 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.180 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.180 23:37:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.180 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.180 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.180 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.180 23:37:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.180 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.180 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.180 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.180 23:37:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.180 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.180 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.180 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.180 23:37:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.180 23:37:20 -- setup/common.sh@32 -- # continue 00:03:50.180 23:37:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.180 23:37:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.180 23:37:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.180 23:37:20 -- setup/common.sh@33 -- # echo 0 00:03:50.180 23:37:20 -- setup/common.sh@33 -- # return 0 00:03:50.180 node0=1024 expecting 1024 00:03:50.180 23:37:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.180 23:37:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.180 23:37:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.180 23:37:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.180 23:37:20 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:50.180 23:37:20 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:50.180 ************************************ 00:03:50.180 END TEST default_setup 00:03:50.180 ************************************ 00:03:50.180 00:03:50.180 real 0m1.381s 00:03:50.180 user 0m0.537s 00:03:50.180 sys 0m0.658s 00:03:50.180 23:37:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:50.180 23:37:20 -- common/autotest_common.sh@10 -- # set +x 00:03:50.180 23:37:20 -- 
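[Editor's note: default_setup ends here after 1.381s of wall time: get_nodes found a single NUMA node, the scan of /sys/devices/system/node/node0/meminfo returned HugePages_Surp 0, and 'node0=1024 expecting 1024' confirms that all 1024 pages landed on node 0. Stripped of the harness bookkeeping (the nodes_test/nodes_sys arrays), the per-node side of the check is essentially this (sketch only):

    # compare each node's hugepage count against an expectation
    expected=1024
    for f in /sys/devices/system/node/node*/meminfo; do
        node=${f%/meminfo}; node=${node##*node}
        got=$(awk '$3 == "HugePages_Total:" { print $4 }' "$f")
        echo "node$node=$got expecting $expected"
    done

The sysfs per-node lines read 'Node 0 HugePages_Total: 1024', hence fields $3/$4 where /proc/meminfo uses $1/$2.]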
setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:50.180 23:37:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:50.180 23:37:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:50.180 23:37:20 -- common/autotest_common.sh@10 -- # set +x 00:03:50.180 ************************************ 00:03:50.180 START TEST per_node_1G_alloc 00:03:50.180 ************************************ 00:03:50.180 23:37:20 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:03:50.180 23:37:20 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:50.180 23:37:20 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:50.180 23:37:20 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:50.180 23:37:20 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:50.180 23:37:20 -- setup/hugepages.sh@51 -- # shift 00:03:50.180 23:37:20 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:50.180 23:37:20 -- setup/hugepages.sh@52 -- # local node_ids 00:03:50.180 23:37:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:50.180 23:37:20 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:50.180 23:37:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:50.180 23:37:20 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:50.180 23:37:20 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.180 23:37:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:50.180 23:37:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:50.180 23:37:20 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.180 23:37:20 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.180 23:37:20 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:50.180 23:37:20 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:50.180 23:37:20 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:50.180 23:37:20 -- setup/hugepages.sh@73 -- # return 0 00:03:50.180 23:37:20 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:50.180 23:37:20 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:50.181 23:37:20 -- setup/hugepages.sh@146 -- # setup output 00:03:50.181 23:37:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.181 23:37:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:50.757 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:50.757 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:50.757 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:50.757 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:50.758 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:50.758 23:37:21 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:50.758 23:37:21 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:50.758 23:37:21 -- setup/hugepages.sh@89 -- # local node 00:03:50.758 23:37:21 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:50.758 23:37:21 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:50.758 23:37:21 -- setup/hugepages.sh@92 -- # local surp 00:03:50.758 23:37:21 -- setup/hugepages.sh@93 -- # local resv 00:03:50.758 23:37:21 -- setup/hugepages.sh@94 -- # local anon 00:03:50.758 23:37:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:50.758 23:37:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:50.758 23:37:21 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:50.758 23:37:21 -- 
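[Editor's note: per_node_1G_alloc asks get_test_nr_hugepages for 1048576 kB (1 GiB) pinned to node 0; at the default 2048 kB hugepage size that works out to nr_hugepages=512, so setup.sh is re-run with NRHUGE=512 HUGENODE=0. The conversion is just this (sketch, using the sizes visible in this log):

    size_kb=1048576    # requested allocation: 1 GiB in kB
    hp_kb=$(awk '$1 == "Hugepagesize:" { print $2 }' /proc/meminfo)    # 2048 on this VM
    echo "nr_hugepages=$(( size_kb / hp_kb ))"    # -> 512

The device lines that follow are setup.sh walking the PCI bus: the virtio disk backing the root filesystem (1af4 1001) is skipped because it has active mounts, while the QEMU NVMe test controllers (1b36 0010) are already bound to uio_pci_generic.]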
setup/common.sh@18 -- # local node= 00:03:50.758 23:37:21 -- setup/common.sh@19 -- # local var val 00:03:50.758 23:37:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.758 23:37:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.758 23:37:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.758 23:37:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.758 23:37:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.758 23:37:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.758 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.758 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.758 23:37:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 8962532 kB' 'MemAvailable: 10518880 kB' 'Buffers: 2684 kB' 'Cached: 1769376 kB' 'SwapCached: 0 kB' 'Active: 468292 kB' 'Inactive: 1422360 kB' 'Active(anon): 129084 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120164 kB' 'Mapped: 50692 kB' 'Shmem: 10492 kB' 'KReclaimable: 63412 kB' 'Slab: 163532 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 100120 kB' 'KernelStack: 6592 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 333988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB' 00:03:50.758 23:37:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.758 23:37:21 -- setup/common.sh@32 -- # continue 00:03:50.758 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.758 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.758 23:37:21 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.758 23:37:21 -- setup/common.sh@32 -- # continue 00:03:50.758 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.758 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.758 23:37:21 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.758 23:37:21 -- setup/common.sh@32 -- # continue 00:03:50.758 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.758 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.758 23:37:21 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.758 23:37:21 -- setup/common.sh@32 -- # continue 00:03:50.758 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.758 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.758 23:37:21 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.758 23:37:21 -- setup/common.sh@32 -- # continue 00:03:50.758 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.758 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.758 23:37:21 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.758 23:37:21 -- setup/common.sh@32 -- # continue 00:03:50.758 23:37:21 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.758 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.758 23:37:21 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.758 23:37:21 -- setup/common.sh@32 -- # continue [... xtrace loop elided: Inactive through NFS_Unstable are each tested against AnonHugePages and skipped with 'continue' ...] 00:03:50.758 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.758 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.758 23:37:21 -- setup/common.sh@32 -- # [[ 
Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.758 23:37:21 -- setup/common.sh@32 -- # continue 00:03:50.759 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.759 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.759 23:37:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.759 23:37:21 -- setup/common.sh@32 -- # continue 00:03:50.759 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.759 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.759 23:37:21 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.759 23:37:21 -- setup/common.sh@32 -- # continue 00:03:50.759 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.759 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.759 23:37:21 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.759 23:37:21 -- setup/common.sh@32 -- # continue 00:03:50.759 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.759 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.759 23:37:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.759 23:37:21 -- setup/common.sh@32 -- # continue 00:03:50.759 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.759 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.759 23:37:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.759 23:37:21 -- setup/common.sh@32 -- # continue 00:03:50.759 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.759 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.759 23:37:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.759 23:37:21 -- setup/common.sh@32 -- # continue 00:03:50.759 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.759 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.759 23:37:21 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.759 23:37:21 -- setup/common.sh@32 -- # continue 00:03:50.759 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.759 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.759 23:37:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.759 23:37:21 -- setup/common.sh@32 -- # continue 00:03:50.759 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.759 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.759 23:37:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.759 23:37:21 -- setup/common.sh@33 -- # echo 0 00:03:50.759 23:37:21 -- setup/common.sh@33 -- # return 0 00:03:50.759 23:37:21 -- setup/hugepages.sh@97 -- # anon=0 00:03:50.759 23:37:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:50.759 23:37:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.759 23:37:21 -- setup/common.sh@18 -- # local node= 00:03:50.759 23:37:21 -- setup/common.sh@19 -- # local var val 00:03:50.759 23:37:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.759 23:37:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.759 23:37:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.759 23:37:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.759 23:37:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.759 23:37:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.759 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.759 23:37:21 -- 
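[Editor's note: verify_nr_hugepages has now gathered anon=0 from the AnonHugePages scan above, having first checked at hugepages.sh@96 that the transparent-hugepage mode ('always [madvise] never' in the trace) is not pinned to [never]; it moves on to HugePages_Surp for the new 512-page layout. The THP guard is equivalent to this (illustration only):

    # fail if THP is forced off system-wide
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)    # e.g. "always [madvise] never"
    [[ $thp != *"[never]"* ]] && echo "THP mode acceptable: $thp"

The snapshot above already reflects the reconfiguration: HugePages_Total: 512, Hugetlb: 1048576 kB, and MemFree roughly 1 GiB higher than in the default_setup snapshots, consistent with half of the earlier 2 GiB reservation being released.]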
00:03:50.759 23:37:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:50.759 23:37:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:50.759 23:37:21 -- setup/common.sh@18 -- # local node=
00:03:50.759 23:37:21 -- setup/common.sh@19 -- # local var val
00:03:50.759 23:37:21 -- setup/common.sh@20 -- # local mem_f mem
00:03:50.759 23:37:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.759 23:37:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.759 23:37:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.759 23:37:21 -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.759 23:37:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.759 23:37:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 8965192 kB' 'MemAvailable: 10521540 kB' 'Buffers: 2684 kB' 'Cached: 1769376 kB' 'SwapCached: 0 kB' 'Active: 468432 kB' 'Inactive: 1422360 kB' 'Active(anon): 129224 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120344 kB' 'Mapped: 50692 kB' 'Shmem: 10492 kB' 'KReclaimable: 63412 kB' 'Slab: 163532 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 100120 kB' 'KernelStack: 6608 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 333988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55640 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB'
00:03:50.759 23:37:21 -- setup/common.sh@32 -- # [... per-key xtrace elided: every key from MemTotal through HugePages_Rsvd is skipped with "continue" ...]
00:03:50.760 23:37:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.760 23:37:21 -- setup/common.sh@33 -- # echo 0
00:03:50.760 23:37:21 -- setup/common.sh@33 -- # return 0
00:03:50.760 23:37:21 -- setup/hugepages.sh@99 -- # surp=0
00:03:50.760 23:37:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:50.760 23:37:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:50.760 23:37:21 -- setup/common.sh@28 -- # [... identical get_meminfo prologue elided ...]
00:03:50.760 23:37:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 8965408 kB' 'MemAvailable: 10521756 kB' 'Buffers: 2684 kB' 'Cached: 1769376 kB' 'SwapCached: 0 kB' 'Active: 468384 kB' 'Inactive: 1422360 kB' 'Active(anon): 129176 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120264 kB' 'Mapped: 50692 kB' 'Shmem: 10492 kB' 'KReclaimable: 63412 kB' 'Slab: 163532 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 100120 kB' 'KernelStack: 6592 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 333988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55640 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB'
00:03:50.761 23:37:21 -- setup/common.sh@32 -- # [... per-key xtrace elided: every key ahead of HugePages_Rsvd is skipped with "continue" ...]
00:03:50.762 23:37:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:50.762 23:37:21 -- setup/common.sh@33 -- # echo 0
00:03:50.762 23:37:21 -- setup/common.sh@33 -- # return 0
00:03:50.762 nr_hugepages=512
00:03:50.762 resv_hugepages=0
00:03:50.762 surplus_hugepages=0
00:03:50.762 23:37:21 -- setup/hugepages.sh@100 -- # resv=0
00:03:50.762 23:37:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:03:50.762 23:37:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:50.762 23:37:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:50.762 anon_hugepages=0
00:03:50.762 23:37:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:50.762 23:37:21 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:50.762 23:37:21 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
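In plain terms, the check at hugepages.sh@107 asserts that the kernel's hugepage pool is exactly what the test requested, with nothing held as surplus or reservation. A hedged sketch of the same accounting, reusing the get_meminfo helper sketched earlier:

    expected=512                              # pages the test asked for
    total=$(get_meminfo HugePages_Total)      # 512 on this host
    surp=$(get_meminfo HugePages_Surp)        # 0
    resv=$(get_meminfo HugePages_Rsvd)        # 0
    # the pool must account for exactly the requested pages
    (( total == expected + surp + resv )) || echo "hugepage pool mismatch" >&2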
00:03:50.762 23:37:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:50.762 23:37:21 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:50.762 23:37:21 -- setup/common.sh@28 -- # [... identical get_meminfo prologue elided ...]
00:03:50.762 23:37:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 8965408 kB' 'MemAvailable: 10521756 kB' 'Buffers: 2684 kB' 'Cached: 1769376 kB' 'SwapCached: 0 kB' 'Active: 468256 kB' 'Inactive: 1422360 kB' 'Active(anon): 129048 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120176 kB' 'Mapped: 50692 kB' 'Shmem: 10492 kB' 'KReclaimable: 63412 kB' 'Slab: 163524 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 100112 kB' 'KernelStack: 6560 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 333988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55640 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB'
00:03:50.763 23:37:21 -- setup/common.sh@32 -- # [... per-key xtrace elided: every key ahead of HugePages_Total is skipped with "continue" ...]
00:03:50.763 23:37:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:50.763 23:37:21 -- setup/common.sh@33 -- # echo 512
00:03:50.763 23:37:21 -- setup/common.sh@33 -- # return 0
00:03:50.763 23:37:21 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:50.763 23:37:21 -- setup/hugepages.sh@112 -- # get_nodes
00:03:50.763 23:37:21 -- setup/hugepages.sh@27 -- # local node
00:03:50.763 23:37:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:50.763 23:37:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:50.763 23:37:21 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:50.763 23:37:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:50.763 23:37:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:50.763 23:37:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
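get_nodes builds one counter per NUMA node by globbing sysfs; this VM exposes a single node, so no_nodes ends up as 1. A rough equivalent of that discovery step (the +([0-9]) extglob from the trace is replaced with a plain glob here):

    declare -a nodes_sys
    for node in /sys/devices/system/node/node[0-9]*; do
        nodes_sys[${node##*node}]=512     # expected pages for this node
    done
    echo "no_nodes=${#nodes_sys[@]}"      # -> no_nodes=1 on this VM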
nodes_test[node] += resv )) 00:03:50.763 23:37:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:50.763 23:37:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.763 23:37:21 -- setup/common.sh@18 -- # local node=0 00:03:50.763 23:37:21 -- setup/common.sh@19 -- # local var val 00:03:50.763 23:37:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.763 23:37:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.763 23:37:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:50.763 23:37:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:50.763 23:37:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.763 23:37:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.763 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.763 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.763 23:37:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 8965408 kB' 'MemUsed: 3271692 kB' 'SwapCached: 0 kB' 'Active: 468176 kB' 'Inactive: 1422360 kB' 'Active(anon): 128968 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'FilePages: 1772060 kB' 'Mapped: 50692 kB' 'AnonPages: 120056 kB' 'Shmem: 10492 kB' 'KernelStack: 6544 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63412 kB' 'Slab: 163524 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 100112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:50.763 23:37:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.763 23:37:21 -- setup/common.sh@32 -- # continue 00:03:50.763 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.763 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.763 23:37:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.763 23:37:21 -- setup/common.sh@32 -- # continue 00:03:50.763 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.763 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.763 23:37:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.763 23:37:21 -- setup/common.sh@32 -- # continue 00:03:50.763 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.763 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.763 23:37:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.763 23:37:21 -- setup/common.sh@32 -- # continue 00:03:50.763 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.763 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.763 23:37:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.763 23:37:21 -- setup/common.sh@32 -- # continue 00:03:50.763 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.763 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.763 23:37:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.763 23:37:21 -- setup/common.sh@32 -- # continue 00:03:50.763 23:37:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.763 23:37:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.763 23:37:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
[xtrace trimmed: the setup/common.sh@31-32 loop tests each remaining /proc/meminfo key against HugePages_Surp and continues past every non-match]
00:03:50.764 23:37:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.764 23:37:21 -- setup/common.sh@33 -- # echo 0
00:03:50.764 23:37:21 -- setup/common.sh@33 -- # return 0
00:03:50.764 node0=512 expecting 512
00:03:50.764 ************************************
00:03:50.764 END TEST per_node_1G_alloc
00:03:50.764 ************************************
00:03:50.764 23:37:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:50.764 23:37:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:50.764 23:37:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:50.764 23:37:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:50.764 23:37:21 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:50.764 23:37:21 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:50.764
00:03:50.764 real 0m0.652s
00:03:50.764 user 0m0.253s
00:03:50.764 sys 0m0.409s
00:03:50.764 23:37:21 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:50.764 23:37:21 -- common/autotest_common.sh@10 -- # set +x
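The key scans trimmed above are setup/common.sh's get_meminfo walking /proc/meminfo one entry at a time, which is why the xtrace shows one [[ ... ]] / continue pair per meminfo key. A minimal standalone sketch of the same lookup, assuming only stock bash and /proc/meminfo (the function name and error handling here are illustrative, not the SPDK originals):

    #!/usr/bin/env bash
    # Read /proc/meminfo line by line, splitting on ':' and whitespace,
    # and print the value of one requested key -- the pattern visible in
    # the setup/common.sh@31-33 trace lines above.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # every non-matching key produces one test/continue pair in the xtrace
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1   # key not found
    }

    get_meminfo_value HugePages_Surp   # prints 0 on this log's VM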
00:03:51.026 23:37:21 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:51.026 23:37:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:51.026 23:37:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:51.026 23:37:21 -- common/autotest_common.sh@10 -- # set +x
00:03:51.026 ************************************
00:03:51.026 START TEST even_2G_alloc
00:03:51.026 ************************************
00:03:51.026 23:37:21 -- common/autotest_common.sh@1114 -- # even_2G_alloc
00:03:51.026 23:37:21 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:51.026 23:37:21 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:51.026 23:37:21 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:51.026 23:37:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:51.026 23:37:21 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:51.026 23:37:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:51.026 23:37:21 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:51.026 23:37:21 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:51.026 23:37:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:51.026 23:37:21 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:51.026 23:37:21 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:51.026 23:37:21 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:51.026 23:37:21 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:51.026 23:37:21 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:51.026 23:37:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:51.026 23:37:21 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:03:51.026 23:37:21 -- setup/hugepages.sh@83 -- # : 0
00:03:51.026 23:37:21 -- setup/hugepages.sh@84 -- # : 0
00:03:51.026 23:37:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:51.026 23:37:21 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:51.026 23:37:21 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:51.026 23:37:21 -- setup/hugepages.sh@153 -- # setup output
00:03:51.026 23:37:21 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:51.026 23:37:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:51.288 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:51.554 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:51.554 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:51.554 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:51.554 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
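The "get_test_nr_hugepages 2097152" trace above derives nr_hugepages=1024 from the requested pool size. A sketch of that arithmetic, assuming both values are in kB (which the numbers in this trace are consistent with; variable names are illustrative, not SPDK's):

    # size -> page-count arithmetic behind the hugepages.sh@49-57 lines above
    size=2097152                                                          # requested pool: 2 GiB in kB
    hugepagesize=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)  # 2048 kB on this VM
    (( nr_hugepages = size / hugepagesize ))                              # 2097152 / 2048 = 1024 pages
    echo "nr_hugepages=$nr_hugepages"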
00:03:51.554 23:37:22 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:51.554 23:37:22 -- setup/hugepages.sh@89 -- # local node
00:03:51.554 23:37:22 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:51.554 23:37:22 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:51.554 23:37:22 -- setup/hugepages.sh@92 -- # local surp
00:03:51.554 23:37:22 -- setup/hugepages.sh@93 -- # local resv
00:03:51.554 23:37:22 -- setup/hugepages.sh@94 -- # local anon
00:03:51.554 23:37:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:51.554 23:37:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:51.554 23:37:22 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:51.554 23:37:22 -- setup/common.sh@18 -- # local node=
00:03:51.554 23:37:22 -- setup/common.sh@19 -- # local var val
00:03:51.554 23:37:22 -- setup/common.sh@20 -- # local mem_f mem
00:03:51.554 23:37:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.554 23:37:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.554 23:37:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.554 23:37:22 -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.554 23:37:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.554 23:37:22 -- setup/common.sh@31 -- # IFS=': '
00:03:51.554 23:37:22 -- setup/common.sh@31 -- # read -r var val _
00:03:51.554 23:37:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7919232 kB' 'MemAvailable: 9475580 kB' 'Buffers: 2684 kB' 'Cached: 1769376 kB' 'SwapCached: 0 kB' 'Active: 468932 kB' 'Inactive: 1422360 kB' 'Active(anon): 129724 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120872 kB' 'Mapped: 50888 kB' 'Shmem: 10492 kB' 'KReclaimable: 63412 kB' 'Slab: 163632 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 100220 kB' 'KernelStack: 6608 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 333988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55704 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB'
[xtrace trimmed: the setup/common.sh@31-32 loop tests each /proc/meminfo key against AnonHugePages and continues past every non-match]
00:03:51.555 23:37:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:51.555 23:37:22 -- setup/common.sh@33 -- # echo 0
00:03:51.555 23:37:22 -- setup/common.sh@33 -- # return 0
00:03:51.555 23:37:22 -- setup/hugepages.sh@97 -- # anon=0
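The hugepages.sh@96 test above ("[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]") is checking the kernel's transparent-hugepage mode: sysfs prints the active mode in brackets, and AnonHugePages is only sampled when that mode is not [never]. A sketch of the same check, reusing the get_meminfo_value helper sketched earlier (interpretation hedged; the sysfs path is standard, the surrounding logic is illustrative):

    # /sys/.../transparent_hugepage/enabled prints e.g. "always [madvise] never",
    # with the active mode in brackets.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP may create anonymous huge pages, so the test samples
        # AnonHugePages (0 kB in this run) before checking the pool
        anon=$(get_meminfo_value AnonHugePages)
    fi
    echo "anon=${anon:-0}"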
00:03:51.555 23:37:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:51.555 23:37:22 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:51.555 23:37:22 -- setup/common.sh@18 -- # local node=
00:03:51.555 23:37:22 -- setup/common.sh@19 -- # local var val
00:03:51.555 23:37:22 -- setup/common.sh@20 -- # local mem_f mem
00:03:51.555 23:37:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.555 23:37:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.555 23:37:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.555 23:37:22 -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.555 23:37:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.555 23:37:22 -- setup/common.sh@31 -- # IFS=': '
00:03:51.555 23:37:22 -- setup/common.sh@31 -- # read -r var val _
00:03:51.555 23:37:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7919484 kB' 'MemAvailable: 9475832 kB' 'Buffers: 2684 kB' 'Cached: 1769376 kB' 'SwapCached: 0 kB' 'Active: 468632 kB' 'Inactive: 1422360 kB' 'Active(anon): 129424 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120540 kB' 'Mapped: 50856 kB' 'Shmem: 10492 kB' 'KReclaimable: 63412 kB' 'Slab: 163636 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 100224 kB' 'KernelStack: 6576 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 333988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB'
[xtrace trimmed: the setup/common.sh@31-32 loop tests each /proc/meminfo key against HugePages_Surp and continues past every non-match]
00:03:51.556 23:37:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:51.556 23:37:22 -- setup/common.sh@33 -- # echo 0
00:03:51.556 23:37:22 -- setup/common.sh@33 -- # return 0
00:03:51.556 23:37:22 -- setup/hugepages.sh@99 -- # surp=0
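Every get_meminfo call above runs "mapfile -t mem" followed by mem=("${mem[@]#Node +([0-9]) }"). That strip only matters for the per-node path: /sys/devices/system/node/node<N>/meminfo prefixes each line with "Node <N> ", unlike /proc/meminfo (in this trace node= is empty, so /proc/meminfo is read and the strip is a no-op). A sketch of that normalization, using node0 as the illustrative case (requires extglob):

    # "Node 0 MemTotal: X kB" -> "MemTotal: X kB", so the same key scan
    # works for both the global and the per-node meminfo files
    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"    # show the first few normalized lines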
00:03:51.556 23:37:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:51.556 23:37:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:51.556 23:37:22 -- setup/common.sh@18 -- # local node=
00:03:51.556 23:37:22 -- setup/common.sh@19 -- # local var val
00:03:51.556 23:37:22 -- setup/common.sh@20 -- # local mem_f mem
00:03:51.556 23:37:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.556 23:37:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.556 23:37:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.556 23:37:22 -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.556 23:37:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.556 23:37:22 -- setup/common.sh@31 -- # IFS=': '
00:03:51.556 23:37:22 -- setup/common.sh@31 -- # read -r var val _
00:03:51.557 23:37:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7919744 kB' 'MemAvailable: 9476092 kB' 'Buffers: 2684 kB' 'Cached: 1769376 kB' 'SwapCached: 0 kB' 'Active: 468232 kB' 'Inactive: 1422360 kB' 'Active(anon): 129024 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120192 kB' 'Mapped: 50676 kB' 'Shmem: 10492 kB' 'KReclaimable: 63412 kB' 'Slab: 163652 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 100240 kB' 'KernelStack: 6580 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 333988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB'
[xtrace trimmed: the setup/common.sh@31-32 loop tests each /proc/meminfo key against HugePages_Rsvd and continues past every non-match]
00:03:51.558 23:37:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:51.558 23:37:22 -- setup/common.sh@33 -- # echo 0
00:03:51.558 23:37:22 -- setup/common.sh@33 -- # return 0
00:03:51.558 23:37:22 -- setup/hugepages.sh@100 -- # resv=0
00:03:51.558 23:37:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:51.558 nr_hugepages=1024
00:03:51.558 23:37:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:51.558 resv_hugepages=0
00:03:51.558 23:37:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:51.558 surplus_hugepages=0
00:03:51.558 anon_hugepages=0
00:03:51.558 23:37:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:51.558 23:37:22 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:51.558 23:37:22 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
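The two arithmetic checks above are the core assertion of verify_nr_hugepages: the page count the test requested must reconcile with what the kernel reports, counting surplus and reserved pages. A standalone restatement with this run's values (variable names follow the echoed log lines; not the SPDK function itself):

    NRHUGE=1024        # pages requested via setup.sh
    nr_hugepages=1024  # pages the test accounted for
    surp=0             # HugePages_Surp from /proc/meminfo
    resv=0             # HugePages_Rsvd from /proc/meminfo
    (( NRHUGE == nr_hugepages + surp + resv )) && echo "hugepage pool verified"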
00:03:51.558 23:37:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:51.558 23:37:22 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:51.558 23:37:22 -- setup/common.sh@18 -- # local node=
00:03:51.558 23:37:22 -- setup/common.sh@19 -- # local var val
00:03:51.558 23:37:22 -- setup/common.sh@20 -- # local mem_f mem
00:03:51.558 23:37:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.558 23:37:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.558 23:37:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.558 23:37:22 -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.558 23:37:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.558 23:37:22 -- setup/common.sh@31 -- # IFS=': '
00:03:51.558 23:37:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7919916 kB' 'MemAvailable: 9476264 kB' 'Buffers: 2684 kB' 'Cached: 1769376 kB' 'SwapCached: 0 kB' 'Active: 468412 kB' 'Inactive: 1422360 kB' 'Active(anon): 129204 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120396 kB' 'Mapped: 50624 kB' 'Shmem: 10492 kB' 'KReclaimable: 63412 kB' 'Slab: 163608 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 100196 kB' 'KernelStack: 6596 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 333988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55624 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB'
00:03:51.558 23:37:22 -- setup/common.sh@31 -- # read -r var val _
[xtrace trimmed: the setup/common.sh@31-32 loop begins testing each /proc/meminfo key against HugePages_Total]
23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.559 23:37:22 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.559 23:37:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.559 23:37:22 -- setup/common.sh@33 -- # echo 1024 00:03:51.559 23:37:22 -- setup/common.sh@33 -- # return 0 00:03:51.559 23:37:22 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:51.559 23:37:22 -- setup/hugepages.sh@112 -- # get_nodes 00:03:51.559 23:37:22 -- setup/hugepages.sh@27 -- # local node 00:03:51.559 23:37:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.559 23:37:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:51.559 23:37:22 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:51.559 23:37:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:51.559 23:37:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:51.559 23:37:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:51.559 23:37:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:51.559 23:37:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.559 23:37:22 -- setup/common.sh@18 -- # local node=0 00:03:51.559 23:37:22 -- setup/common.sh@19 -- # local var val 00:03:51.559 23:37:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:51.559 23:37:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.559 23:37:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:51.559 23:37:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:51.559 23:37:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.559 23:37:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.559 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7919916 kB' 'MemUsed: 4317184 kB' 'SwapCached: 0 kB' 'Active: 468276 kB' 'Inactive: 1422360 kB' 'Active(anon): 129068 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 292 kB' 
'Writeback: 0 kB' 'FilePages: 1772060 kB' 'Mapped: 50624 kB' 'AnonPages: 120236 kB' 'Shmem: 10492 kB' 'KernelStack: 6548 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63412 kB' 'Slab: 163600 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 100188 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 
23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- 
# continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@32 -- # continue 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.560 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.560 23:37:22 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.560 23:37:22 -- setup/common.sh@33 -- # echo 0 00:03:51.560 23:37:22 -- setup/common.sh@33 -- # return 0 00:03:51.560 23:37:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:51.560 23:37:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:51.560 23:37:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:51.560 23:37:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:51.560 23:37:22 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:51.560 node0=1024 expecting 1024 00:03:51.560 ************************************ 00:03:51.560 END TEST even_2G_alloc 00:03:51.560 ************************************ 00:03:51.560 23:37:22 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:51.560 00:03:51.560 real 0m0.676s 00:03:51.560 user 0m0.272s 00:03:51.560 sys 0m0.407s 00:03:51.561 23:37:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:51.561 23:37:22 -- common/autotest_common.sh@10 -- # set +x 00:03:51.561 23:37:22 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:51.561 23:37:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:51.561 23:37:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:51.561 23:37:22 -- common/autotest_common.sh@10 -- # set +x 00:03:51.561 ************************************ 00:03:51.561 START TEST odd_alloc 00:03:51.561 ************************************ 00:03:51.822 23:37:22 -- common/autotest_common.sh@1114 -- # odd_alloc 00:03:51.822 23:37:22 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:51.822 23:37:22 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:51.822 23:37:22 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:51.822 23:37:22 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:51.822 23:37:22 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:51.822 23:37:22 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:51.822 23:37:22 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:51.822 23:37:22 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:51.822 23:37:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:51.822 23:37:22 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:51.822 23:37:22 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:51.822 23:37:22 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:51.822 23:37:22 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:51.822 23:37:22 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:51.822 23:37:22 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:51.822 23:37:22 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:51.822 23:37:22 -- setup/hugepages.sh@83 -- # : 0 00:03:51.822 23:37:22 -- setup/hugepages.sh@84 -- # : 0 00:03:51.822 23:37:22 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:51.822 23:37:22 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:51.822 23:37:22 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:51.822 23:37:22 -- setup/hugepages.sh@160 -- # setup output 00:03:51.822 23:37:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.822 23:37:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:52.084 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:52.084 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:52.084 0000:00:08.0 (1b36 
0010): Already using the uio_pci_generic driver 00:03:52.084 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:52.084 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:52.349 23:37:22 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:52.350 23:37:22 -- setup/hugepages.sh@89 -- # local node 00:03:52.350 23:37:22 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:52.350 23:37:22 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.350 23:37:22 -- setup/hugepages.sh@92 -- # local surp 00:03:52.350 23:37:22 -- setup/hugepages.sh@93 -- # local resv 00:03:52.350 23:37:22 -- setup/hugepages.sh@94 -- # local anon 00:03:52.350 23:37:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.350 23:37:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.350 23:37:22 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.350 23:37:22 -- setup/common.sh@18 -- # local node= 00:03:52.350 23:37:22 -- setup/common.sh@19 -- # local var val 00:03:52.350 23:37:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.350 23:37:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.350 23:37:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.350 23:37:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.350 23:37:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.350 23:37:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7918460 kB' 'MemAvailable: 9474808 kB' 'Buffers: 2684 kB' 'Cached: 1769376 kB' 'SwapCached: 0 kB' 'Active: 468404 kB' 'Inactive: 1422360 kB' 'Active(anon): 129196 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120240 kB' 'Mapped: 50776 kB' 'Shmem: 10492 kB' 'KReclaimable: 63412 kB' 'Slab: 163572 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 100160 kB' 'KernelStack: 6560 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 333988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55688 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB' 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 
23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 23:37:22 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.351 23:37:22 -- 
setup/common.sh@33 -- # echo 0 00:03:52.351 23:37:22 -- setup/common.sh@33 -- # return 0 00:03:52.351 23:37:22 -- setup/hugepages.sh@97 -- # anon=0 00:03:52.351 23:37:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.351 23:37:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.351 23:37:22 -- setup/common.sh@18 -- # local node= 00:03:52.351 23:37:22 -- setup/common.sh@19 -- # local var val 00:03:52.351 23:37:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.351 23:37:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.351 23:37:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.351 23:37:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.351 23:37:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.351 23:37:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7918208 kB' 'MemAvailable: 9474556 kB' 'Buffers: 2684 kB' 'Cached: 1769376 kB' 'SwapCached: 0 kB' 'Active: 468372 kB' 'Inactive: 1422360 kB' 'Active(anon): 129164 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120208 kB' 'Mapped: 50692 kB' 'Shmem: 10492 kB' 'KReclaimable: 63412 kB' 'Slab: 163620 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 100208 kB' 'KernelStack: 6592 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 333988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB' 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- 
setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 
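The surrounding trace is hugepages.sh verifying the pool it just configured: it reads AnonHugePages, HugePages_Surp and HugePages_Rsvd through the same get_meminfo helper, checks that HugePages_Total equals the requested page count plus surplus and reserved pages, and repeats the check per NUMA node ("node0=1024 expecting 1024" above). A condensed sketch of that accounting, reconstructed from the trace and simplified to a single node (the get_meminfo name matches the trace; the wrapper function and its exact structure are assumptions):

verify_hugepages_accounting() {
    # nr_hugepages is what the test asked for: 1024 in even_2G_alloc,
    # 1025 in the odd_alloc test that starts above (HUGEMEM=2049 MiB,
    # i.e. 2098176 kB at 2048 kB per page).
    local nr_hugepages=$1
    local anon surp resv total node0_total node0_surp

    anon=$(get_meminfo AnonHugePages)      # 0 in this run; THP is at [madvise]
    surp=$(get_meminfo HugePages_Surp)     # surplus pages
    resv=$(get_meminfo HugePages_Rsvd)     # reserved pages
    total=$(get_meminfo HugePages_Total)

    # System-wide pool must account for exactly the requested pages.
    (( total == nr_hugepages + surp + resv )) || return 1

    # Per-node check; this VM has a single node, so node0 holds the pool.
    node0_total=$(get_meminfo HugePages_Total 0)
    node0_surp=$(get_meminfo HugePages_Surp 0)
    echo "node0=$node0_total expecting $nr_hugepages"
    (( node0_total == nr_hugepages + node0_surp + resv ))
}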
00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 23:37:22 -- setup/common.sh@33 -- # echo 0 00:03:52.352 23:37:22 -- setup/common.sh@33 -- # return 0 00:03:52.352 23:37:22 -- setup/hugepages.sh@99 -- # surp=0 00:03:52.352 23:37:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.352 23:37:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:52.352 23:37:22 -- setup/common.sh@18 -- # local node= 00:03:52.352 23:37:22 -- setup/common.sh@19 -- # local var val 00:03:52.352 23:37:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.352 23:37:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.352 23:37:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.352 23:37:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.352 23:37:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.352 23:37:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7918460 kB' 'MemAvailable: 9474808 kB' 'Buffers: 2684 kB' 'Cached: 1769376 kB' 'SwapCached: 0 kB' 'Active: 468380 kB' 'Inactive: 1422360 kB' 'Active(anon): 129172 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120228 kB' 'Mapped: 50692 kB' 'Shmem: 10492 kB' 'KReclaimable: 63412 kB' 'Slab: 163616 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 100204 kB' 'KernelStack: 6560 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 333988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB' 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 23:37:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # 
continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 23:37:22 -- setup/common.sh@33 -- # echo 0 00:03:52.354 23:37:22 -- setup/common.sh@33 -- # return 0 00:03:52.354 nr_hugepages=1025 00:03:52.354 resv_hugepages=0 00:03:52.354 surplus_hugepages=0 00:03:52.354 anon_hugepages=0 00:03:52.354 23:37:22 -- setup/hugepages.sh@100 -- # resv=0 00:03:52.354 23:37:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:52.354 23:37:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.354 23:37:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.354 23:37:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.354 23:37:22 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:52.354 23:37:22 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:52.354 23:37:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:52.354 23:37:22 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.354 23:37:22 -- setup/common.sh@18 -- # local node= 00:03:52.354 23:37:22 -- setup/common.sh@19 -- # local var val 00:03:52.354 23:37:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.354 23:37:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.354 23:37:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.354 23:37:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.354 23:37:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.354 23:37:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7918460 kB' 'MemAvailable: 9474808 kB' 'Buffers: 2684 kB' 'Cached: 1769376 kB' 'SwapCached: 0 kB' 'Active: 468088 kB' 'Inactive: 1422360 kB' 'Active(anon): 128880 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119928 kB' 'Mapped: 50692 kB' 'Shmem: 10492 kB' 'KReclaimable: 63412 kB' 'Slab: 163616 kB' 'SReclaimable: 63412 kB' 
'SUnreclaim: 100204 kB' 'KernelStack: 6544 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 333988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB' 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 23:37:22 -- setup/common.sh@33 -- # echo 1025 00:03:52.355 23:37:22 -- setup/common.sh@33 -- # return 0 00:03:52.355 23:37:22 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:52.355 23:37:22 -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.355 23:37:22 -- setup/hugepages.sh@27 -- # local node 00:03:52.355 23:37:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.355 23:37:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:52.355 23:37:22 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:52.355 23:37:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.355 23:37:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.355 23:37:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.355 23:37:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.355 23:37:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.355 23:37:22 -- setup/common.sh@18 -- # local node=0 00:03:52.355 23:37:22 -- setup/common.sh@19 -- # local var val 00:03:52.355 23:37:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.355 23:37:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.355 23:37:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.355 23:37:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.355 23:37:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.355 23:37:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7918460 kB' 'MemUsed: 4318640 kB' 'SwapCached: 0 kB' 'Active: 468348 kB' 'Inactive: 1422360 kB' 'Active(anon): 129140 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'FilePages: 1772060 kB' 'Mapped: 50692 kB' 'AnonPages: 120188 kB' 'Shmem: 10492 kB' 'KernelStack: 6612 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63412 kB' 'Slab: 163616 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 100204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 23:37:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # 
continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # continue 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.356 23:37:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.356 23:37:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.356 23:37:22 -- setup/common.sh@33 -- # echo 0 00:03:52.356 23:37:22 -- setup/common.sh@33 -- # return 0 00:03:52.356 23:37:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.356 23:37:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.356 23:37:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.356 23:37:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.356 node0=1025 expecting 1025 00:03:52.356 23:37:22 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:52.356 23:37:22 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:52.356 00:03:52.356 real 0m0.662s 00:03:52.356 user 0m0.266s 00:03:52.356 sys 0m0.401s 00:03:52.356 23:37:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:52.356 23:37:22 -- common/autotest_common.sh@10 -- # set +x 00:03:52.356 ************************************ 00:03:52.356 END TEST odd_alloc 00:03:52.356 ************************************ 00:03:52.356 23:37:22 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:52.356 23:37:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:52.356 23:37:23 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:03:52.356 23:37:23 -- common/autotest_common.sh@10 -- # set +x 00:03:52.356 ************************************ 00:03:52.356 START TEST custom_alloc 00:03:52.356 ************************************ 00:03:52.356 23:37:23 -- common/autotest_common.sh@1114 -- # custom_alloc 00:03:52.356 23:37:23 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:52.356 23:37:23 -- setup/hugepages.sh@169 -- # local node 00:03:52.356 23:37:23 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:52.356 23:37:23 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:52.356 23:37:23 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:52.356 23:37:23 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:52.356 23:37:23 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:52.356 23:37:23 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:52.356 23:37:23 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.356 23:37:23 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:52.356 23:37:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:52.356 23:37:23 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:52.356 23:37:23 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.356 23:37:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:52.356 23:37:23 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:52.356 23:37:23 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.356 23:37:23 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.356 23:37:23 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:52.356 23:37:23 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:52.356 23:37:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.356 23:37:23 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:52.356 23:37:23 -- setup/hugepages.sh@83 -- # : 0 00:03:52.356 23:37:23 -- setup/hugepages.sh@84 -- # : 0 00:03:52.356 23:37:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.356 23:37:23 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:52.356 23:37:23 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:52.356 23:37:23 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:52.356 23:37:23 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:52.356 23:37:23 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:52.356 23:37:23 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:52.356 23:37:23 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:52.356 23:37:23 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.356 23:37:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:52.356 23:37:23 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:52.356 23:37:23 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.356 23:37:23 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.356 23:37:23 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:52.357 23:37:23 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:52.357 23:37:23 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:52.357 23:37:23 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:52.357 23:37:23 -- setup/hugepages.sh@78 -- # return 0 00:03:52.357 23:37:23 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:52.357 23:37:23 -- setup/hugepages.sh@187 -- # setup output 00:03:52.357 23:37:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.357 23:37:23 -- setup/common.sh@10 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:52.936 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:52.936 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:52.936 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:52.936 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:52.936 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:52.936 23:37:23 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:52.936 23:37:23 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:52.936 23:37:23 -- setup/hugepages.sh@89 -- # local node 00:03:52.936 23:37:23 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:52.936 23:37:23 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.936 23:37:23 -- setup/hugepages.sh@92 -- # local surp 00:03:52.936 23:37:23 -- setup/hugepages.sh@93 -- # local resv 00:03:52.936 23:37:23 -- setup/hugepages.sh@94 -- # local anon 00:03:52.936 23:37:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.936 23:37:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.936 23:37:23 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.936 23:37:23 -- setup/common.sh@18 -- # local node= 00:03:52.936 23:37:23 -- setup/common.sh@19 -- # local var val 00:03:52.936 23:37:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.936 23:37:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.936 23:37:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.936 23:37:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.936 23:37:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.936 23:37:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.936 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.936 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.936 23:37:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 8965064 kB' 'MemAvailable: 10521416 kB' 'Buffers: 2684 kB' 'Cached: 1769380 kB' 'SwapCached: 0 kB' 'Active: 468668 kB' 'Inactive: 1422364 kB' 'Active(anon): 129460 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120608 kB' 'Mapped: 50756 kB' 'Shmem: 10492 kB' 'KReclaimable: 63412 kB' 'Slab: 163508 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 100096 kB' 'KernelStack: 6568 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 333988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55640 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB' 00:03:52.936 23:37:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.936 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.936 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.936 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 
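(Editorial sketch, not part of the traced scripts.) The custom_alloc pass above asks for 1048576 kB of hugepages and, with the 2048 kB Hugepagesize reported in meminfo, arrives at nr_hugepages=512 pinned to node 0 (HUGENODE='nodes_hp[0]=512') before scripts/setup.sh is re-run. A minimal standalone sketch of that arithmetic and the follow-up check, assuming GNU grep; the real logic lives in test/setup/hugepages.sh and is more involved:

#!/usr/bin/env bash
# Sketch only: reproduce the 1048576 kB -> 512-page request seen in the trace
# and verify it the way verify_nr_hugepages does, by reading /proc/meminfo.
set -euo pipefail

size_kb=1048576                                                    # requested total, from the trace
hugepage_kb=$(grep -oP 'Hugepagesize:\s+\K[0-9]+' /proc/meminfo)   # 2048 kB on this VM
nr_hugepages=$(( size_kb / hugepage_kb ))                          # 1048576 / 2048 = 512

actual=$(grep -oP 'HugePages_Total:\s+\K[0-9]+' /proc/meminfo)
if (( actual == nr_hugepages )); then
    echo "HugePages_Total=${actual} matches the expected ${nr_hugepages}"
else
    echo "expected ${nr_hugepages} hugepages, found ${actual}" >&2
    exit 1
fi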
00:03:52.936 23:37:23 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.936 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.936 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.936 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.936 23:37:23 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': 
' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 
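(Editorial sketch, not part of the traced scripts.) The long runs of [[ key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue in this trace are get_meminfo scanning every meminfo field until it reaches the one it was asked for. A rough, hypothetical equivalent of that lookup is below; the real helper in test/setup/common.sh uses mapfile plus extglob stripping of the per-node prefix, so treat this only as an illustration of the scan:

get_meminfo_sketch() {
    # get_meminfo_sketch <field> [node]: print the numeric value of <field>
    # from /proc/meminfo, or from the per-node meminfo when a node is given.
    local get=$1 node=${2-}
    local mem_f=/proc/meminfo
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    # Per-node files prefix every line with "Node <n> "; drop it, then match the key.
    sed -E 's/^Node [0-9]+ //' "$mem_f" \
        | awk -v key="$get" -F': +' '$1 == key { print $2 + 0; exit }'
}
# e.g. get_meminfo_sketch HugePages_Surp 0   # -> 0, as echoed for node 0 above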
00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.937 23:37:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.937 23:37:23 -- setup/common.sh@33 -- # echo 0 00:03:52.937 23:37:23 -- setup/common.sh@33 -- # return 0 00:03:52.937 23:37:23 -- setup/hugepages.sh@97 -- # anon=0 00:03:52.937 23:37:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.937 23:37:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.937 23:37:23 -- setup/common.sh@18 -- # local node= 00:03:52.937 23:37:23 -- setup/common.sh@19 -- # local var val 00:03:52.937 23:37:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.937 23:37:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.937 23:37:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.937 23:37:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.937 23:37:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.937 23:37:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.937 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 8965064 kB' 'MemAvailable: 10521416 kB' 'Buffers: 2684 kB' 'Cached: 1769380 kB' 'SwapCached: 0 kB' 'Active: 468292 kB' 'Inactive: 1422364 kB' 'Active(anon): 129084 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120232 kB' 'Mapped: 50756 kB' 'Shmem: 10492 kB' 'KReclaimable: 63412 kB' 'Slab: 163516 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 100104 kB' 'KernelStack: 6600 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 333988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55624 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB' 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 
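The loop being traced here is setup/common.sh's field scanner: IFS=': ' splits each /proc/meminfo line into a key and a value, every key that fails the match at @32 falls through to continue, and the first hit echoes its value at @33 and returns. A condensed, self-contained sketch of the same pattern (simplified for illustration; get_field is a stand-in name, and the real helper first slurps the file into an array and also supports per-node files):

#!/usr/bin/env bash
# Scan /proc/meminfo for a single field, continue-until-match style.
get_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip KernelStack, PageTables, ...
        echo "$val"                        # a kB figure, or a bare page count
        return 0
    done < /proc/meminfo
    return 1
}
get_field AnonHugePages   # on this run it would print 0

Splitting on ': ' rather than plain whitespace is what keeps keys such as Active(anon) intact as a single token.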
00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.938 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.938 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.939 23:37:23 -- setup/common.sh@33 -- # echo 0 00:03:52.939 23:37:23 -- setup/common.sh@33 -- # return 0 00:03:52.939 23:37:23 -- setup/hugepages.sh@99 -- # surp=0 00:03:52.939 23:37:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.939 23:37:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:52.939 23:37:23 -- setup/common.sh@18 -- # local node= 00:03:52.939 23:37:23 -- setup/common.sh@19 -- # local var val 00:03:52.939 23:37:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.939 23:37:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.939 23:37:23 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:52.939 23:37:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.939 23:37:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.939 23:37:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 8965064 kB' 'MemAvailable: 10521416 kB' 'Buffers: 2684 kB' 'Cached: 1769380 kB' 'SwapCached: 0 kB' 'Active: 468304 kB' 'Inactive: 1422364 kB' 'Active(anon): 129096 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120264 kB' 'Mapped: 50752 kB' 'Shmem: 10492 kB' 'KReclaimable: 63412 kB' 'Slab: 163512 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 100100 kB' 'KernelStack: 6588 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 333988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB' 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.939 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.939 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 
-- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- 
setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.940 23:37:23 -- setup/common.sh@33 -- # echo 0 00:03:52.940 23:37:23 -- setup/common.sh@33 -- # return 0 00:03:52.940 nr_hugepages=512 00:03:52.940 resv_hugepages=0 00:03:52.940 surplus_hugepages=0 00:03:52.940 anon_hugepages=0 00:03:52.940 23:37:23 -- setup/hugepages.sh@100 -- # resv=0 00:03:52.940 23:37:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:52.940 23:37:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.940 23:37:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.940 23:37:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.940 23:37:23 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:52.940 23:37:23 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:52.940 23:37:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:52.940 23:37:23 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.940 23:37:23 -- setup/common.sh@18 -- # local node= 00:03:52.940 23:37:23 -- setup/common.sh@19 -- # local var val 00:03:52.940 23:37:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.940 23:37:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.940 23:37:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.940 23:37:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.940 23:37:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.940 23:37:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 8965064 kB' 'MemAvailable: 10521416 kB' 'Buffers: 2684 kB' 'Cached: 1769380 kB' 'SwapCached: 0 kB' 'Active: 468524 kB' 'Inactive: 1422364 kB' 'Active(anon): 129316 kB' 
'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120448 kB' 'Mapped: 50692 kB' 'Shmem: 10492 kB' 'KReclaimable: 63412 kB' 'Slab: 163516 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 100104 kB' 'KernelStack: 6576 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 333988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB' 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.940 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.940 23:37:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- 
setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 
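For context on what this scan feeds: surp and resv were read out just above (both 0 on this run), and once HugePages_Total comes back, hugepages.sh@107-110 asserts that the pool adds up. The same bookkeeping identity restated as a standalone check (the hp helper and the hard-coded nr_hugepages=512 are illustrative; the suite derives the target from its own setup):

#!/usr/bin/env bash
# HugePages_Total must equal the configured pool plus surplus and
# reserved pages: here, 512 == nr_hugepages + surp + resv.
hp() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }
nr_hugepages=512            # what this test run configured
surp=$(hp HugePages_Surp)   # 0 in the trace
resv=$(hp HugePages_Rsvd)   # 0 in the trace
(( $(hp HugePages_Total) == nr_hugepages + surp + resv )) &&
    echo 'hugepage pool consistent' || echo 'hugepage pool mismatch'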
00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.941 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.941 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.942 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.942 23:37:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.942 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.942 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.942 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.942 23:37:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.942 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.942 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.942 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.942 23:37:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.942 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.942 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.942 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.942 23:37:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.942 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.942 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.942 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.942 23:37:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.942 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.942 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.942 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.942 23:37:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.942 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.942 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.942 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.942 23:37:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.942 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.942 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.942 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.942 23:37:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.942 23:37:23 -- setup/common.sh@32 -- # continue 00:03:52.942 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.942 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.942 23:37:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.226 23:37:23 -- 
setup/common.sh@32 -- # continue 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.226 23:37:23 -- setup/common.sh@33 -- # echo 512 00:03:53.226 23:37:23 -- setup/common.sh@33 -- # return 0 00:03:53.226 23:37:23 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:53.226 23:37:23 -- setup/hugepages.sh@112 -- # get_nodes 00:03:53.226 23:37:23 -- setup/hugepages.sh@27 -- # local node 00:03:53.226 23:37:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.226 23:37:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:53.226 23:37:23 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:53.226 23:37:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:53.226 23:37:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:53.226 23:37:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:53.226 23:37:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:53.226 23:37:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.226 23:37:23 -- setup/common.sh@18 -- # local node=0 00:03:53.226 23:37:23 -- setup/common.sh@19 -- # local var val 00:03:53.226 23:37:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:53.226 23:37:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.226 23:37:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:53.226 23:37:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:53.226 23:37:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.226 23:37:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.226 23:37:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 8965064 kB' 'MemUsed: 3272036 kB' 'SwapCached: 0 kB' 'Active: 468276 kB' 'Inactive: 1422364 kB' 'Active(anon): 129068 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'FilePages: 1772064 kB' 'Mapped: 50692 kB' 'AnonPages: 120200 kB' 'Shmem: 10492 kB' 'KernelStack: 6576 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63412 kB' 'Slab: 163512 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 100100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # continue 
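The trace has now switched to the per-node pass: get_nodes globs /sys/devices/system/node/node+([0-9]), records the expected count for each node, and get_meminfo reruns against node0's meminfo file, whose lines carry a "Node 0 " prefix that the substitution at common.sh@29 strips before parsing. A compact sketch of that sequence (extglob patterns as in the trace; reading HugePages_Free and the 512 expectation mirror this single-node run):

#!/usr/bin/env bash
shopt -s extglob nullglob    # +([0-9]) in the glob and in the prefix strip
declare -a nodes_test
for node in /sys/devices/system/node/node+([0-9]); do
    n=${node##*node}                      # /sys/.../node0 -> 0
    mapfile -t mem < "$node/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node 0 " column
    nodes_test[n]=$(printf '%s\n' "${mem[@]}" |
        awk '$1 == "HugePages_Free:" { print $2 }')
    echo "node$n=${nodes_test[n]} expecting 512"
done

On this box that loop would print the same node0=512 expecting 512 line that appears further down in the log.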
00:03:53.226 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.226 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.226 23:37:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.227 23:37:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # continue 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.227 23:37:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.227 23:37:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.227 23:37:23 -- setup/common.sh@33 -- # echo 0 00:03:53.227 23:37:23 -- setup/common.sh@33 -- # return 0 00:03:53.227 23:37:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:53.227 23:37:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:53.227 23:37:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:53.227 23:37:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:53.227 23:37:23 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:53.227 node0=512 expecting 512 00:03:53.227 23:37:23 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:53.227 00:03:53.227 real 0m0.660s 00:03:53.227 user 0m0.277s 00:03:53.227 sys 0m0.394s 00:03:53.227 23:37:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:53.227 23:37:23 -- 
common/autotest_common.sh@10 -- # set +x 00:03:53.227 ************************************ 00:03:53.227 END TEST custom_alloc 00:03:53.227 ************************************ 00:03:53.227 23:37:23 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:53.227 23:37:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:53.227 23:37:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:53.227 23:37:23 -- common/autotest_common.sh@10 -- # set +x 00:03:53.227 ************************************ 00:03:53.227 START TEST no_shrink_alloc 00:03:53.227 ************************************ 00:03:53.227 23:37:23 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:03:53.227 23:37:23 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:53.227 23:37:23 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:53.227 23:37:23 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:53.227 23:37:23 -- setup/hugepages.sh@51 -- # shift 00:03:53.227 23:37:23 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:53.227 23:37:23 -- setup/hugepages.sh@52 -- # local node_ids 00:03:53.227 23:37:23 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:53.227 23:37:23 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:53.227 23:37:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:53.227 23:37:23 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:53.227 23:37:23 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:53.227 23:37:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:53.227 23:37:23 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:53.227 23:37:23 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:53.227 23:37:23 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:53.227 23:37:23 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:53.227 23:37:23 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:53.227 23:37:23 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:53.227 23:37:23 -- setup/hugepages.sh@73 -- # return 0 00:03:53.227 23:37:23 -- setup/hugepages.sh@198 -- # setup output 00:03:53.227 23:37:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.227 23:37:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:53.804 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:53.804 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:53.804 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:53.804 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:53.804 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:53.804 23:37:24 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:53.804 23:37:24 -- setup/hugepages.sh@89 -- # local node 00:03:53.804 23:37:24 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:53.804 23:37:24 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:53.804 23:37:24 -- setup/hugepages.sh@92 -- # local surp 00:03:53.804 23:37:24 -- setup/hugepages.sh@93 -- # local resv 00:03:53.804 23:37:24 -- setup/hugepages.sh@94 -- # local anon 00:03:53.804 23:37:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:53.804 23:37:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:53.804 23:37:24 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:53.804 23:37:24 -- setup/common.sh@18 -- # local node= 00:03:53.804 23:37:24 -- 
setup/common.sh@19 -- # local var val 00:03:53.804 23:37:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:53.804 23:37:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.804 23:37:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.804 23:37:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.804 23:37:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.804 23:37:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.804 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.804 23:37:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7927440 kB' 'MemAvailable: 9483784 kB' 'Buffers: 2684 kB' 'Cached: 1769380 kB' 'SwapCached: 0 kB' 'Active: 466132 kB' 'Inactive: 1422364 kB' 'Active(anon): 126924 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 118100 kB' 'Mapped: 49896 kB' 'Shmem: 10492 kB' 'KReclaimable: 63396 kB' 'Slab: 163024 kB' 'SReclaimable: 63396 kB' 'SUnreclaim: 99628 kB' 'KernelStack: 6588 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 314980 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB' 00:03:53.804 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.804 23:37:24 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.804 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.804 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.804 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 
-- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 
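The scan running through these traces is setup/common.sh walking /proc/meminfo one "key: value" field at a time (IFS=': ', read -r var val _), hitting "continue" for every field that is not the requested one, here AnonHugePages, and finally echoing the matching value. A minimal standalone lookup in the same spirit, assuming a hypothetical helper name meminfo_value rather than the repo's own get_meminfo:

# Sketch only: simplified stand-in for the get_meminfo loop seen in these traces.
# meminfo_value <key> [node] prints the value column for <key>.
meminfo_value() {
  local get=$1 node=${2:-}
  local mem_f=/proc/meminfo
  # When a node id is given, read the per-node copy instead (mirrors the
  # /sys/devices/system/node/... existence check visible in the log).
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  local var val _
  # Per-node files prefix every line with "Node <id> "; strip it so the key
  # always lands in $var, then split on ':' and whitespace like the traces show.
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"
      return 0
    fi
  done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
  return 1
}

# On this host, meminfo_value HugePages_Surp would be expected to print 0,
# matching the "echo 0" the traces report at the end of each scan.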
23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.805 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.805 23:37:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.805 23:37:24 -- setup/common.sh@33 -- # echo 0 00:03:53.805 23:37:24 -- setup/common.sh@33 -- # return 0 00:03:53.805 23:37:24 -- setup/hugepages.sh@97 -- # anon=0 00:03:53.805 23:37:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:53.805 23:37:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.805 23:37:24 -- setup/common.sh@18 -- # local node= 00:03:53.805 23:37:24 -- setup/common.sh@19 -- # local var val 00:03:53.805 23:37:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:53.805 23:37:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.805 23:37:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.805 23:37:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.805 23:37:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.806 23:37:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7927440 kB' 'MemAvailable: 9483784 kB' 'Buffers: 2684 kB' 'Cached: 1769380 kB' 'SwapCached: 0 kB' 'Active: 465664 kB' 'Inactive: 1422364 kB' 'Active(anon): 126456 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 117564 kB' 'Mapped: 49844 kB' 'Shmem: 10492 kB' 'KReclaimable: 63396 kB' 'Slab: 163024 kB' 'SReclaimable: 63396 kB' 'SUnreclaim: 99628 kB' 'KernelStack: 6480 kB' 'PageTables: 3664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 314980 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB' 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 
00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 
-- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.806 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.806 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.807 23:37:24 -- setup/common.sh@33 -- # echo 0 00:03:53.807 23:37:24 -- setup/common.sh@33 -- # return 0 00:03:53.807 23:37:24 -- setup/hugepages.sh@99 -- # surp=0 00:03:53.807 23:37:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:53.807 23:37:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:53.807 23:37:24 -- setup/common.sh@18 -- # local node= 00:03:53.807 23:37:24 -- setup/common.sh@19 -- # local var val 00:03:53.807 23:37:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:53.807 23:37:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.807 23:37:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.807 23:37:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.807 23:37:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.807 23:37:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7927440 kB' 'MemAvailable: 9483784 kB' 'Buffers: 2684 kB' 'Cached: 1769380 kB' 'SwapCached: 0 kB' 'Active: 465680 kB' 'Inactive: 1422364 kB' 'Active(anon): 126472 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 117600 kB' 'Mapped: 49844 kB' 'Shmem: 10492 kB' 'KReclaimable: 63396 kB' 'Slab: 163024 kB' 'SReclaimable: 63396 kB' 'SUnreclaim: 99628 kB' 'KernelStack: 6496 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 314980 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB' 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 
00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.807 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.807 23:37:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 
-- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # continue 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.808 
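Earlier in this pass (setup/hugepages.sh@96) the script first tested /sys/kernel/mm/transparent_hugepage/enabled against *[never]* before reading AnonHugePages; since THP is not disabled on this host, the anon figure is fetched and comes back 0. A rough, standalone equivalent of that guard (paths and variable names are assumptions, not the test's own code):

# Sketch, not the repo's hugepages.sh: only count anonymous huge pages when
# transparent hugepages are not switched off.
thp_setting=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null || echo '[never]')
if [[ $thp_setting != *"[never]"* ]]; then
  # "always" or "madvise" is bracketed, so AnonHugePages may be non-zero.
  anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
  anon_kb=0
fi
echo "anon_hugepages=${anon_kb}"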
23:37:24 -- setup/common.sh@32 -- # continue
00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': '
00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _
00:03:53.808 23:37:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.808 23:37:24 -- setup/common.sh@33 -- # echo 0
00:03:53.808 23:37:24 -- setup/common.sh@33 -- # return 0
00:03:53.808 23:37:24 -- setup/hugepages.sh@100 -- # resv=0
00:03:53.808 nr_hugepages=1024
00:03:53.808 resv_hugepages=0
00:03:53.808 surplus_hugepages=0
00:03:53.808 anon_hugepages=0
00:03:53.808 23:37:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:53.808 23:37:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:53.808 23:37:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:53.808 23:37:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:53.808 23:37:24 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:53.808 23:37:24 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:53.808 23:37:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:53.808 23:37:24 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:53.808 23:37:24 -- setup/common.sh@18 -- # local node=
00:03:53.808 23:37:24 -- setup/common.sh@19 -- # local var val
00:03:53.808 23:37:24 -- setup/common.sh@20 -- # local mem_f mem
00:03:53.808 23:37:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.808 23:37:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.808 23:37:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.808 23:37:24 -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.808 23:37:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.808 23:37:24 -- setup/common.sh@31 -- # IFS=': '
00:03:53.808 23:37:24 -- setup/common.sh@31 -- # read -r var val _
00:03:53.808 23:37:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7929204 kB' 'MemAvailable: 9485548 kB' 'Buffers: 2684 kB' 'Cached: 1769380 kB' 'SwapCached: 0 kB' 'Active: 465692 kB' 'Inactive: 1422364 kB' 'Active(anon): 126484 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 117572 kB' 'Mapped: 49844 kB' 'Shmem: 10492 kB' 'KReclaimable: 63396 kB' 'Slab: 163024 kB' 'SReclaimable: 63396 kB' 'SUnreclaim: 99628 kB' 'KernelStack: 6480 kB' 'PageTables: 3664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 314980 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB'
00:03:53.809 [xtrace condensed: the IFS=': '/read/compare/continue loop stepped over every key from MemTotal through Unaccepted]
00:03:53.810 23:37:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.810 23:37:24 -- setup/common.sh@33 -- # echo 1024
00:03:53.810 23:37:24 -- setup/common.sh@33 -- # return 0
00:03:53.810 23:37:24 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:53.810 23:37:24 -- setup/hugepages.sh@112 -- # get_nodes
00:03:53.810 23:37:24 -- setup/hugepages.sh@27 -- # local node
00:03:53.810 23:37:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:53.810 23:37:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:53.810 23:37:24 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:53.810 23:37:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:53.810 23:37:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:53.810 23:37:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
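The trace above is the get_meminfo helper from setup/common.sh at work: pick /proc/meminfo (or a per-node sysfs meminfo file when a node id is passed), strip any "Node <n>" prefix, then read "Key: value" pairs and continue past every key until the requested one matches. A minimal standalone sketch of that lookup, reconstructed from the xtrace rather than copied from the SPDK source (names follow the trace; treat this as an illustration):

    #!/usr/bin/env bash
    # Sketch of the traced meminfo lookup; reconstructed from the xtrace.
    shopt -s extglob

    get_meminfo() { # get_meminfo <Key> [node] -> prints the value column
        local get=$1 node=${2:-}
        local var val _ mem_f=/proc/meminfo mem
        # With a node id, prefer the per-node sysfs meminfo file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total    # -> 1024 on this box
    get_meminfo HugePages_Surp 0   # surplus hugepages on NUMA node 0

On this machine the two calls at the end would print 1024 and 0, matching the echo 1024 / echo 0 lines in the trace.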
00:03:53.810 23:37:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:53.810 23:37:24 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:53.810 23:37:24 -- setup/common.sh@18 -- # local node=0
00:03:53.810 23:37:24 -- setup/common.sh@19 -- # local var val
00:03:53.810 23:37:24 -- setup/common.sh@20 -- # local mem_f mem
00:03:53.810 23:37:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.810 23:37:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:53.810 23:37:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:53.810 23:37:24 -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.810 23:37:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.810 23:37:24 -- setup/common.sh@31 -- # IFS=': '
00:03:53.810 23:37:24 -- setup/common.sh@31 -- # read -r var val _
00:03:53.810 23:37:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7929984 kB' 'MemUsed: 4307116 kB' 'SwapCached: 0 kB' 'Active: 465624 kB' 'Inactive: 1422364 kB' 'Active(anon): 126416 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'FilePages: 1772064 kB' 'Mapped: 49844 kB' 'AnonPages: 117532 kB' 'Shmem: 10492 kB' 'KernelStack: 6464 kB' 'PageTables: 3616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63396 kB' 'Slab: 163024 kB' 'SReclaimable: 63396 kB' 'SUnreclaim: 99628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:53.810 [xtrace condensed: read/compare/continue skipped every node0 key from MemTotal through HugePages_Free]
00:03:53.811 23:37:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.811 23:37:24 -- setup/common.sh@33 -- # echo 0
00:03:53.811 23:37:24 -- setup/common.sh@33 -- # return 0
00:03:53.811 23:37:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:53.811 23:37:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:53.811 23:37:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:53.811 23:37:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:53.811 node0=1024 expecting 1024
00:03:53.811 23:37:24 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:53.811 23:37:24 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:53.811 23:37:24 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:53.811 23:37:24 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:53.811 23:37:24 -- setup/hugepages.sh@202 -- # setup output
00:03:53.811 23:37:24 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:53.811 23:37:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:54.388 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:54.388 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:54.388 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:54.388 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:54.388 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:54.388 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:54.388 23:37:24 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:54.388 23:37:24 -- setup/hugepages.sh@89 -- # local node
00:03:54.388 23:37:24 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:54.388 23:37:24 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:54.388 23:37:24 -- setup/hugepages.sh@92 -- # local surp
00:03:54.388 23:37:24 -- setup/hugepages.sh@93 -- # local resv
00:03:54.388 23:37:24 -- setup/hugepages.sh@94 -- # local anon
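With the per-node check green (node0=1024 expecting 1024), scripts/setup.sh was re-invoked with NRHUGE=512 and CLEAR_HUGE=no, and its INFO line shows why nothing changed: the node0 pool already holds 1024 pages, which satisfies a request for 512. A sketch of the equivalent idempotent check against the standard per-node sysfs knob (the sysfs path is the stock kernel interface; the variable names are illustrative, not from setup.sh):

    #!/usr/bin/env bash
    # Idempotent hugepage allocation sketch: only grow the pool, never shrink it.
    want=512
    pool=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    have=$(< "$pool")
    if (( have >= want )); then
        echo "INFO: Requested $want hugepages but $have already allocated on node0"
    else
        echo "$want" > "$pool"   # requires root
    fi

Because the pool is left alone, the verify_nr_hugepages pass that follows re-reads the same counters and must land on the same 1024.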
00:03:54.388 23:37:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:54.388 23:37:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:54.388 23:37:24 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:54.388 23:37:24 -- setup/common.sh@18 -- # local node=
00:03:54.388 23:37:24 -- setup/common.sh@19 -- # local var val
00:03:54.388 23:37:24 -- setup/common.sh@20 -- # local mem_f mem
00:03:54.388 23:37:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.388 23:37:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:54.388 23:37:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:54.388 23:37:24 -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.388 23:37:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.388 23:37:24 -- setup/common.sh@31 -- # IFS=': '
00:03:54.388 23:37:24 -- setup/common.sh@31 -- # read -r var val _
00:03:54.388 23:37:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7927888 kB' 'MemAvailable: 9484232 kB' 'Buffers: 2684 kB' 'Cached: 1769380 kB' 'SwapCached: 0 kB' 'Active: 466204 kB' 'Inactive: 1422364 kB' 'Active(anon): 126996 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 118096 kB' 'Mapped: 49948 kB' 'Shmem: 10492 kB' 'KReclaimable: 63396 kB' 'Slab: 163064 kB' 'SReclaimable: 63396 kB' 'SUnreclaim: 99668 kB' 'KernelStack: 6572 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 314980 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55608 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB'
00:03:54.389 [xtrace condensed: read/compare/continue skipped every key from MemTotal through HardwareCorrupted]
00:03:54.389 23:37:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:54.389 23:37:24 -- setup/common.sh@33 -- # echo 0
00:03:54.389 23:37:24 -- setup/common.sh@33 -- # return 0
00:03:54.389 23:37:24 -- setup/hugepages.sh@97 -- # anon=0
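The @96 test gates the AnonHugePages lookup on transparent hugepages being enabled: the string it matches against is the expansion of /sys/kernel/mm/transparent_hugepage/enabled, which reads like "always [madvise] never" with the active mode bracketed. Only when the mode is not [never] does the verifier fetch AnonHugePages (0 kB here, hence anon=0). The same gate as a standalone sketch, reusing the get_meminfo sketch above:

    # THP accounting only matters when THP is not disabled outright.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)  # kB of THP-backed anonymous memory
    fi
    echo "anon_hugepages=$anon"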
00:03:54.389 23:37:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:54.389 23:37:24 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:54.389 23:37:24 -- setup/common.sh@18 -- # local node=
00:03:54.389 23:37:24 -- setup/common.sh@19 -- # local var val
00:03:54.389 23:37:24 -- setup/common.sh@20 -- # local mem_f mem
00:03:54.389 23:37:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.389 23:37:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:54.389 23:37:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:54.389 23:37:24 -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.389 23:37:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.389 23:37:24 -- setup/common.sh@31 -- # IFS=': '
00:03:54.389 23:37:24 -- setup/common.sh@31 -- # read -r var val _
00:03:54.389 23:37:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7928600 kB' 'MemAvailable: 9484944 kB' 'Buffers: 2684 kB' 'Cached: 1769380 kB' 'SwapCached: 0 kB' 'Active: 466020 kB' 'Inactive: 1422364 kB' 'Active(anon): 126812 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 117876 kB' 'Mapped: 49948 kB' 'Shmem: 10492 kB' 'KReclaimable: 63396 kB' 'Slab: 163072 kB' 'SReclaimable: 63396 kB' 'SUnreclaim: 99676 kB' 'KernelStack: 6464 kB' 'PageTables: 3612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 314980 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB'
00:03:54.390 [xtrace condensed: read/compare/continue skipped every key from MemTotal through HugePages_Rsvd]
00:03:54.390 23:37:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:54.390 23:37:24 -- setup/common.sh@33 -- # echo 0
00:03:54.390 23:37:24 -- setup/common.sh@33 -- # return 0
00:03:54.390 23:37:24 -- setup/hugepages.sh@99 -- # surp=0
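Two more counters feed the verifier's arithmetic: HugePages_Surp, just read (surplus pages allocated beyond nr_hugepages via overcommit), and HugePages_Rsvd, looked up next (pages committed to a mapping but not yet faulted in). Both read 0 here, so the assertion (( 1024 == nr_hugepages + surp + resv )) holds with surp=resv=0. A quick way to eyeball all four pool counters at once (standard /proc interface):

    grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo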
00:03:54.390 23:37:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:54.390 23:37:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:54.390 23:37:24 -- setup/common.sh@18 -- # local node=
00:03:54.390 23:37:24 -- setup/common.sh@19 -- # local var val
00:03:54.390 23:37:24 -- setup/common.sh@20 -- # local mem_f mem
00:03:54.390 23:37:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.390 23:37:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:54.390 23:37:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:54.390 23:37:24 -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.390 23:37:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.390 23:37:25 -- setup/common.sh@31 -- # IFS=': '
00:03:54.390 23:37:25 -- setup/common.sh@31 -- # read -r var val _
00:03:54.390 23:37:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7928836 kB' 'MemAvailable: 9485180 kB' 'Buffers: 2684 kB' 'Cached: 1769380 kB' 'SwapCached: 0 kB' 'Active: 465800 kB' 'Inactive: 1422364 kB' 'Active(anon): 126592 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 117640 kB' 'Mapped: 49792 kB' 'Shmem: 10492 kB' 'KReclaimable: 63396 kB' 'Slab: 163072 kB' 'SReclaimable: 63396 kB' 'SUnreclaim: 99676 kB' 'KernelStack: 6480 kB' 'PageTables: 3660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 314980 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB'
00:03:54.390 [xtrace condensed: the per-key scan toward HugePages_Rsvd continues; trace truncated here]
val _ 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.391 
23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.391 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.391 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.392 23:37:25 -- setup/common.sh@33 -- # echo 0 00:03:54.392 23:37:25 -- setup/common.sh@33 -- # return 0 00:03:54.392 nr_hugepages=1024 00:03:54.392 resv_hugepages=0 00:03:54.392 surplus_hugepages=0 00:03:54.392 anon_hugepages=0 00:03:54.392 23:37:25 -- setup/hugepages.sh@100 -- # resv=0 00:03:54.392 23:37:25 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:54.392 23:37:25 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.392 23:37:25 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.392 23:37:25 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.392 23:37:25 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.392 23:37:25 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:54.392 23:37:25 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.392 23:37:25 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.392 23:37:25 -- setup/common.sh@18 -- # local node= 00:03:54.392 23:37:25 -- setup/common.sh@19 -- # local var val 00:03:54.392 23:37:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:54.392 23:37:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.392 23:37:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.392 23:37:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.392 23:37:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.392 23:37:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.392 23:37:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7928836 kB' 'MemAvailable: 9485180 kB' 'Buffers: 2684 kB' 'Cached: 1769380 kB' 'SwapCached: 0 kB' 'Active: 465716 kB' 'Inactive: 1422364 kB' 'Active(anon): 126508 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 117552 kB' 'Mapped: 49792 kB' 'Shmem: 10492 kB' 'KReclaimable: 63396 kB' 'Slab: 163072 kB' 'SReclaimable: 63396 kB' 'SUnreclaim: 99676 kB' 'KernelStack: 6448 kB' 'PageTables: 3564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 314980 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 5023744 kB' 'DirectMap1G: 9437184 kB' 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 
00:03:54.392 23:37:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # continue 
00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.392 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.392 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 
-- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # continue 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.393 23:37:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.393 23:37:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.393 23:37:25 -- setup/common.sh@33 -- # echo 1024 00:03:54.393 23:37:25 -- setup/common.sh@33 -- # return 0 00:03:54.393 23:37:25 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.393 23:37:25 -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.393 23:37:25 -- setup/hugepages.sh@27 -- # local node 00:03:54.393 23:37:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.393 23:37:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:54.393 23:37:25 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:54.393 23:37:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.393 23:37:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.393 23:37:25 -- setup/hugepages.sh@116 -- # (( 
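[Editor's note] The get_meminfo trace above reduces to a small pattern: read the relevant meminfo file, strip any per-node prefix, and scan key/value pairs until the requested field matches. A minimal sketch follows; it mirrors the traced steps but is illustrative, not SPDK's exact setup/common.sh code:

  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      local -a mem
      local line var val _
      shopt -s extglob
      # Per-node lookups read that node's own meminfo; with no node argument the
      # non-existent path /sys/devices/system/node/node/meminfo keeps the default
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix on node files
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

  # Usage, mirroring the trace: get_meminfo HugePages_Total   -> 1024
  #                             get_meminfo HugePages_Surp 0  -> 0 (node 0)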
00:03:54.393 23:37:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:54.393 23:37:25 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:54.393 23:37:25 -- setup/common.sh@18 -- # local node=0
00:03:54.393 23:37:25 -- setup/common.sh@19 -- # local var val
00:03:54.393 23:37:25 -- setup/common.sh@20 -- # local mem_f mem
00:03:54.393 23:37:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.393 23:37:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:54.393 23:37:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:54.393 23:37:25 -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.393 23:37:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.393 23:37:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7928836 kB' 'MemUsed: 4308264 kB' 'Active: 465660 kB' 'Inactive: 1422364 kB' 'FilePages: 1772064 kB' 'Mapped: 49792 kB' 'AnonPages: 117512 kB' 'Shmem: 10492 kB' 'KernelStack: 6432 kB' 'PageTables: 3516 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' ...   [node0 meminfo snapshot elided; the key-by-key scan runs until HugePages_Surp matches]
00:03:54.394 23:37:25 -- setup/common.sh@33 -- # echo 0
00:03:54.394 23:37:25 -- setup/common.sh@33 -- # return 0
00:03:54.394 23:37:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:54.394 23:37:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:54.394 23:37:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:54.394 23:37:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:54.394 23:37:25 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:54.394 23:37:25 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
************************************
END TEST no_shrink_alloc
************************************
00:03:54.394
real 0m1.313s
user 0m0.542s
sys 0m0.784s
00:03:54.394 23:37:25 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:54.394 23:37:25 -- common/autotest_common.sh@10 -- # set +x
00:03:54.656 23:37:25 -- setup/hugepages.sh@217 -- # clear_hp
00:03:54.656 23:37:25 -- setup/hugepages.sh@37 -- # local node hp
00:03:54.656 23:37:25 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:54.656 23:37:25 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:54.656 23:37:25 -- setup/hugepages.sh@41 -- # echo 0   [repeated once per hugepage size]
00:03:54.656 23:37:25 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:54.656 23:37:25 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
************************************
END TEST hugepages
************************************
00:03:54.656
real 0m5.984s
user 0m2.339s
sys 0m3.346s
00:03:54.656 23:37:25 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:54.656 23:37:25 -- common/autotest_common.sh@10 -- # set +x
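[Editor's note] The clear_hp step traced above releases every hugepage pool before the next test. A minimal sketch of that pattern, assuming root and the standard sysfs layout (illustrative, not the exact SPDK function):

  clear_hp() {
      local node hp
      for node in /sys/devices/system/node/node*; do
          for hp in "$node"/hugepages/hugepages-*; do
              echo 0 > "$hp/nr_hugepages"   # release pages of this size on this node
          done
      done
      export CLEAR_HUGE=yes   # tell later setup.sh runs the pools were cleared
  }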
00:03:54.656 23:37:25 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:03:54.656 23:37:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:54.656 23:37:25 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:54.656 23:37:25 -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST driver
************************************
00:03:54.656 23:37:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:03:54.656 * Looking for test storage...
00:03:54.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:03:54.656 23:37:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:03:54.656 23:37:25 -- common/autotest_common.sh@1690 -- # lcov --version
00:03:54.656 23:37:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:03:54.656 23:37:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:03:54.656 23:37:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:03:54.656 23:37:25 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:03:54.656 23:37:25 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:03:54.656 23:37:25 -- scripts/common.sh@335 -- # IFS=.-: read -ra ver1   [splits 1.15 into (1 15)]
00:03:54.656 23:37:25 -- scripts/common.sh@336 -- # IFS=.-: read -ra ver2   [splits 2 into (2)]
00:03:54.656 23:37:25 -- scripts/common.sh@337 -- # local 'op=<'
00:03:54.656 23:37:25 -- scripts/common.sh@339 -- # ver1_l=2
00:03:54.656 23:37:25 -- scripts/common.sh@340 -- # ver2_l=1
00:03:54.656 23:37:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:03:54.656 23:37:25 -- scripts/common.sh@343 -- # case "$op" in
00:03:54.656 23:37:25 -- scripts/common.sh@344 -- # : 1
00:03:54.656 23:37:25 -- scripts/common.sh@363 -- # (( v = 0 ))
00:03:54.656 23:37:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:54.656 23:37:25 -- scripts/common.sh@364 -- # decimal 1 -> ver1[v]=1
00:03:54.656 23:37:25 -- scripts/common.sh@365 -- # decimal 2 -> ver2[v]=2
00:03:54.657 23:37:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:03:54.657 23:37:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:03:54.657 23:37:25 -- scripts/common.sh@367 -- # return 0   [1.15 < 2, so the branch-coverage lcov options are enabled]
00:03:54.657 23:37:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:54.657 23:37:25 -- common/autotest_common.sh@1703 -- # export LCOV_OPTS / LCOV with the lcov_branch_coverage, lcov_function_coverage, genhtml and geninfo flags   [four near-identical multi-line export blocks elided]
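[Editor's note] The cmp_versions trace above compares two dotted version strings field by field. A self-contained sketch of that idiom (names follow the trace; the real scripts/common.sh also validates fields via a decimal helper, which this sketch omits):

  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {
      local -a ver1 ver2
      local op=$2 v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          # Missing fields compare as 0, so "2" behaves like "2.0"
          local a=${ver1[v]:-0} b=${ver2[v]:-0}
          (( a > b )) && { [[ $op == '>' ]]; return; }
          (( a < b )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '=' ]]   # all fields equal
  }

  # lt 1.15 2 succeeds (1 < 2 decides at the first field), matching the trace.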
00:03:54.657 23:37:25 -- setup/driver.sh@68 -- # setup reset
00:03:54.657 23:37:25 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:54.657 23:37:25 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:01.249 23:37:31 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:01.249 23:37:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:01.249 23:37:31 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:01.249 23:37:31 -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST guess_driver
************************************
00:04:01.249 23:37:31 -- common/autotest_common.sh@1114 -- # guess_driver
00:04:01.249 23:37:31 -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:01.249 23:37:31 -- setup/driver.sh@47 -- # local fail=0
00:04:01.249 23:37:31 -- setup/driver.sh@49 -- # pick_driver
00:04:01.249 23:37:31 -- setup/driver.sh@36 -- # vfio
00:04:01.249 23:37:31 -- setup/driver.sh@21 -- # local iommu_groups
00:04:01.249 23:37:31 -- setup/driver.sh@22 -- # local unsafe_vfio
00:04:01.249 23:37:31 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:01.249 23:37:31 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:01.249 23:37:31 -- setup/driver.sh@29 -- # (( 0 > 0 ))
00:04:01.249 23:37:31 -- setup/driver.sh@29 -- # [[ '' == Y ]]
00:04:01.249 23:37:31 -- setup/driver.sh@32 -- # return 1   [no IOMMU groups and unsafe no-IOMMU mode not enabled, so vfio is rejected]
00:04:01.249 23:37:31 -- setup/driver.sh@38 -- # uio
00:04:01.249 23:37:31 -- setup/driver.sh@17 -- # is_driver uio_pci_generic
00:04:01.249 23:37:31 -- setup/driver.sh@14 -- # mod uio_pci_generic
00:04:01.249 23:37:31 -- setup/driver.sh@12 -- # dep uio_pci_generic
00:04:01.249 23:37:31 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic
00:04:01.249 23:37:31 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]]
00:04:01.249 23:37:31 -- setup/driver.sh@39 -- # echo uio_pci_generic
00:04:01.249 23:37:31 -- setup/driver.sh@49 -- # driver=uio_pci_generic
00:04:01.249 23:37:31 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:01.249 23:37:31 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic'
Looking for driver=uio_pci_generic
00:04:01.249 23:37:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:01.249 23:37:31 -- setup/driver.sh@45 -- # setup output config
00:04:01.249 23:37:31 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:01.249 23:37:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:01.879 23:37:32 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] / continue
00:04:01.879 23:37:32 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] / @61 [[ uio_pci_generic == uio_pci_generic ]]   [repeated for each of the four bound devices]
00:04:02.156 23:37:32 -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:02.156 23:37:32 -- setup/driver.sh@65 -- # setup reset
00:04:02.156 23:37:32 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:02.156 23:37:32 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:08.746
real 0m7.247s
user 0m0.716s
sys 0m1.447s
00:04:08.746 23:37:38 -- common/autotest_common.sh@1115 -- # xtrace_disable
************************************
END TEST guess_driver
************************************
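[Editor's note] The guess_driver flow above in sketch form: prefer vfio-pci when IOMMU groups exist or unsafe no-IOMMU mode is enabled, otherwise fall back to uio_pci_generic if modprobe can resolve the module chain. Function names follow the trace; the body is a simplified, illustrative version of setup/driver.sh:

  pick_driver() {
      local unsafe_vfio=''
      shopt -s nullglob   # so an empty iommu_groups directory yields zero matches
      local iommu_groups=(/sys/kernel/iommu_groups/*)
      if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
          unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      fi
      if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
          echo vfio-pci
      elif modprobe --show-depends uio_pci_generic 2> /dev/null | grep -q '\.ko'; then
          echo uio_pci_generic   # dependency chain resolves, as in the trace
      else
          echo 'No valid driver found'
      fi
  }

In the run above the VM has no IOMMU groups, so the uio_pci_generic branch wins.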
00:04:08.746 23:37:38 -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST driver
************************************
00:04:08.746
real 0m13.416s
user 0m1.098s
sys 0m2.286s
00:04:08.746 23:37:38 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:08.746 23:37:38 -- common/autotest_common.sh@10 -- # set +x
00:04:08.746 23:37:38 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:04:08.746 23:37:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:08.746 23:37:38 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:08.746 23:37:38 -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST devices
************************************
00:04:08.746 23:37:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:04:08.746 * Looking for test storage...
00:04:08.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:08.746 23:37:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]]   [the lcov version check and LCOV_OPTS/LCOV exports repeat verbatim from the driver test and are elided]
00:04:08.746 23:37:38 -- setup/devices.sh@190 -- # trap cleanup EXIT
00:04:08.746 23:37:38 -- setup/devices.sh@192 -- # setup reset
00:04:08.746 23:37:38 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:08.746 23:37:38 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:09.356 23:37:39 -- setup/devices.sh@194 -- # get_zoned_devs
00:04:09.356 23:37:39 -- common/autotest_common.sh@1664 -- # zoned_devs=()
00:04:09.356 23:37:39 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:04:09.356 23:37:39 -- common/autotest_common.sh@1665 -- # local nvme bdf
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:09.356 23:37:39 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:09.356 23:37:39 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:09.356 23:37:39 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:09.356 23:37:39 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:09.356 23:37:39 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:09.356 23:37:39 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:09.356 23:37:39 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:09.356 23:37:39 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:09.356 23:37:39 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:09.356 23:37:39 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:09.356 23:37:39 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:09.356 23:37:39 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:09.356 23:37:39 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:09.356 23:37:39 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:09.356 23:37:39 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:09.356 23:37:39 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:09.356 23:37:39 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:09.356 23:37:39 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:09.356 23:37:39 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:09.357 23:37:39 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:09.357 23:37:39 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:09.357 23:37:39 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:04:09.357 23:37:39 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:04:09.357 23:37:39 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:09.357 23:37:39 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:09.357 23:37:39 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:09.357 23:37:39 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:04:09.357 23:37:39 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:04:09.357 23:37:39 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:09.357 23:37:39 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:09.357 23:37:39 -- setup/devices.sh@196 -- # blocks=() 00:04:09.357 23:37:39 -- setup/devices.sh@196 -- # declare -a blocks 00:04:09.357 23:37:39 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:09.357 23:37:39 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:09.357 23:37:39 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:09.357 23:37:39 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:09.357 23:37:39 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:09.357 23:37:39 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:09.357 23:37:39 -- setup/devices.sh@202 -- # pci=0000:00:09.0 00:04:09.357 23:37:39 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:04:09.357 23:37:39 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:09.357 23:37:39 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 
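The pass above is the zoned-device scan: every /sys/block/nvme* entry has its queue/zoned attribute read, and a device is only recorded when the attribute reports something other than "none". All six namespaces in this run are conventional, so zoned_devs stays empty. A stand-alone sketch of the same check (the device names come from the glob; nothing here is specific to this run):

    #!/usr/bin/env bash
    # Collect zoned NVMe block devices, keyed by device name.
    declare -A zoned_devs=()
    for dev in /sys/block/nvme*; do
        name=${dev##*/}
        # queue/zoned reads "none" for conventional namespaces,
        # "host-managed" or "host-aware" for zoned ones.
        if [[ -e $dev/queue/zoned ]] && [[ $(<"$dev/queue/zoned") != none ]]; then
            zoned_devs[$name]=$(<"$dev/queue/zoned")
        fi
    done
    printf 'zoned: %s\n' "${!zoned_devs[@]}"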
00:04:09.357 23:37:39 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:09.357 No valid GPT data, bailing 00:04:09.357 23:37:40 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:09.357 23:37:40 -- scripts/common.sh@393 -- # pt= 00:04:09.357 23:37:40 -- scripts/common.sh@394 -- # return 1 00:04:09.357 23:37:40 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:09.357 23:37:40 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:09.357 23:37:40 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:09.357 23:37:40 -- setup/common.sh@80 -- # echo 1073741824 00:04:09.357 23:37:40 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:04:09.357 23:37:40 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:09.357 23:37:40 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:09.357 23:37:40 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:09.357 23:37:40 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:09.357 23:37:40 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:09.357 23:37:40 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:09.357 23:37:40 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:09.357 23:37:40 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:09.618 No valid GPT data, bailing 00:04:09.618 23:37:40 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:09.618 23:37:40 -- scripts/common.sh@393 -- # pt= 00:04:09.618 23:37:40 -- scripts/common.sh@394 -- # return 1 00:04:09.618 23:37:40 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:09.618 23:37:40 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:09.618 23:37:40 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:09.618 23:37:40 -- setup/common.sh@80 -- # echo 4294967296 00:04:09.618 23:37:40 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:09.618 23:37:40 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:09.618 23:37:40 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:09.618 23:37:40 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:09.618 23:37:40 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:09.618 23:37:40 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:09.618 23:37:40 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:09.618 23:37:40 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:09.618 23:37:40 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:09.618 23:37:40 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:09.618 23:37:40 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:09.618 No valid GPT data, bailing 00:04:09.618 23:37:40 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:09.618 23:37:40 -- scripts/common.sh@393 -- # pt= 00:04:09.618 23:37:40 -- scripts/common.sh@394 -- # return 1 00:04:09.618 23:37:40 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:09.618 23:37:40 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:09.618 23:37:40 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:09.618 23:37:40 -- setup/common.sh@80 -- # echo 4294967296 00:04:09.618 23:37:40 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:09.618 23:37:40 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:09.618 23:37:40 -- setup/devices.sh@206 -- # 
blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:09.618 23:37:40 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:09.618 23:37:40 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:09.618 23:37:40 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:09.618 23:37:40 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:09.618 23:37:40 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:09.618 23:37:40 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:09.618 23:37:40 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:09.618 23:37:40 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:09.618 No valid GPT data, bailing 00:04:09.618 23:37:40 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:09.618 23:37:40 -- scripts/common.sh@393 -- # pt= 00:04:09.618 23:37:40 -- scripts/common.sh@394 -- # return 1 00:04:09.618 23:37:40 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:09.618 23:37:40 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:09.618 23:37:40 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:09.618 23:37:40 -- setup/common.sh@80 -- # echo 4294967296 00:04:09.618 23:37:40 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:09.618 23:37:40 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:09.618 23:37:40 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:09.618 23:37:40 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:09.618 23:37:40 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:09.618 23:37:40 -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:09.618 23:37:40 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:09.618 23:37:40 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:09.618 23:37:40 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:09.618 23:37:40 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:04:09.618 23:37:40 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:09.879 No valid GPT data, bailing 00:04:09.879 23:37:40 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:09.879 23:37:40 -- scripts/common.sh@393 -- # pt= 00:04:09.879 23:37:40 -- scripts/common.sh@394 -- # return 1 00:04:09.879 23:37:40 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:09.879 23:37:40 -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:09.879 23:37:40 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:04:09.879 23:37:40 -- setup/common.sh@80 -- # echo 6343335936 00:04:09.879 23:37:40 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:09.879 23:37:40 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:09.879 23:37:40 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:09.879 23:37:40 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:09.879 23:37:40 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:09.879 23:37:40 -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:09.879 23:37:40 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:09.879 23:37:40 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:09.879 23:37:40 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:09.879 23:37:40 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:04:09.879 23:37:40 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:09.879 No valid GPT data, bailing 
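Each candidate namespace is screened two ways before devices.sh will use it: spdk-gpt.py plus blkid -s PTTYPE must find no partition table ("No valid GPT data, bailing" with an empty PTTYPE), and sec_size_to_bytes must report at least min_disk_size=3221225472 (3 GiB). That is why the 1 GiB nvme0n1 is skipped while the 4-6 GiB namespaces land in blocks. A rough equivalent of the size gate, assuming the usual 512-byte sector units in /sys/block/*/size:

    # Keep only unpartitioned namespaces of at least 3 GiB.
    min_disk_size=$((3 * 1024 * 1024 * 1024))
    for dev in nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1; do
        # An empty PTTYPE means no partition table is present.
        [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || continue
        bytes=$(( $(<"/sys/block/$dev/size") * 512 ))   # size is in 512-byte sectors
        (( bytes >= min_disk_size )) && echo "$dev: $bytes bytes, usable"
    done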
00:04:09.879 23:37:40 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:09.879 23:37:40 -- scripts/common.sh@393 -- # pt= 00:04:09.880 23:37:40 -- scripts/common.sh@394 -- # return 1 00:04:09.880 23:37:40 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:09.880 23:37:40 -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:09.880 23:37:40 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:09.880 23:37:40 -- setup/common.sh@80 -- # echo 5368709120 00:04:09.880 23:37:40 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:09.880 23:37:40 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:09.880 23:37:40 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:09.880 23:37:40 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:09.880 23:37:40 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:04:09.880 23:37:40 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:09.880 23:37:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.880 23:37:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.880 23:37:40 -- common/autotest_common.sh@10 -- # set +x 00:04:09.880 ************************************ 00:04:09.880 START TEST nvme_mount 00:04:09.880 ************************************ 00:04:09.880 23:37:40 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:09.880 23:37:40 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:04:09.880 23:37:40 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:04:09.880 23:37:40 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:09.880 23:37:40 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:09.880 23:37:40 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:04:09.880 23:37:40 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:09.880 23:37:40 -- setup/common.sh@40 -- # local part_no=1 00:04:09.880 23:37:40 -- setup/common.sh@41 -- # local size=1073741824 00:04:09.880 23:37:40 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:09.880 23:37:40 -- setup/common.sh@44 -- # parts=() 00:04:09.880 23:37:40 -- setup/common.sh@44 -- # local parts 00:04:09.880 23:37:40 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:09.880 23:37:40 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.880 23:37:40 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:09.880 23:37:40 -- setup/common.sh@46 -- # (( part++ )) 00:04:09.880 23:37:40 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.880 23:37:40 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:09.880 23:37:40 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:09.880 23:37:40 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:04:10.824 Creating new GPT entries in memory. 00:04:10.824 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:10.824 other utilities. 00:04:10.824 23:37:41 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:10.824 23:37:41 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:10.824 23:37:41 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:10.824 23:37:41 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:10.824 23:37:41 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:12.210 Creating new GPT entries in memory. 
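partition_drive then rewrites the test disk: sgdisk --zap-all clears any old GPT/MBR metadata, and a flock-serialized sgdisk --new=1:2048:264191 adds a single 128 MiB partition while sync_dev_uevents.sh waits for the nvme1n1p1 uevent. Outside the harness the same sequence reduces to roughly the following, with udevadm settle standing in for the uevent-wait helper:

    # Wipe any existing GPT/MBR metadata, then add one 128 MiB partition.
    sgdisk /dev/nvme1n1 --zap-all
    # Serialize against other writers of the same disk.
    flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191
    # Give udev time to create /dev/nvme1n1p1 before formatting it.
    udevadm settle
    mkfs.ext4 -qF /dev/nvme1n1p1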
00:04:12.210 The operation has completed successfully. 00:04:12.210 23:37:42 -- setup/common.sh@57 -- # (( part++ )) 00:04:12.210 23:37:42 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:12.210 23:37:42 -- setup/common.sh@62 -- # wait 53739 00:04:12.210 23:37:42 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.210 23:37:42 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:12.210 23:37:42 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.210 23:37:42 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:04:12.210 23:37:42 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:04:12.210 23:37:42 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.210 23:37:42 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:12.210 23:37:42 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:12.210 23:37:42 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:04:12.210 23:37:42 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.210 23:37:42 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:12.210 23:37:42 -- setup/devices.sh@53 -- # local found=0 00:04:12.210 23:37:42 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:12.210 23:37:42 -- setup/devices.sh@56 -- # : 00:04:12.210 23:37:42 -- setup/devices.sh@59 -- # local pci status 00:04:12.210 23:37:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.210 23:37:42 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:12.210 23:37:42 -- setup/devices.sh@47 -- # setup output config 00:04:12.210 23:37:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.210 23:37:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:12.210 23:37:42 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:12.210 23:37:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.471 23:37:42 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:12.472 23:37:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.472 23:37:43 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:12.472 23:37:43 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:04:12.472 23:37:43 -- setup/devices.sh@63 -- # found=1 00:04:12.472 23:37:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.472 23:37:43 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:12.472 23:37:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.733 23:37:43 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:12.733 23:37:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.733 23:37:43 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:12.733 23:37:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.994 23:37:43 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:12.994 23:37:43 -- 
setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:12.994 23:37:43 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.994 23:37:43 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:12.994 23:37:43 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:12.994 23:37:43 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:12.994 23:37:43 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.994 23:37:43 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.994 23:37:43 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:12.994 23:37:43 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:12.994 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:12.994 23:37:43 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:12.994 23:37:43 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:13.256 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:13.256 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:13.256 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:13.256 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:13.256 23:37:43 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:13.256 23:37:43 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:13.256 23:37:43 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:13.256 23:37:43 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:04:13.256 23:37:43 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:04:13.256 23:37:43 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:13.256 23:37:43 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:13.256 23:37:43 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:13.256 23:37:43 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:04:13.256 23:37:43 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:13.256 23:37:43 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:13.256 23:37:43 -- setup/devices.sh@53 -- # local found=0 00:04:13.256 23:37:43 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:13.256 23:37:43 -- setup/devices.sh@56 -- # : 00:04:13.256 23:37:43 -- setup/devices.sh@59 -- # local pci status 00:04:13.256 23:37:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.256 23:37:43 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:13.256 23:37:43 -- setup/devices.sh@47 -- # setup output config 00:04:13.256 23:37:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.256 23:37:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:13.517 23:37:44 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:13.517 23:37:44 -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:13.517 23:37:44 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:13.517 23:37:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.778 23:37:44 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:13.778 23:37:44 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:04:13.778 23:37:44 -- setup/devices.sh@63 -- # found=1 00:04:13.778 23:37:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.778 23:37:44 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:13.778 23:37:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.039 23:37:44 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:14.039 23:37:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.039 23:37:44 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:14.039 23:37:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.039 23:37:44 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:14.039 23:37:44 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:14.039 23:37:44 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:14.039 23:37:44 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:14.039 23:37:44 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:14.039 23:37:44 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:14.039 23:37:44 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:04:14.039 23:37:44 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:14.039 23:37:44 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:04:14.039 23:37:44 -- setup/devices.sh@50 -- # local mount_point= 00:04:14.039 23:37:44 -- setup/devices.sh@51 -- # local test_file= 00:04:14.039 23:37:44 -- setup/devices.sh@53 -- # local found=0 00:04:14.039 23:37:44 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:14.039 23:37:44 -- setup/devices.sh@59 -- # local pci status 00:04:14.039 23:37:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.039 23:37:44 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:14.039 23:37:44 -- setup/devices.sh@47 -- # setup output config 00:04:14.039 23:37:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.039 23:37:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:14.301 23:37:44 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:14.301 23:37:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.301 23:37:45 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:14.301 23:37:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.871 23:37:45 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:14.871 23:37:45 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:04:14.871 23:37:45 -- setup/devices.sh@63 -- # found=1 00:04:14.871 23:37:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.871 23:37:45 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:14.871 
23:37:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.871 23:37:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:14.871 23:37:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.871 23:37:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:14.871 23:37:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.131 23:37:45 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:15.131 23:37:45 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:15.131 23:37:45 -- setup/devices.sh@68 -- # return 0 00:04:15.131 23:37:45 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:15.131 23:37:45 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:15.131 23:37:45 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:15.131 23:37:45 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:15.131 23:37:45 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:15.131 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:15.131 00:04:15.131 ************************************ 00:04:15.131 END TEST nvme_mount 00:04:15.131 ************************************ 00:04:15.131 real 0m5.169s 00:04:15.131 user 0m0.988s 00:04:15.131 sys 0m1.407s 00:04:15.131 23:37:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:15.131 23:37:45 -- common/autotest_common.sh@10 -- # set +x 00:04:15.131 23:37:45 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:15.131 23:37:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:15.132 23:37:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:15.132 23:37:45 -- common/autotest_common.sh@10 -- # set +x 00:04:15.132 ************************************ 00:04:15.132 START TEST dm_mount 00:04:15.132 ************************************ 00:04:15.132 23:37:45 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:15.132 23:37:45 -- setup/devices.sh@144 -- # pv=nvme1n1 00:04:15.132 23:37:45 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:04:15.132 23:37:45 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:04:15.132 23:37:45 -- setup/devices.sh@148 -- # partition_drive nvme1n1 00:04:15.132 23:37:45 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:15.132 23:37:45 -- setup/common.sh@40 -- # local part_no=2 00:04:15.132 23:37:45 -- setup/common.sh@41 -- # local size=1073741824 00:04:15.132 23:37:45 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:15.132 23:37:45 -- setup/common.sh@44 -- # parts=() 00:04:15.132 23:37:45 -- setup/common.sh@44 -- # local parts 00:04:15.132 23:37:45 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:15.132 23:37:45 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:15.132 23:37:45 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:15.132 23:37:45 -- setup/common.sh@46 -- # (( part++ )) 00:04:15.132 23:37:45 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:15.132 23:37:45 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:15.132 23:37:45 -- setup/common.sh@46 -- # (( part++ )) 00:04:15.132 23:37:45 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:15.132 23:37:45 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:15.132 23:37:45 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:15.132 23:37:45 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:04:16.074 Creating new GPT entries in memory. 00:04:16.074 GPT data structures destroyed! 
You may now partition the disk using fdisk or 00:04:16.074 other utilities. 00:04:16.074 23:37:46 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:16.074 23:37:46 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:16.074 23:37:46 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:16.074 23:37:46 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:16.074 23:37:46 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:17.460 Creating new GPT entries in memory. 00:04:17.460 The operation has completed successfully. 00:04:17.460 23:37:47 -- setup/common.sh@57 -- # (( part++ )) 00:04:17.460 23:37:47 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:17.460 23:37:47 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:17.460 23:37:47 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:17.460 23:37:47 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:04:18.402 The operation has completed successfully. 00:04:18.402 23:37:48 -- setup/common.sh@57 -- # (( part++ )) 00:04:18.402 23:37:48 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.402 23:37:48 -- setup/common.sh@62 -- # wait 54367 00:04:18.402 23:37:48 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:18.402 23:37:48 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:18.402 23:37:48 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:18.402 23:37:48 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:18.402 23:37:48 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:18.402 23:37:48 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:18.402 23:37:48 -- setup/devices.sh@161 -- # break 00:04:18.402 23:37:48 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:18.402 23:37:48 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:18.402 23:37:48 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:18.402 23:37:48 -- setup/devices.sh@166 -- # dm=dm-0 00:04:18.402 23:37:48 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:04:18.402 23:37:48 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:04:18.402 23:37:48 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:18.402 23:37:48 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:18.402 23:37:48 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:18.402 23:37:48 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:18.402 23:37:48 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:18.402 23:37:48 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:18.402 23:37:48 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:18.402 23:37:48 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:18.402 23:37:48 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:04:18.402 23:37:48 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 
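dm_mount repeats the partitioning with two 128 MiB partitions (1:2048:264191 and 2:264192:526335) and then stacks a device-mapper node named nvme_dm_test on top of them before formatting and mounting it. The dm table itself is not echoed in the trace, so the linear concatenation below is only an assumption that matches the partition sizes involved:

    # Concatenate the two 262144-sector partitions into one dm device.
    dmsetup create nvme_dm_test << 'EOF'
    0 262144 linear /dev/nvme1n1p1 0
    262144 262144 linear /dev/nvme1n1p2 0
    EOF
    # The node appears as /dev/mapper/nvme_dm_test -> /dev/dm-0, and both
    # partitions now list dm-0 under their holders/ directory.
    readlink -f /dev/mapper/nvme_dm_test
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test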
00:04:18.402 23:37:48 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:18.402 23:37:48 -- setup/devices.sh@53 -- # local found=0 00:04:18.402 23:37:48 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:18.402 23:37:48 -- setup/devices.sh@56 -- # : 00:04:18.402 23:37:48 -- setup/devices.sh@59 -- # local pci status 00:04:18.402 23:37:48 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:18.402 23:37:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.402 23:37:48 -- setup/devices.sh@47 -- # setup output config 00:04:18.402 23:37:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.402 23:37:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:18.402 23:37:49 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:18.402 23:37:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.663 23:37:49 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:18.663 23:37:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.923 23:37:49 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:18.923 23:37:49 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:18.923 23:37:49 -- setup/devices.sh@63 -- # found=1 00:04:18.923 23:37:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.923 23:37:49 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:18.923 23:37:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.923 23:37:49 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:18.923 23:37:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.184 23:37:49 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:19.184 23:37:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.184 23:37:49 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.184 23:37:49 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:19.184 23:37:49 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:19.184 23:37:49 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:19.184 23:37:49 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:19.184 23:37:49 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:19.184 23:37:49 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:04:19.184 23:37:49 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:19.184 23:37:49 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:04:19.184 23:37:49 -- setup/devices.sh@50 -- # local mount_point= 00:04:19.184 23:37:49 -- setup/devices.sh@51 -- # local test_file= 00:04:19.185 23:37:49 -- setup/devices.sh@53 -- # local found=0 00:04:19.185 23:37:49 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:19.185 23:37:49 -- setup/devices.sh@59 -- # local pci status 00:04:19.185 23:37:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.185 23:37:49 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:19.185 
23:37:49 -- setup/devices.sh@47 -- # setup output config 00:04:19.185 23:37:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.185 23:37:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:19.185 23:37:49 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:19.185 23:37:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.445 23:37:49 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:19.445 23:37:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.707 23:37:50 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:19.707 23:37:50 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:04:19.707 23:37:50 -- setup/devices.sh@63 -- # found=1 00:04:19.707 23:37:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.707 23:37:50 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:19.707 23:37:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.707 23:37:50 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:19.707 23:37:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.707 23:37:50 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:19.707 23:37:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.967 23:37:50 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.967 23:37:50 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:19.967 23:37:50 -- setup/devices.sh@68 -- # return 0 00:04:19.967 23:37:50 -- setup/devices.sh@187 -- # cleanup_dm 00:04:19.967 23:37:50 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:19.967 23:37:50 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:19.967 23:37:50 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:19.967 23:37:50 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:19.967 23:37:50 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:04:19.967 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:19.967 23:37:50 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:19.967 23:37:50 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:04:19.967 00:04:19.967 real 0m4.811s 00:04:19.967 user 0m0.673s 00:04:19.967 sys 0m0.855s 00:04:19.968 23:37:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:19.968 ************************************ 00:04:19.968 END TEST dm_mount 00:04:19.968 23:37:50 -- common/autotest_common.sh@10 -- # set +x 00:04:19.968 ************************************ 00:04:19.968 23:37:50 -- setup/devices.sh@1 -- # cleanup 00:04:19.968 23:37:50 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:19.968 23:37:50 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.968 23:37:50 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:19.968 23:37:50 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:19.968 23:37:50 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:19.968 23:37:50 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:20.228 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:20.228 /dev/nvme1n1: 8 bytes were erased at offset 
0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:20.228 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:20.228 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:20.228 23:37:50 -- setup/devices.sh@12 -- # cleanup_dm 00:04:20.228 23:37:50 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:20.228 23:37:50 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:20.228 23:37:50 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:20.228 23:37:50 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:20.228 23:37:50 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:04:20.228 23:37:50 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:04:20.228 ************************************ 00:04:20.228 END TEST devices 00:04:20.228 ************************************ 00:04:20.228 00:04:20.228 real 0m12.248s 00:04:20.228 user 0m2.530s 00:04:20.228 sys 0m3.029s 00:04:20.228 23:37:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:20.228 23:37:50 -- common/autotest_common.sh@10 -- # set +x 00:04:20.489 00:04:20.489 real 0m43.526s 00:04:20.489 user 0m8.480s 00:04:20.489 sys 0m12.251s 00:04:20.489 ************************************ 00:04:20.489 23:37:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:20.490 23:37:50 -- common/autotest_common.sh@10 -- # set +x 00:04:20.490 END TEST setup.sh 00:04:20.490 ************************************ 00:04:20.490 23:37:51 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:20.490 Hugepages 00:04:20.490 node hugesize free / total 00:04:20.490 node0 1048576kB 0 / 0 00:04:20.490 node0 2048kB 2048 / 2048 00:04:20.490 00:04:20.490 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:20.750 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:20.750 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:20.750 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:20.750 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:21.010 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:21.010 23:37:51 -- spdk/autotest.sh@128 -- # uname -s 00:04:21.010 23:37:51 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:21.010 23:37:51 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:21.010 23:37:51 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:21.951 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:21.951 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.951 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.951 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.951 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.951 23:37:52 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:23.394 23:37:53 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:23.394 23:37:53 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:23.394 23:37:53 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:23.394 23:37:53 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:23.394 23:37:53 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:23.394 23:37:53 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:23.394 23:37:53 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:23.394 23:37:53 -- 
common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:23.394 23:37:53 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:23.394 23:37:53 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:04:23.394 23:37:53 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:23.394 23:37:53 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:23.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:23.652 Waiting for block devices as requested 00:04:23.652 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:04:23.652 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:04:23.652 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:23.909 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:04:29.172 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:04:29.172 23:37:59 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:29.172 23:37:59 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:29.172 23:37:59 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:29.172 23:37:59 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:04:29.172 23:37:59 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:29.172 23:37:59 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:04:29.172 23:37:59 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:29.172 23:37:59 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme2 00:04:29.172 23:37:59 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme2 00:04:29.172 23:37:59 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme2 ]] 00:04:29.172 23:37:59 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:29.172 23:37:59 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:29.172 23:37:59 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:29.172 23:37:59 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:29.172 23:37:59 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:29.172 23:37:59 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:29.172 23:37:59 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme2 00:04:29.172 23:37:59 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:29.172 23:37:59 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:29.172 23:37:59 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:29.172 23:37:59 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:29.172 23:37:59 -- common/autotest_common.sh@1552 -- # continue 00:04:29.172 23:37:59 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:29.172 23:37:59 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:29.172 23:37:59 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:29.172 23:37:59 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:04:29.172 23:37:59 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:29.172 23:37:59 -- 
common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:04:29.172 23:37:59 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:29.172 23:37:59 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme3 00:04:29.172 23:37:59 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme3 00:04:29.172 23:37:59 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme3 ]] 00:04:29.172 23:37:59 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:29.172 23:37:59 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:29.172 23:37:59 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:29.172 23:37:59 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:29.172 23:37:59 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:29.172 23:37:59 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:29.172 23:37:59 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme3 00:04:29.172 23:37:59 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:29.172 23:37:59 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:29.172 23:37:59 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:29.172 23:37:59 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:29.172 23:37:59 -- common/autotest_common.sh@1552 -- # continue 00:04:29.172 23:37:59 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:29.172 23:37:59 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:04:29.172 23:37:59 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:29.172 23:37:59 -- common/autotest_common.sh@1497 -- # grep 0000:00:08.0/nvme/nvme 00:04:29.172 23:37:59 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:29.172 23:37:59 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:04:29.172 23:37:59 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:29.172 23:37:59 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:04:29.172 23:37:59 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:04:29.172 23:37:59 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:04:29.172 23:37:59 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:29.172 23:37:59 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:29.172 23:37:59 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:29.172 23:37:59 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:29.172 23:37:59 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:29.172 23:37:59 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:29.172 23:37:59 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:04:29.172 23:37:59 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:29.172 23:37:59 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:29.172 23:37:59 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:29.172 23:37:59 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:29.172 23:37:59 -- common/autotest_common.sh@1552 -- # continue 00:04:29.172 23:37:59 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:29.172 23:37:59 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:04:29.172 23:37:59 -- common/autotest_common.sh@1497 -- # 
readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:29.172 23:37:59 -- common/autotest_common.sh@1497 -- # grep 0000:00:09.0/nvme/nvme 00:04:29.172 23:37:59 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:29.172 23:37:59 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:04:29.172 23:37:59 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:29.172 23:37:59 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:29.172 23:37:59 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:29.172 23:37:59 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:29.172 23:37:59 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:29.172 23:37:59 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:29.172 23:37:59 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:29.172 23:37:59 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:29.172 23:37:59 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:29.172 23:37:59 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:29.172 23:37:59 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:29.172 23:37:59 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:29.172 23:37:59 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:29.172 23:37:59 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:29.172 23:37:59 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:29.172 23:37:59 -- common/autotest_common.sh@1552 -- # continue 00:04:29.172 23:37:59 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:29.172 23:37:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:29.172 23:37:59 -- common/autotest_common.sh@10 -- # set +x 00:04:29.172 23:37:59 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:29.172 23:37:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:29.172 23:37:59 -- common/autotest_common.sh@10 -- # set +x 00:04:29.172 23:37:59 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:29.737 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:29.737 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.995 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.995 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.995 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.995 23:38:00 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:04:29.995 23:38:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:29.995 23:38:00 -- common/autotest_common.sh@10 -- # set +x 00:04:29.995 23:38:00 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:04:29.995 23:38:00 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:04:29.995 23:38:00 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:04:29.995 23:38:00 -- common/autotest_common.sh@1572 -- # bdfs=() 00:04:29.995 23:38:00 -- common/autotest_common.sh@1572 -- # local bdfs 00:04:29.995 23:38:00 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:04:29.995 23:38:00 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:29.995 23:38:00 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:29.995 23:38:00 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
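Each iteration of the loop above resolves a PCI address to its controller node by scanning the /sys/class/nvme symlinks, then reads OACS and unvmcap from nvme id-ctrl to decide whether any namespace-management cleanup is needed. Condensed, the lookup for one of the controllers in this run looks like this (requires nvme-cli):

    bdf=0000:00:06.0
    # Map the PCI function to its /dev/nvmeX controller node.
    ctrl=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
    # oacs with bit 3 (0x8) set means namespace management is supported.
    nvme id-ctrl "/dev/$ctrl" | grep oacs | cut -d: -f2
    # unvmcap of 0 means there is no unallocated capacity to revert.
    nvme id-ctrl "/dev/$ctrl" | grep unvmcap | cut -d: -f2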
00:04:29.995 23:38:00 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:29.995 23:38:00 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:29.995 23:38:00 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:04:29.995 23:38:00 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:29.995 23:38:00 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:29.995 23:38:00 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:29.995 23:38:00 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:29.995 23:38:00 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:29.995 23:38:00 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:29.996 23:38:00 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:04:29.996 23:38:00 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:29.996 23:38:00 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:29.996 23:38:00 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:29.996 23:38:00 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:04:29.996 23:38:00 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:29.996 23:38:00 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:29.996 23:38:00 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:29.996 23:38:00 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:04:29.996 23:38:00 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:29.996 23:38:00 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:29.996 23:38:00 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:04:29.996 23:38:00 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:04:29.996 23:38:00 -- common/autotest_common.sh@1588 -- # return 0 00:04:29.996 23:38:00 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:04:29.996 23:38:00 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:04:29.996 23:38:00 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:29.996 23:38:00 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:29.996 23:38:00 -- spdk/autotest.sh@160 -- # timing_enter lib 00:04:29.996 23:38:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:29.996 23:38:00 -- common/autotest_common.sh@10 -- # set +x 00:04:29.996 23:38:00 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:29.996 23:38:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.996 23:38:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.996 23:38:00 -- common/autotest_common.sh@10 -- # set +x 00:04:29.996 ************************************ 00:04:29.996 START TEST env 00:04:29.996 ************************************ 00:04:29.996 23:38:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:30.254 * Looking for test storage... 
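opal_revert_cleanup only acts on controllers whose PCI device ID is 0x0a54; the QEMU-emulated controllers here all report 0x0010, so the bdfs array ends up empty and the step is a no-op. The filter boils down to reading the sysfs device attribute for every NVMe address reported by gen_nvme.sh, roughly:

    # Keep only NVMe controllers with PCI device ID 0x0a54.
    for bdf in 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0; do
        [[ $(<"/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && echo "$bdf"
    done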
00:04:30.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:30.254 23:38:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:30.254 23:38:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:30.254 23:38:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:30.254 23:38:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:30.254 23:38:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:30.254 23:38:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:30.254 23:38:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:30.254 23:38:00 -- scripts/common.sh@335 -- # IFS=.-: 00:04:30.254 23:38:00 -- scripts/common.sh@335 -- # read -ra ver1 00:04:30.254 23:38:00 -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.254 23:38:00 -- scripts/common.sh@336 -- # read -ra ver2 00:04:30.254 23:38:00 -- scripts/common.sh@337 -- # local 'op=<' 00:04:30.254 23:38:00 -- scripts/common.sh@339 -- # ver1_l=2 00:04:30.254 23:38:00 -- scripts/common.sh@340 -- # ver2_l=1 00:04:30.254 23:38:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:30.254 23:38:00 -- scripts/common.sh@343 -- # case "$op" in 00:04:30.254 23:38:00 -- scripts/common.sh@344 -- # : 1 00:04:30.254 23:38:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:30.254 23:38:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:30.254 23:38:00 -- scripts/common.sh@364 -- # decimal 1 00:04:30.254 23:38:00 -- scripts/common.sh@352 -- # local d=1 00:04:30.254 23:38:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.254 23:38:00 -- scripts/common.sh@354 -- # echo 1 00:04:30.254 23:38:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:30.254 23:38:00 -- scripts/common.sh@365 -- # decimal 2 00:04:30.254 23:38:00 -- scripts/common.sh@352 -- # local d=2 00:04:30.254 23:38:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.254 23:38:00 -- scripts/common.sh@354 -- # echo 2 00:04:30.254 23:38:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:30.255 23:38:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:30.255 23:38:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:30.255 23:38:00 -- scripts/common.sh@367 -- # return 0 00:04:30.255 23:38:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.255 23:38:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:30.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.255 --rc genhtml_branch_coverage=1 00:04:30.255 --rc genhtml_function_coverage=1 00:04:30.255 --rc genhtml_legend=1 00:04:30.255 --rc geninfo_all_blocks=1 00:04:30.255 --rc geninfo_unexecuted_blocks=1 00:04:30.255 00:04:30.255 ' 00:04:30.255 23:38:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:30.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.255 --rc genhtml_branch_coverage=1 00:04:30.255 --rc genhtml_function_coverage=1 00:04:30.255 --rc genhtml_legend=1 00:04:30.255 --rc geninfo_all_blocks=1 00:04:30.255 --rc geninfo_unexecuted_blocks=1 00:04:30.255 00:04:30.255 ' 00:04:30.255 23:38:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:30.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.255 --rc genhtml_branch_coverage=1 00:04:30.255 --rc genhtml_function_coverage=1 00:04:30.255 --rc genhtml_legend=1 00:04:30.255 --rc geninfo_all_blocks=1 00:04:30.255 --rc geninfo_unexecuted_blocks=1 00:04:30.255 00:04:30.255 ' 00:04:30.255 23:38:00 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:30.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.255 --rc genhtml_branch_coverage=1 00:04:30.255 --rc genhtml_function_coverage=1 00:04:30.255 --rc genhtml_legend=1 00:04:30.255 --rc geninfo_all_blocks=1 00:04:30.255 --rc geninfo_unexecuted_blocks=1 00:04:30.255 00:04:30.255 ' 00:04:30.255 23:38:00 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:30.255 23:38:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:30.255 23:38:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:30.255 23:38:00 -- common/autotest_common.sh@10 -- # set +x 00:04:30.255 ************************************ 00:04:30.255 START TEST env_memory 00:04:30.255 ************************************ 00:04:30.255 23:38:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:30.255 00:04:30.255 00:04:30.255 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.255 http://cunit.sourceforge.net/ 00:04:30.255 00:04:30.255 00:04:30.255 Suite: memory 00:04:30.255 Test: alloc and free memory map ...[2024-12-13 23:38:00.905142] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:30.255 passed 00:04:30.255 Test: mem map translation ...[2024-12-13 23:38:00.943989] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:30.255 [2024-12-13 23:38:00.944102] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:30.255 [2024-12-13 23:38:00.944219] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:30.255 [2024-12-13 23:38:00.944575] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:30.513 passed 00:04:30.513 Test: mem map registration ...[2024-12-13 23:38:01.014290] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:30.513 [2024-12-13 23:38:01.014328] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:30.513 passed 00:04:30.513 Test: mem map adjacent registrations ...passed 00:04:30.513 00:04:30.513 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.513 suites 1 1 n/a 0 0 00:04:30.513 tests 4 4 4 0 0 00:04:30.513 asserts 152 152 152 0 n/a 00:04:30.513 00:04:30.513 Elapsed time = 0.235 seconds 00:04:30.513 00:04:30.513 real 0m0.270s 00:04:30.513 user 0m0.243s 00:04:30.513 sys 0m0.020s 00:04:30.513 23:38:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:30.513 23:38:01 -- common/autotest_common.sh@10 -- # set +x 00:04:30.513 ************************************ 00:04:30.513 END TEST env_memory 00:04:30.513 ************************************ 00:04:30.513 23:38:01 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:30.513 23:38:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:30.513 23:38:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:30.513 23:38:01 -- 
common/autotest_common.sh@10 -- # set +x 00:04:30.513 ************************************ 00:04:30.513 START TEST env_vtophys 00:04:30.513 ************************************ 00:04:30.513 23:38:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:30.513 EAL: lib.eal log level changed from notice to debug 00:04:30.513 EAL: Detected lcore 0 as core 0 on socket 0 00:04:30.513 EAL: Detected lcore 1 as core 0 on socket 0 00:04:30.513 EAL: Detected lcore 2 as core 0 on socket 0 00:04:30.513 EAL: Detected lcore 3 as core 0 on socket 0 00:04:30.513 EAL: Detected lcore 4 as core 0 on socket 0 00:04:30.513 EAL: Detected lcore 5 as core 0 on socket 0 00:04:30.513 EAL: Detected lcore 6 as core 0 on socket 0 00:04:30.513 EAL: Detected lcore 7 as core 0 on socket 0 00:04:30.513 EAL: Detected lcore 8 as core 0 on socket 0 00:04:30.513 EAL: Detected lcore 9 as core 0 on socket 0 00:04:30.513 EAL: Maximum logical cores by configuration: 128 00:04:30.513 EAL: Detected CPU lcores: 10 00:04:30.513 EAL: Detected NUMA nodes: 1 00:04:30.513 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:30.513 EAL: Detected shared linkage of DPDK 00:04:30.513 EAL: No shared files mode enabled, IPC will be disabled 00:04:30.513 EAL: Selected IOVA mode 'PA' 00:04:30.513 EAL: Probing VFIO support... 00:04:30.513 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:30.513 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:30.513 EAL: Ask a virtual area of 0x2e000 bytes 00:04:30.513 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:30.513 EAL: Setting up physically contiguous memory... 00:04:30.513 EAL: Setting maximum number of open files to 524288 00:04:30.513 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:30.513 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:30.513 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.513 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:30.513 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.513 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.513 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:30.513 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:30.513 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.513 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:30.513 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.513 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.513 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:30.513 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:30.513 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.513 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:30.513 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.513 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.513 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:30.513 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:30.513 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.513 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:30.513 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.513 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.513 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:30.513 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 
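The four "Ask a virtual area" / "VA reserved for memseg list" exchanges above are the EAL pre-reserving its 2MiB-page memseg lists: a 0x61000-byte header plus a 0x400000000-byte (16 GiB) virtual window each, long before any hugepage is actually touched. Whether those windows can later be backed depends on the host's hugepage pool, which can be inspected through standard kernel interfaces (nothing SPDK-specific):

    # Inspect the 2MiB hugepage pool behind the EAL reservations above
    grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages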
00:04:30.513 EAL: Hugepages will be freed exactly as allocated. 00:04:30.513 EAL: No shared files mode enabled, IPC is disabled 00:04:30.513 EAL: No shared files mode enabled, IPC is disabled 00:04:30.808 EAL: TSC frequency is ~2600000 KHz 00:04:30.808 EAL: Main lcore 0 is ready (tid=7f866a2cda40;cpuset=[0]) 00:04:30.808 EAL: Trying to obtain current memory policy. 00:04:30.808 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.808 EAL: Restoring previous memory policy: 0 00:04:30.808 EAL: request: mp_malloc_sync 00:04:30.808 EAL: No shared files mode enabled, IPC is disabled 00:04:30.808 EAL: Heap on socket 0 was expanded by 2MB 00:04:30.808 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:30.808 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:30.808 EAL: Mem event callback 'spdk:(nil)' registered 00:04:30.808 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:30.808 00:04:30.808 00:04:30.808 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.808 http://cunit.sourceforge.net/ 00:04:30.808 00:04:30.808 00:04:30.808 Suite: components_suite 00:04:31.065 Test: vtophys_malloc_test ...passed 00:04:31.065 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:31.065 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.065 EAL: Restoring previous memory policy: 4 00:04:31.065 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.065 EAL: request: mp_malloc_sync 00:04:31.065 EAL: No shared files mode enabled, IPC is disabled 00:04:31.065 EAL: Heap on socket 0 was expanded by 4MB 00:04:31.065 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.065 EAL: request: mp_malloc_sync 00:04:31.065 EAL: No shared files mode enabled, IPC is disabled 00:04:31.065 EAL: Heap on socket 0 was shrunk by 4MB 00:04:31.065 EAL: Trying to obtain current memory policy. 00:04:31.065 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.065 EAL: Restoring previous memory policy: 4 00:04:31.065 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.065 EAL: request: mp_malloc_sync 00:04:31.065 EAL: No shared files mode enabled, IPC is disabled 00:04:31.065 EAL: Heap on socket 0 was expanded by 6MB 00:04:31.065 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.065 EAL: request: mp_malloc_sync 00:04:31.065 EAL: No shared files mode enabled, IPC is disabled 00:04:31.065 EAL: Heap on socket 0 was shrunk by 6MB 00:04:31.065 EAL: Trying to obtain current memory policy. 00:04:31.065 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.065 EAL: Restoring previous memory policy: 4 00:04:31.065 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.065 EAL: request: mp_malloc_sync 00:04:31.065 EAL: No shared files mode enabled, IPC is disabled 00:04:31.065 EAL: Heap on socket 0 was expanded by 10MB 00:04:31.065 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.065 EAL: request: mp_malloc_sync 00:04:31.065 EAL: No shared files mode enabled, IPC is disabled 00:04:31.065 EAL: Heap on socket 0 was shrunk by 10MB 00:04:31.065 EAL: Trying to obtain current memory policy. 
00:04:31.065 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.066 EAL: Restoring previous memory policy: 4 00:04:31.066 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.066 EAL: request: mp_malloc_sync 00:04:31.066 EAL: No shared files mode enabled, IPC is disabled 00:04:31.066 EAL: Heap on socket 0 was expanded by 18MB 00:04:31.066 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.066 EAL: request: mp_malloc_sync 00:04:31.066 EAL: No shared files mode enabled, IPC is disabled 00:04:31.066 EAL: Heap on socket 0 was shrunk by 18MB 00:04:31.066 EAL: Trying to obtain current memory policy. 00:04:31.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.066 EAL: Restoring previous memory policy: 4 00:04:31.066 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.066 EAL: request: mp_malloc_sync 00:04:31.066 EAL: No shared files mode enabled, IPC is disabled 00:04:31.066 EAL: Heap on socket 0 was expanded by 34MB 00:04:31.066 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.066 EAL: request: mp_malloc_sync 00:04:31.066 EAL: No shared files mode enabled, IPC is disabled 00:04:31.066 EAL: Heap on socket 0 was shrunk by 34MB 00:04:31.066 EAL: Trying to obtain current memory policy. 00:04:31.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.324 EAL: Restoring previous memory policy: 4 00:04:31.324 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.324 EAL: request: mp_malloc_sync 00:04:31.324 EAL: No shared files mode enabled, IPC is disabled 00:04:31.324 EAL: Heap on socket 0 was expanded by 66MB 00:04:31.324 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.324 EAL: request: mp_malloc_sync 00:04:31.324 EAL: No shared files mode enabled, IPC is disabled 00:04:31.324 EAL: Heap on socket 0 was shrunk by 66MB 00:04:31.324 EAL: Trying to obtain current memory policy. 00:04:31.324 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.324 EAL: Restoring previous memory policy: 4 00:04:31.324 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.324 EAL: request: mp_malloc_sync 00:04:31.324 EAL: No shared files mode enabled, IPC is disabled 00:04:31.324 EAL: Heap on socket 0 was expanded by 130MB 00:04:31.582 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.582 EAL: request: mp_malloc_sync 00:04:31.582 EAL: No shared files mode enabled, IPC is disabled 00:04:31.582 EAL: Heap on socket 0 was shrunk by 130MB 00:04:31.582 EAL: Trying to obtain current memory policy. 00:04:31.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.582 EAL: Restoring previous memory policy: 4 00:04:31.582 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.582 EAL: request: mp_malloc_sync 00:04:31.582 EAL: No shared files mode enabled, IPC is disabled 00:04:31.582 EAL: Heap on socket 0 was expanded by 258MB 00:04:31.840 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.099 EAL: request: mp_malloc_sync 00:04:32.099 EAL: No shared files mode enabled, IPC is disabled 00:04:32.099 EAL: Heap on socket 0 was shrunk by 258MB 00:04:32.099 EAL: Trying to obtain current memory policy. 
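The expansion sizes logged by this malloc test are not arbitrary: 4, 6, 10, 18, 34, 66, 130 and 258 MB so far (with 514 and 1026 MB still to come below) are exactly 2 + 2^k MiB for k = 1..10, so each request roughly doubles while staying offset from an exact power of two. The sequence can be reproduced directly:

    # The heap-expansion sizes in vtophys_malloc_test follow 2 + 2^k MiB
    for k in $(seq 1 10); do printf '%dMB ' $((2 + 2 ** k)); done; echo
    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB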
00:04:32.099 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.357 EAL: Restoring previous memory policy: 4 00:04:32.357 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.357 EAL: request: mp_malloc_sync 00:04:32.357 EAL: No shared files mode enabled, IPC is disabled 00:04:32.357 EAL: Heap on socket 0 was expanded by 514MB 00:04:32.615 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.873 EAL: request: mp_malloc_sync 00:04:32.873 EAL: No shared files mode enabled, IPC is disabled 00:04:32.873 EAL: Heap on socket 0 was shrunk by 514MB 00:04:33.131 EAL: Trying to obtain current memory policy. 00:04:33.131 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.389 EAL: Restoring previous memory policy: 4 00:04:33.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.389 EAL: request: mp_malloc_sync 00:04:33.389 EAL: No shared files mode enabled, IPC is disabled 00:04:33.389 EAL: Heap on socket 0 was expanded by 1026MB 00:04:34.324 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.324 EAL: request: mp_malloc_sync 00:04:34.324 EAL: No shared files mode enabled, IPC is disabled 00:04:34.324 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:35.258 passed 00:04:35.258 00:04:35.258 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.258 suites 1 1 n/a 0 0 00:04:35.258 tests 2 2 2 0 0 00:04:35.258 asserts 5334 5334 5334 0 n/a 00:04:35.258 00:04:35.258 Elapsed time = 4.305 seconds 00:04:35.258 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.258 EAL: request: mp_malloc_sync 00:04:35.258 EAL: No shared files mode enabled, IPC is disabled 00:04:35.258 EAL: Heap on socket 0 was shrunk by 2MB 00:04:35.258 EAL: No shared files mode enabled, IPC is disabled 00:04:35.258 EAL: No shared files mode enabled, IPC is disabled 00:04:35.258 EAL: No shared files mode enabled, IPC is disabled 00:04:35.258 00:04:35.258 real 0m4.559s 00:04:35.258 user 0m3.797s 00:04:35.258 sys 0m0.613s 00:04:35.258 ************************************ 00:04:35.258 END TEST env_vtophys 00:04:35.258 ************************************ 00:04:35.258 23:38:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:35.258 23:38:05 -- common/autotest_common.sh@10 -- # set +x 00:04:35.258 23:38:05 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:35.258 23:38:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:35.258 23:38:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:35.258 23:38:05 -- common/autotest_common.sh@10 -- # set +x 00:04:35.258 ************************************ 00:04:35.258 START TEST env_pci 00:04:35.258 ************************************ 00:04:35.258 23:38:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:35.258 00:04:35.258 00:04:35.258 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.258 http://cunit.sourceforge.net/ 00:04:35.258 00:04:35.258 00:04:35.258 Suite: pci 00:04:35.259 Test: pci_hook ...[2024-12-13 23:38:05.789215] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56066 has claimed it 00:04:35.259 passed 00:04:35.259 00:04:35.259 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.259 suites 1 1 n/a 0 0 00:04:35.259 tests 1 1 1 0 0 00:04:35.259 asserts 25 25 25 0 n/a 00:04:35.259 00:04:35.259 Elapsed time = 0.008 seconds 00:04:35.259 EAL: Cannot find device (10000:00:01.0) 00:04:35.259 EAL: Failed to attach device 
on primary process 00:04:35.259 ************************************ 00:04:35.259 END TEST env_pci 00:04:35.259 ************************************ 00:04:35.259 00:04:35.259 real 0m0.065s 00:04:35.259 user 0m0.027s 00:04:35.259 sys 0m0.037s 00:04:35.259 23:38:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:35.259 23:38:05 -- common/autotest_common.sh@10 -- # set +x 00:04:35.259 23:38:05 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:35.259 23:38:05 -- env/env.sh@15 -- # uname 00:04:35.259 23:38:05 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:35.259 23:38:05 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:35.259 23:38:05 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:35.259 23:38:05 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:35.259 23:38:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:35.259 23:38:05 -- common/autotest_common.sh@10 -- # set +x 00:04:35.259 ************************************ 00:04:35.259 START TEST env_dpdk_post_init 00:04:35.259 ************************************ 00:04:35.259 23:38:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:35.259 EAL: Detected CPU lcores: 10 00:04:35.259 EAL: Detected NUMA nodes: 1 00:04:35.259 EAL: Detected shared linkage of DPDK 00:04:35.259 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:35.259 EAL: Selected IOVA mode 'PA' 00:04:35.517 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:35.517 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:35.517 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:35.517 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:04:35.517 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:04:35.517 Starting DPDK initialization... 00:04:35.517 Starting SPDK post initialization... 00:04:35.517 SPDK NVMe probe 00:04:35.517 Attaching to 0000:00:06.0 00:04:35.517 Attaching to 0000:00:07.0 00:04:35.517 Attaching to 0000:00:08.0 00:04:35.517 Attaching to 0000:00:09.0 00:04:35.517 Attached to 0000:00:06.0 00:04:35.517 Attached to 0000:00:07.0 00:04:35.517 Attached to 0000:00:09.0 00:04:35.517 Attached to 0000:00:08.0 00:04:35.517 Cleaning up... 
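The env.sh lines traced above (env.sh@14 through @24) assemble the arguments for env_dpdk_post_init: a one-core mask plus, on Linux, a fixed --base-virtaddr. A reconstruction of that fragment; $testdir and the rationale comment are assumptions (pinning the base address is commonly done so DPDK's mappings stay clear of sanitizer shadow memory, and this run has SPDK_RUN_ASAN=1):

    # Reconstruction of the env.sh lines traced above, not the verbatim script
    argv='-c 0x1 '                       # run the app on core 0 only
    if [ "$(uname)" = Linux ]; then
        # fixed high base for DPDK's virtual mappings (assumed: ASAN friendliness)
        argv+=--base-virtaddr=0x200000000000
    fi
    # $argv intentionally unquoted so the flags split into separate words
    run_test env_dpdk_post_init \
        "$testdir/env_dpdk_post_init/env_dpdk_post_init" $argv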
00:04:35.517 00:04:35.517 real 0m0.235s 00:04:35.517 user 0m0.066s 00:04:35.517 sys 0m0.072s 00:04:35.517 23:38:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:35.517 23:38:06 -- common/autotest_common.sh@10 -- # set +x 00:04:35.517 ************************************ 00:04:35.517 END TEST env_dpdk_post_init 00:04:35.517 ************************************ 00:04:35.517 23:38:06 -- env/env.sh@26 -- # uname 00:04:35.517 23:38:06 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:35.517 23:38:06 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:35.517 23:38:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:35.517 23:38:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:35.517 23:38:06 -- common/autotest_common.sh@10 -- # set +x 00:04:35.517 ************************************ 00:04:35.517 START TEST env_mem_callbacks 00:04:35.517 ************************************ 00:04:35.517 23:38:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:35.517 EAL: Detected CPU lcores: 10 00:04:35.517 EAL: Detected NUMA nodes: 1 00:04:35.517 EAL: Detected shared linkage of DPDK 00:04:35.517 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:35.517 EAL: Selected IOVA mode 'PA' 00:04:35.776 00:04:35.776 00:04:35.776 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.776 http://cunit.sourceforge.net/ 00:04:35.776 00:04:35.776 00:04:35.776 Suite: memory 00:04:35.776 Test: test ... 00:04:35.776 register 0x200000200000 2097152 00:04:35.776 malloc 3145728 00:04:35.776 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:35.776 register 0x200000400000 4194304 00:04:35.776 buf 0x2000004fffc0 len 3145728 PASSED 00:04:35.776 malloc 64 00:04:35.776 buf 0x2000004ffec0 len 64 PASSED 00:04:35.776 malloc 4194304 00:04:35.776 register 0x200000800000 6291456 00:04:35.776 buf 0x2000009fffc0 len 4194304 PASSED 00:04:35.776 free 0x2000004fffc0 3145728 00:04:35.776 free 0x2000004ffec0 64 00:04:35.776 unregister 0x200000400000 4194304 PASSED 00:04:35.776 free 0x2000009fffc0 4194304 00:04:35.776 unregister 0x200000800000 6291456 PASSED 00:04:35.776 malloc 8388608 00:04:35.776 register 0x200000400000 10485760 00:04:35.776 buf 0x2000005fffc0 len 8388608 PASSED 00:04:35.776 free 0x2000005fffc0 8388608 00:04:35.776 unregister 0x200000400000 10485760 PASSED 00:04:35.776 passed 00:04:35.776 00:04:35.776 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.776 suites 1 1 n/a 0 0 00:04:35.776 tests 1 1 1 0 0 00:04:35.776 asserts 15 15 15 0 n/a 00:04:35.776 00:04:35.776 Elapsed time = 0.046 seconds 00:04:35.776 ************************************ 00:04:35.776 END TEST env_mem_callbacks 00:04:35.776 ************************************ 00:04:35.776 00:04:35.776 real 0m0.211s 00:04:35.776 user 0m0.066s 00:04:35.776 sys 0m0.043s 00:04:35.776 23:38:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:35.776 23:38:06 -- common/autotest_common.sh@10 -- # set +x 00:04:35.776 ************************************ 00:04:35.776 END TEST env 00:04:35.776 ************************************ 00:04:35.776 00:04:35.776 real 0m5.682s 00:04:35.776 user 0m4.344s 00:04:35.776 sys 0m0.987s 00:04:35.776 23:38:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:35.776 23:38:06 -- common/autotest_common.sh@10 -- # set +x 00:04:35.776 23:38:06 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
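Every START/END banner and real/user/sys triple in this log comes from the same run_test wrapper in autotest_common.sh, whose argument-count check is the recurring "'[' 2 -le 1 ']'" trace line. A simplified sketch consistent with the output above; the real helper also toggles xtrace and does more bookkeeping:

    # Simplified run_test sketch, inferred from this log's banners and timings
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                  # emits the real/user/sys lines seen above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }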
00:04:35.776 23:38:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:35.776 23:38:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:35.776 23:38:06 -- common/autotest_common.sh@10 -- # set +x 00:04:35.776 ************************************ 00:04:35.776 START TEST rpc 00:04:35.776 ************************************ 00:04:35.776 23:38:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:35.776 * Looking for test storage... 00:04:35.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:35.776 23:38:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:35.776 23:38:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:35.776 23:38:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:36.035 23:38:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:36.035 23:38:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:36.035 23:38:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:36.035 23:38:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:36.035 23:38:06 -- scripts/common.sh@335 -- # IFS=.-: 00:04:36.035 23:38:06 -- scripts/common.sh@335 -- # read -ra ver1 00:04:36.035 23:38:06 -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.035 23:38:06 -- scripts/common.sh@336 -- # read -ra ver2 00:04:36.035 23:38:06 -- scripts/common.sh@337 -- # local 'op=<' 00:04:36.035 23:38:06 -- scripts/common.sh@339 -- # ver1_l=2 00:04:36.035 23:38:06 -- scripts/common.sh@340 -- # ver2_l=1 00:04:36.035 23:38:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:36.035 23:38:06 -- scripts/common.sh@343 -- # case "$op" in 00:04:36.035 23:38:06 -- scripts/common.sh@344 -- # : 1 00:04:36.035 23:38:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:36.035 23:38:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.035 23:38:06 -- scripts/common.sh@364 -- # decimal 1 00:04:36.035 23:38:06 -- scripts/common.sh@352 -- # local d=1 00:04:36.035 23:38:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.035 23:38:06 -- scripts/common.sh@354 -- # echo 1 00:04:36.035 23:38:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:36.035 23:38:06 -- scripts/common.sh@365 -- # decimal 2 00:04:36.035 23:38:06 -- scripts/common.sh@352 -- # local d=2 00:04:36.035 23:38:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.035 23:38:06 -- scripts/common.sh@354 -- # echo 2 00:04:36.035 23:38:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:36.035 23:38:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:36.035 23:38:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:36.035 23:38:06 -- scripts/common.sh@367 -- # return 0 00:04:36.035 23:38:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.035 23:38:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:36.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.035 --rc genhtml_branch_coverage=1 00:04:36.035 --rc genhtml_function_coverage=1 00:04:36.035 --rc genhtml_legend=1 00:04:36.035 --rc geninfo_all_blocks=1 00:04:36.035 --rc geninfo_unexecuted_blocks=1 00:04:36.035 00:04:36.035 ' 00:04:36.035 23:38:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:36.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.035 --rc genhtml_branch_coverage=1 00:04:36.035 --rc genhtml_function_coverage=1 00:04:36.035 --rc genhtml_legend=1 00:04:36.035 --rc geninfo_all_blocks=1 00:04:36.035 --rc geninfo_unexecuted_blocks=1 00:04:36.035 00:04:36.035 ' 00:04:36.035 23:38:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:36.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.035 --rc genhtml_branch_coverage=1 00:04:36.035 --rc genhtml_function_coverage=1 00:04:36.035 --rc genhtml_legend=1 00:04:36.035 --rc geninfo_all_blocks=1 00:04:36.035 --rc geninfo_unexecuted_blocks=1 00:04:36.035 00:04:36.035 ' 00:04:36.035 23:38:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:36.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.035 --rc genhtml_branch_coverage=1 00:04:36.035 --rc genhtml_function_coverage=1 00:04:36.035 --rc genhtml_legend=1 00:04:36.035 --rc geninfo_all_blocks=1 00:04:36.035 --rc geninfo_unexecuted_blocks=1 00:04:36.035 00:04:36.035 ' 00:04:36.035 23:38:06 -- rpc/rpc.sh@65 -- # spdk_pid=56188 00:04:36.035 23:38:06 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.035 23:38:06 -- rpc/rpc.sh@67 -- # waitforlisten 56188 00:04:36.035 23:38:06 -- common/autotest_common.sh@829 -- # '[' -z 56188 ']' 00:04:36.035 23:38:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.035 23:38:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:36.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.035 23:38:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
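The tail of the trace above starts spdk_tgt (pid 56188), registers a cleanup trap, and calls waitforlisten, which prints the "Waiting for process..." message while polling the RPC socket. A minimal sketch of that loop; max_retries=100, the rpc_addr default and the message text match the trace, while $rootdir and the polling command are assumptions about the real helper:

    # Minimal waitforlisten sketch (assumed internals, matching the traced locals)
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid"                                  # bail out if the target died
            if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                                    # RPC server is up
            fi
            sleep 0.1
        done
        return 1
    }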
00:04:36.035 23:38:06 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:36.035 23:38:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:36.035 23:38:06 -- common/autotest_common.sh@10 -- # set +x 00:04:36.035 [2024-12-13 23:38:06.642177] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:36.035 [2024-12-13 23:38:06.642661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56188 ] 00:04:36.293 [2024-12-13 23:38:06.788949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.293 [2024-12-13 23:38:06.964162] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:36.293 [2024-12-13 23:38:06.964347] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:36.293 [2024-12-13 23:38:06.964363] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56188' to capture a snapshot of events at runtime. 00:04:36.293 [2024-12-13 23:38:06.964373] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56188 for offline analysis/debug. 00:04:36.293 [2024-12-13 23:38:06.964405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.677 23:38:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:37.677 23:38:08 -- common/autotest_common.sh@862 -- # return 0 00:04:37.677 23:38:08 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:37.677 23:38:08 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:37.677 23:38:08 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:37.677 23:38:08 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:37.677 23:38:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:37.677 23:38:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:37.677 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:37.677 ************************************ 00:04:37.677 START TEST rpc_integrity 00:04:37.677 ************************************ 00:04:37.677 23:38:08 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:37.677 23:38:08 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:37.677 23:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.677 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:37.677 23:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.677 23:38:08 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:37.677 23:38:08 -- rpc/rpc.sh@13 -- # jq length 00:04:37.677 23:38:08 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:37.677 23:38:08 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:37.677 23:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.677 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:37.677 23:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.677 23:38:08 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:37.677 23:38:08 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:37.677 23:38:08 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.677 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:37.677 23:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.677 23:38:08 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:37.677 { 00:04:37.677 "name": "Malloc0", 00:04:37.677 "aliases": [ 00:04:37.677 "b5843832-f0d1-4009-ad95-15630978a69b" 00:04:37.677 ], 00:04:37.677 "product_name": "Malloc disk", 00:04:37.677 "block_size": 512, 00:04:37.677 "num_blocks": 16384, 00:04:37.677 "uuid": "b5843832-f0d1-4009-ad95-15630978a69b", 00:04:37.677 "assigned_rate_limits": { 00:04:37.677 "rw_ios_per_sec": 0, 00:04:37.677 "rw_mbytes_per_sec": 0, 00:04:37.677 "r_mbytes_per_sec": 0, 00:04:37.677 "w_mbytes_per_sec": 0 00:04:37.677 }, 00:04:37.677 "claimed": false, 00:04:37.677 "zoned": false, 00:04:37.677 "supported_io_types": { 00:04:37.677 "read": true, 00:04:37.677 "write": true, 00:04:37.677 "unmap": true, 00:04:37.677 "write_zeroes": true, 00:04:37.677 "flush": true, 00:04:37.677 "reset": true, 00:04:37.677 "compare": false, 00:04:37.677 "compare_and_write": false, 00:04:37.677 "abort": true, 00:04:37.677 "nvme_admin": false, 00:04:37.677 "nvme_io": false 00:04:37.677 }, 00:04:37.677 "memory_domains": [ 00:04:37.677 { 00:04:37.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.677 "dma_device_type": 2 00:04:37.677 } 00:04:37.677 ], 00:04:37.677 "driver_specific": {} 00:04:37.677 } 00:04:37.677 ]' 00:04:37.677 23:38:08 -- rpc/rpc.sh@17 -- # jq length 00:04:37.677 23:38:08 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:37.677 23:38:08 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:37.677 23:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.677 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:37.677 [2024-12-13 23:38:08.262003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:37.677 [2024-12-13 23:38:08.262152] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:37.677 [2024-12-13 23:38:08.262179] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:04:37.677 [2024-12-13 23:38:08.262191] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:37.677 [2024-12-13 23:38:08.264311] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:37.677 [2024-12-13 23:38:08.264348] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:37.677 Passthru0 00:04:37.677 23:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.677 23:38:08 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:37.677 23:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.677 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:37.677 23:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.677 23:38:08 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:37.677 { 00:04:37.677 "name": "Malloc0", 00:04:37.677 "aliases": [ 00:04:37.677 "b5843832-f0d1-4009-ad95-15630978a69b" 00:04:37.677 ], 00:04:37.677 "product_name": "Malloc disk", 00:04:37.677 "block_size": 512, 00:04:37.677 "num_blocks": 16384, 00:04:37.677 "uuid": "b5843832-f0d1-4009-ad95-15630978a69b", 00:04:37.677 "assigned_rate_limits": { 00:04:37.677 "rw_ios_per_sec": 0, 00:04:37.677 "rw_mbytes_per_sec": 0, 00:04:37.677 "r_mbytes_per_sec": 0, 00:04:37.677 "w_mbytes_per_sec": 0 00:04:37.677 }, 00:04:37.677 "claimed": true, 00:04:37.677 "claim_type": "exclusive_write", 00:04:37.677 "zoned": 
false, 00:04:37.677 "supported_io_types": { 00:04:37.677 "read": true, 00:04:37.677 "write": true, 00:04:37.677 "unmap": true, 00:04:37.677 "write_zeroes": true, 00:04:37.677 "flush": true, 00:04:37.677 "reset": true, 00:04:37.677 "compare": false, 00:04:37.677 "compare_and_write": false, 00:04:37.677 "abort": true, 00:04:37.677 "nvme_admin": false, 00:04:37.677 "nvme_io": false 00:04:37.677 }, 00:04:37.677 "memory_domains": [ 00:04:37.677 { 00:04:37.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.677 "dma_device_type": 2 00:04:37.677 } 00:04:37.677 ], 00:04:37.677 "driver_specific": {} 00:04:37.677 }, 00:04:37.677 { 00:04:37.677 "name": "Passthru0", 00:04:37.677 "aliases": [ 00:04:37.677 "6dfd950f-9536-526e-8502-fd7afb138e9e" 00:04:37.677 ], 00:04:37.677 "product_name": "passthru", 00:04:37.677 "block_size": 512, 00:04:37.677 "num_blocks": 16384, 00:04:37.677 "uuid": "6dfd950f-9536-526e-8502-fd7afb138e9e", 00:04:37.677 "assigned_rate_limits": { 00:04:37.677 "rw_ios_per_sec": 0, 00:04:37.677 "rw_mbytes_per_sec": 0, 00:04:37.677 "r_mbytes_per_sec": 0, 00:04:37.677 "w_mbytes_per_sec": 0 00:04:37.677 }, 00:04:37.677 "claimed": false, 00:04:37.677 "zoned": false, 00:04:37.677 "supported_io_types": { 00:04:37.677 "read": true, 00:04:37.677 "write": true, 00:04:37.677 "unmap": true, 00:04:37.677 "write_zeroes": true, 00:04:37.677 "flush": true, 00:04:37.677 "reset": true, 00:04:37.677 "compare": false, 00:04:37.677 "compare_and_write": false, 00:04:37.677 "abort": true, 00:04:37.677 "nvme_admin": false, 00:04:37.677 "nvme_io": false 00:04:37.677 }, 00:04:37.677 "memory_domains": [ 00:04:37.677 { 00:04:37.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.677 "dma_device_type": 2 00:04:37.677 } 00:04:37.677 ], 00:04:37.677 "driver_specific": { 00:04:37.677 "passthru": { 00:04:37.677 "name": "Passthru0", 00:04:37.677 "base_bdev_name": "Malloc0" 00:04:37.677 } 00:04:37.677 } 00:04:37.677 } 00:04:37.677 ]' 00:04:37.677 23:38:08 -- rpc/rpc.sh@21 -- # jq length 00:04:37.677 23:38:08 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:37.677 23:38:08 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:37.677 23:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.677 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:37.677 23:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.677 23:38:08 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:37.677 23:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.677 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:37.677 23:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.677 23:38:08 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:37.677 23:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.677 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:37.677 23:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.677 23:38:08 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:37.677 23:38:08 -- rpc/rpc.sh@26 -- # jq length 00:04:37.677 ************************************ 00:04:37.677 END TEST rpc_integrity 00:04:37.677 ************************************ 00:04:37.677 23:38:08 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:37.677 00:04:37.677 real 0m0.244s 00:04:37.677 user 0m0.127s 00:04:37.677 sys 0m0.029s 00:04:37.677 23:38:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:37.677 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:37.937 23:38:08 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:37.937 
23:38:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:37.937 23:38:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:37.937 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:37.937 ************************************ 00:04:37.937 START TEST rpc_plugins 00:04:37.937 ************************************ 00:04:37.937 23:38:08 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:04:37.937 23:38:08 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:37.937 23:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.937 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:37.937 23:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.937 23:38:08 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:37.937 23:38:08 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:37.937 23:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.937 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:37.937 23:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.937 23:38:08 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:37.937 { 00:04:37.937 "name": "Malloc1", 00:04:37.937 "aliases": [ 00:04:37.937 "b08cbd27-1f48-432f-8d77-63f7d92d3307" 00:04:37.937 ], 00:04:37.937 "product_name": "Malloc disk", 00:04:37.937 "block_size": 4096, 00:04:37.937 "num_blocks": 256, 00:04:37.937 "uuid": "b08cbd27-1f48-432f-8d77-63f7d92d3307", 00:04:37.937 "assigned_rate_limits": { 00:04:37.937 "rw_ios_per_sec": 0, 00:04:37.937 "rw_mbytes_per_sec": 0, 00:04:37.937 "r_mbytes_per_sec": 0, 00:04:37.937 "w_mbytes_per_sec": 0 00:04:37.937 }, 00:04:37.937 "claimed": false, 00:04:37.937 "zoned": false, 00:04:37.937 "supported_io_types": { 00:04:37.937 "read": true, 00:04:37.937 "write": true, 00:04:37.937 "unmap": true, 00:04:37.937 "write_zeroes": true, 00:04:37.937 "flush": true, 00:04:37.937 "reset": true, 00:04:37.937 "compare": false, 00:04:37.937 "compare_and_write": false, 00:04:37.937 "abort": true, 00:04:37.937 "nvme_admin": false, 00:04:37.937 "nvme_io": false 00:04:37.937 }, 00:04:37.937 "memory_domains": [ 00:04:37.937 { 00:04:37.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.937 "dma_device_type": 2 00:04:37.937 } 00:04:37.937 ], 00:04:37.937 "driver_specific": {} 00:04:37.937 } 00:04:37.937 ]' 00:04:37.937 23:38:08 -- rpc/rpc.sh@32 -- # jq length 00:04:37.937 23:38:08 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:37.937 23:38:08 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:37.937 23:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.937 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:37.937 23:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.937 23:38:08 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:37.937 23:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.937 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:37.937 23:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.937 23:38:08 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:37.937 23:38:08 -- rpc/rpc.sh@36 -- # jq length 00:04:37.937 ************************************ 00:04:37.937 END TEST rpc_plugins 00:04:37.937 ************************************ 00:04:37.937 23:38:08 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:37.937 00:04:37.937 real 0m0.120s 00:04:37.937 user 0m0.058s 00:04:37.937 sys 0m0.018s 00:04:37.937 23:38:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:37.937 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:37.937 
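rpc_integrity and rpc_plugins above, and rpc_trace_cmd_test below, all follow the same shape: call rpc_cmd, then assert on the returned JSON with jq. The trace_get_info checks that follow, condensed into a sketch (the jq paths and expected values are taken from the trace; variable handling is simplified):

    # The jq assertion pattern used by rpc_trace_cmd_test below, condensed
    info=$(rpc_cmd trace_get_info)
    [ "$(jq length <<< "$info")" -gt 2 ]                     # several tpoint groups
    [ "$(jq 'has("tpoint_shm_path")' <<< "$info")" = true ]
    [ "$(jq 'has("tpoint_group_mask")' <<< "$info")" = true ]
    [ "$(jq -r .bdev.tpoint_mask <<< "$info")" != 0x0 ]      # bdev group enabled via -e bdev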
23:38:08 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:37.937 23:38:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:37.937 23:38:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:37.937 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:37.937 ************************************ 00:04:37.937 START TEST rpc_trace_cmd_test 00:04:37.937 ************************************ 00:04:37.937 23:38:08 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:04:37.937 23:38:08 -- rpc/rpc.sh@40 -- # local info 00:04:37.937 23:38:08 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:37.937 23:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.937 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:37.937 23:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.937 23:38:08 -- rpc/rpc.sh@42 -- # info='{ 00:04:37.937 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56188", 00:04:37.937 "tpoint_group_mask": "0x8", 00:04:37.937 "iscsi_conn": { 00:04:37.937 "mask": "0x2", 00:04:37.937 "tpoint_mask": "0x0" 00:04:37.937 }, 00:04:37.937 "scsi": { 00:04:37.937 "mask": "0x4", 00:04:37.937 "tpoint_mask": "0x0" 00:04:37.937 }, 00:04:37.937 "bdev": { 00:04:37.937 "mask": "0x8", 00:04:37.937 "tpoint_mask": "0xffffffffffffffff" 00:04:37.937 }, 00:04:37.937 "nvmf_rdma": { 00:04:37.937 "mask": "0x10", 00:04:37.937 "tpoint_mask": "0x0" 00:04:37.937 }, 00:04:37.937 "nvmf_tcp": { 00:04:37.937 "mask": "0x20", 00:04:37.937 "tpoint_mask": "0x0" 00:04:37.937 }, 00:04:37.937 "ftl": { 00:04:37.937 "mask": "0x40", 00:04:37.937 "tpoint_mask": "0x0" 00:04:37.937 }, 00:04:37.937 "blobfs": { 00:04:37.937 "mask": "0x80", 00:04:37.937 "tpoint_mask": "0x0" 00:04:37.937 }, 00:04:37.937 "dsa": { 00:04:37.937 "mask": "0x200", 00:04:37.937 "tpoint_mask": "0x0" 00:04:37.937 }, 00:04:37.937 "thread": { 00:04:37.937 "mask": "0x400", 00:04:37.937 "tpoint_mask": "0x0" 00:04:37.937 }, 00:04:37.937 "nvme_pcie": { 00:04:37.937 "mask": "0x800", 00:04:37.937 "tpoint_mask": "0x0" 00:04:37.937 }, 00:04:37.937 "iaa": { 00:04:37.937 "mask": "0x1000", 00:04:37.937 "tpoint_mask": "0x0" 00:04:37.937 }, 00:04:37.937 "nvme_tcp": { 00:04:37.937 "mask": "0x2000", 00:04:37.937 "tpoint_mask": "0x0" 00:04:37.937 }, 00:04:37.937 "bdev_nvme": { 00:04:37.937 "mask": "0x4000", 00:04:37.937 "tpoint_mask": "0x0" 00:04:37.937 } 00:04:37.937 }' 00:04:37.937 23:38:08 -- rpc/rpc.sh@43 -- # jq length 00:04:38.197 23:38:08 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:38.197 23:38:08 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:38.197 23:38:08 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:38.197 23:38:08 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:38.197 23:38:08 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:38.197 23:38:08 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:38.197 23:38:08 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:38.197 23:38:08 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:38.197 ************************************ 00:04:38.197 END TEST rpc_trace_cmd_test 00:04:38.197 ************************************ 00:04:38.197 23:38:08 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:38.197 00:04:38.197 real 0m0.164s 00:04:38.197 user 0m0.127s 00:04:38.197 sys 0m0.027s 00:04:38.197 23:38:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:38.197 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:38.197 23:38:08 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:38.197 23:38:08 -- rpc/rpc.sh@80 -- # 
rpc=rpc_cmd 00:04:38.197 23:38:08 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:38.197 23:38:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.197 23:38:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.197 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:38.197 ************************************ 00:04:38.197 START TEST rpc_daemon_integrity 00:04:38.197 ************************************ 00:04:38.197 23:38:08 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:38.197 23:38:08 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:38.197 23:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.197 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:38.197 23:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.197 23:38:08 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:38.197 23:38:08 -- rpc/rpc.sh@13 -- # jq length 00:04:38.197 23:38:08 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:38.197 23:38:08 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:38.197 23:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.197 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:38.197 23:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.198 23:38:08 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:38.198 23:38:08 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:38.198 23:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.198 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:38.198 23:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.198 23:38:08 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:38.198 { 00:04:38.198 "name": "Malloc2", 00:04:38.198 "aliases": [ 00:04:38.198 "affad568-d559-4ee6-883f-2beb5eef7423" 00:04:38.198 ], 00:04:38.198 "product_name": "Malloc disk", 00:04:38.198 "block_size": 512, 00:04:38.198 "num_blocks": 16384, 00:04:38.198 "uuid": "affad568-d559-4ee6-883f-2beb5eef7423", 00:04:38.198 "assigned_rate_limits": { 00:04:38.198 "rw_ios_per_sec": 0, 00:04:38.198 "rw_mbytes_per_sec": 0, 00:04:38.198 "r_mbytes_per_sec": 0, 00:04:38.198 "w_mbytes_per_sec": 0 00:04:38.198 }, 00:04:38.198 "claimed": false, 00:04:38.198 "zoned": false, 00:04:38.198 "supported_io_types": { 00:04:38.198 "read": true, 00:04:38.198 "write": true, 00:04:38.198 "unmap": true, 00:04:38.198 "write_zeroes": true, 00:04:38.198 "flush": true, 00:04:38.198 "reset": true, 00:04:38.198 "compare": false, 00:04:38.198 "compare_and_write": false, 00:04:38.198 "abort": true, 00:04:38.198 "nvme_admin": false, 00:04:38.198 "nvme_io": false 00:04:38.198 }, 00:04:38.198 "memory_domains": [ 00:04:38.198 { 00:04:38.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.198 "dma_device_type": 2 00:04:38.198 } 00:04:38.198 ], 00:04:38.198 "driver_specific": {} 00:04:38.198 } 00:04:38.198 ]' 00:04:38.457 23:38:08 -- rpc/rpc.sh@17 -- # jq length 00:04:38.457 23:38:08 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:38.457 23:38:08 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:38.457 23:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.457 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:38.457 [2024-12-13 23:38:08.961197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:38.457 [2024-12-13 23:38:08.961332] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:38.457 [2024-12-13 23:38:08.961355] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000009380 00:04:38.457 [2024-12-13 23:38:08.961366] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:38.457 [2024-12-13 23:38:08.963412] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:38.457 [2024-12-13 23:38:08.963441] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:38.457 Passthru0 00:04:38.457 23:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.457 23:38:08 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:38.457 23:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.457 23:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:38.457 23:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.457 23:38:08 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:38.457 { 00:04:38.457 "name": "Malloc2", 00:04:38.457 "aliases": [ 00:04:38.457 "affad568-d559-4ee6-883f-2beb5eef7423" 00:04:38.457 ], 00:04:38.457 "product_name": "Malloc disk", 00:04:38.457 "block_size": 512, 00:04:38.457 "num_blocks": 16384, 00:04:38.457 "uuid": "affad568-d559-4ee6-883f-2beb5eef7423", 00:04:38.457 "assigned_rate_limits": { 00:04:38.457 "rw_ios_per_sec": 0, 00:04:38.457 "rw_mbytes_per_sec": 0, 00:04:38.457 "r_mbytes_per_sec": 0, 00:04:38.457 "w_mbytes_per_sec": 0 00:04:38.457 }, 00:04:38.457 "claimed": true, 00:04:38.457 "claim_type": "exclusive_write", 00:04:38.457 "zoned": false, 00:04:38.457 "supported_io_types": { 00:04:38.457 "read": true, 00:04:38.457 "write": true, 00:04:38.457 "unmap": true, 00:04:38.457 "write_zeroes": true, 00:04:38.457 "flush": true, 00:04:38.457 "reset": true, 00:04:38.457 "compare": false, 00:04:38.457 "compare_and_write": false, 00:04:38.457 "abort": true, 00:04:38.457 "nvme_admin": false, 00:04:38.457 "nvme_io": false 00:04:38.457 }, 00:04:38.457 "memory_domains": [ 00:04:38.457 { 00:04:38.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.457 "dma_device_type": 2 00:04:38.457 } 00:04:38.457 ], 00:04:38.457 "driver_specific": {} 00:04:38.457 }, 00:04:38.457 { 00:04:38.457 "name": "Passthru0", 00:04:38.457 "aliases": [ 00:04:38.457 "d1ed3508-7306-5e8f-8f66-fb850b137c47" 00:04:38.457 ], 00:04:38.457 "product_name": "passthru", 00:04:38.457 "block_size": 512, 00:04:38.457 "num_blocks": 16384, 00:04:38.457 "uuid": "d1ed3508-7306-5e8f-8f66-fb850b137c47", 00:04:38.457 "assigned_rate_limits": { 00:04:38.457 "rw_ios_per_sec": 0, 00:04:38.457 "rw_mbytes_per_sec": 0, 00:04:38.457 "r_mbytes_per_sec": 0, 00:04:38.457 "w_mbytes_per_sec": 0 00:04:38.457 }, 00:04:38.457 "claimed": false, 00:04:38.457 "zoned": false, 00:04:38.457 "supported_io_types": { 00:04:38.457 "read": true, 00:04:38.457 "write": true, 00:04:38.457 "unmap": true, 00:04:38.457 "write_zeroes": true, 00:04:38.457 "flush": true, 00:04:38.457 "reset": true, 00:04:38.457 "compare": false, 00:04:38.457 "compare_and_write": false, 00:04:38.457 "abort": true, 00:04:38.457 "nvme_admin": false, 00:04:38.457 "nvme_io": false 00:04:38.457 }, 00:04:38.457 "memory_domains": [ 00:04:38.457 { 00:04:38.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.457 "dma_device_type": 2 00:04:38.457 } 00:04:38.457 ], 00:04:38.457 "driver_specific": { 00:04:38.457 "passthru": { 00:04:38.457 "name": "Passthru0", 00:04:38.457 "base_bdev_name": "Malloc2" 00:04:38.457 } 00:04:38.457 } 00:04:38.457 } 00:04:38.457 ]' 00:04:38.457 23:38:08 -- rpc/rpc.sh@21 -- # jq length 00:04:38.457 23:38:09 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:38.457 23:38:09 -- rpc/rpc.sh@23 -- # rpc_cmd 
bdev_passthru_delete Passthru0 00:04:38.457 23:38:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.457 23:38:09 -- common/autotest_common.sh@10 -- # set +x 00:04:38.457 23:38:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.457 23:38:09 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:38.457 23:38:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.457 23:38:09 -- common/autotest_common.sh@10 -- # set +x 00:04:38.457 23:38:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.457 23:38:09 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:38.457 23:38:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.457 23:38:09 -- common/autotest_common.sh@10 -- # set +x 00:04:38.457 23:38:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.457 23:38:09 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:38.457 23:38:09 -- rpc/rpc.sh@26 -- # jq length 00:04:38.457 ************************************ 00:04:38.457 END TEST rpc_daemon_integrity 00:04:38.457 ************************************ 00:04:38.457 23:38:09 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:38.457 00:04:38.457 real 0m0.231s 00:04:38.457 user 0m0.124s 00:04:38.457 sys 0m0.028s 00:04:38.457 23:38:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:38.457 23:38:09 -- common/autotest_common.sh@10 -- # set +x 00:04:38.457 23:38:09 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:38.457 23:38:09 -- rpc/rpc.sh@84 -- # killprocess 56188 00:04:38.457 23:38:09 -- common/autotest_common.sh@936 -- # '[' -z 56188 ']' 00:04:38.457 23:38:09 -- common/autotest_common.sh@940 -- # kill -0 56188 00:04:38.457 23:38:09 -- common/autotest_common.sh@941 -- # uname 00:04:38.457 23:38:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:38.457 23:38:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56188 00:04:38.457 killing process with pid 56188 00:04:38.457 23:38:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:38.457 23:38:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:38.457 23:38:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56188' 00:04:38.457 23:38:09 -- common/autotest_common.sh@955 -- # kill 56188 00:04:38.457 23:38:09 -- common/autotest_common.sh@960 -- # wait 56188 00:04:40.358 00:04:40.358 real 0m4.169s 00:04:40.358 user 0m4.687s 00:04:40.358 sys 0m0.623s 00:04:40.358 23:38:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:40.358 ************************************ 00:04:40.358 END TEST rpc 00:04:40.358 ************************************ 00:04:40.358 23:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:40.358 23:38:10 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:40.358 23:38:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:40.358 23:38:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:40.358 23:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:40.358 ************************************ 00:04:40.358 START TEST rpc_client 00:04:40.358 ************************************ 00:04:40.358 23:38:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:40.358 * Looking for test storage... 
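Once the daemon tests finish, the EXIT trap runs killprocess 56188; its checks (kill -0, the uname gate, the ps comm= lookup that returns reactor_0, the sudo comparison) are all visible above. A compact reconstruction; the sudo branch body is an assumption about the real helper:

    # Compact killprocess sketch matching the checks traced above
    killprocess() {
        local pid=$1 process_name=
        [ -n "$pid" ]
        kill -0 "$pid"                                   # is it still running?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            # the real helper signals sudo's child instead (assumption)
            kill "$(ps --no-headers --ppid "$pid" -o pid=)"
        else
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"
    }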
00:04:40.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:40.359 23:38:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:40.359 23:38:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:40.359 23:38:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:40.359 23:38:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:40.359 23:38:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:40.359 23:38:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:40.359 23:38:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:40.359 23:38:10 -- scripts/common.sh@335 -- # IFS=.-: 00:04:40.359 23:38:10 -- scripts/common.sh@335 -- # read -ra ver1 00:04:40.359 23:38:10 -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.359 23:38:10 -- scripts/common.sh@336 -- # read -ra ver2 00:04:40.359 23:38:10 -- scripts/common.sh@337 -- # local 'op=<' 00:04:40.359 23:38:10 -- scripts/common.sh@339 -- # ver1_l=2 00:04:40.359 23:38:10 -- scripts/common.sh@340 -- # ver2_l=1 00:04:40.359 23:38:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:40.359 23:38:10 -- scripts/common.sh@343 -- # case "$op" in 00:04:40.359 23:38:10 -- scripts/common.sh@344 -- # : 1 00:04:40.359 23:38:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:40.359 23:38:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.359 23:38:10 -- scripts/common.sh@364 -- # decimal 1 00:04:40.359 23:38:10 -- scripts/common.sh@352 -- # local d=1 00:04:40.359 23:38:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.359 23:38:10 -- scripts/common.sh@354 -- # echo 1 00:04:40.359 23:38:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:40.359 23:38:10 -- scripts/common.sh@365 -- # decimal 2 00:04:40.359 23:38:10 -- scripts/common.sh@352 -- # local d=2 00:04:40.359 23:38:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.359 23:38:10 -- scripts/common.sh@354 -- # echo 2 00:04:40.359 23:38:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:40.359 23:38:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:40.359 23:38:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:40.359 23:38:10 -- scripts/common.sh@367 -- # return 0 00:04:40.359 23:38:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.359 23:38:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:40.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.359 --rc genhtml_branch_coverage=1 00:04:40.359 --rc genhtml_function_coverage=1 00:04:40.359 --rc genhtml_legend=1 00:04:40.359 --rc geninfo_all_blocks=1 00:04:40.359 --rc geninfo_unexecuted_blocks=1 00:04:40.359 00:04:40.359 ' 00:04:40.359 23:38:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:40.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.359 --rc genhtml_branch_coverage=1 00:04:40.359 --rc genhtml_function_coverage=1 00:04:40.359 --rc genhtml_legend=1 00:04:40.359 --rc geninfo_all_blocks=1 00:04:40.359 --rc geninfo_unexecuted_blocks=1 00:04:40.359 00:04:40.359 ' 00:04:40.359 23:38:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:40.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.359 --rc genhtml_branch_coverage=1 00:04:40.359 --rc genhtml_function_coverage=1 00:04:40.359 --rc genhtml_legend=1 00:04:40.359 --rc geninfo_all_blocks=1 00:04:40.359 --rc geninfo_unexecuted_blocks=1 00:04:40.359 00:04:40.359 ' 00:04:40.359 
23:38:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:40.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.359 --rc genhtml_branch_coverage=1 00:04:40.359 --rc genhtml_function_coverage=1 00:04:40.359 --rc genhtml_legend=1 00:04:40.359 --rc geninfo_all_blocks=1 00:04:40.359 --rc geninfo_unexecuted_blocks=1 00:04:40.359 00:04:40.359 ' 00:04:40.359 23:38:10 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:40.359 OK 00:04:40.359 23:38:10 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:40.359 00:04:40.359 real 0m0.180s 00:04:40.359 user 0m0.095s 00:04:40.359 sys 0m0.090s 00:04:40.359 23:38:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:40.359 23:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:40.359 ************************************ 00:04:40.359 END TEST rpc_client 00:04:40.359 ************************************ 00:04:40.359 23:38:10 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:40.359 23:38:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:40.359 23:38:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:40.359 23:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:40.359 ************************************ 00:04:40.359 START TEST json_config 00:04:40.359 ************************************ 00:04:40.359 23:38:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:40.359 23:38:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:40.359 23:38:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:40.359 23:38:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:40.359 23:38:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:40.359 23:38:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:40.359 23:38:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:40.359 23:38:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:40.359 23:38:10 -- scripts/common.sh@335 -- # IFS=.-: 00:04:40.359 23:38:10 -- scripts/common.sh@335 -- # read -ra ver1 00:04:40.359 23:38:10 -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.359 23:38:10 -- scripts/common.sh@336 -- # read -ra ver2 00:04:40.359 23:38:10 -- scripts/common.sh@337 -- # local 'op=<' 00:04:40.359 23:38:10 -- scripts/common.sh@339 -- # ver1_l=2 00:04:40.359 23:38:10 -- scripts/common.sh@340 -- # ver2_l=1 00:04:40.359 23:38:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:40.359 23:38:10 -- scripts/common.sh@343 -- # case "$op" in 00:04:40.359 23:38:10 -- scripts/common.sh@344 -- # : 1 00:04:40.359 23:38:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:40.359 23:38:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.359 23:38:10 -- scripts/common.sh@364 -- # decimal 1 00:04:40.359 23:38:10 -- scripts/common.sh@352 -- # local d=1 00:04:40.359 23:38:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.359 23:38:10 -- scripts/common.sh@354 -- # echo 1 00:04:40.359 23:38:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:40.359 23:38:10 -- scripts/common.sh@365 -- # decimal 2 00:04:40.359 23:38:10 -- scripts/common.sh@352 -- # local d=2 00:04:40.359 23:38:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.359 23:38:10 -- scripts/common.sh@354 -- # echo 2 00:04:40.359 23:38:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:40.359 23:38:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:40.359 23:38:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:40.359 23:38:10 -- scripts/common.sh@367 -- # return 0 00:04:40.359 23:38:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.359 23:38:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:40.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.359 --rc genhtml_branch_coverage=1 00:04:40.359 --rc genhtml_function_coverage=1 00:04:40.359 --rc genhtml_legend=1 00:04:40.359 --rc geninfo_all_blocks=1 00:04:40.359 --rc geninfo_unexecuted_blocks=1 00:04:40.359 00:04:40.359 ' 00:04:40.359 23:38:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:40.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.359 --rc genhtml_branch_coverage=1 00:04:40.359 --rc genhtml_function_coverage=1 00:04:40.359 --rc genhtml_legend=1 00:04:40.359 --rc geninfo_all_blocks=1 00:04:40.359 --rc geninfo_unexecuted_blocks=1 00:04:40.359 00:04:40.359 ' 00:04:40.359 23:38:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:40.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.359 --rc genhtml_branch_coverage=1 00:04:40.359 --rc genhtml_function_coverage=1 00:04:40.359 --rc genhtml_legend=1 00:04:40.359 --rc geninfo_all_blocks=1 00:04:40.359 --rc geninfo_unexecuted_blocks=1 00:04:40.359 00:04:40.359 ' 00:04:40.359 23:38:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:40.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.359 --rc genhtml_branch_coverage=1 00:04:40.359 --rc genhtml_function_coverage=1 00:04:40.359 --rc genhtml_legend=1 00:04:40.359 --rc geninfo_all_blocks=1 00:04:40.359 --rc geninfo_unexecuted_blocks=1 00:04:40.359 00:04:40.359 ' 00:04:40.359 23:38:10 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:40.359 23:38:10 -- nvmf/common.sh@7 -- # uname -s 00:04:40.359 23:38:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:40.359 23:38:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:40.359 23:38:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:40.359 23:38:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:40.359 23:38:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:40.359 23:38:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:40.359 23:38:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:40.359 23:38:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:40.359 23:38:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:40.359 23:38:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:40.359 23:38:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf00b051-453e-4584-8b01-f6b84500e948 
00:04:40.359 23:38:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=bf00b051-453e-4584-8b01-f6b84500e948 00:04:40.359 23:38:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:40.359 23:38:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:40.359 23:38:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:40.359 23:38:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:40.359 23:38:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:40.359 23:38:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:40.359 23:38:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:40.360 23:38:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.360 23:38:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.360 23:38:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.360 23:38:10 -- paths/export.sh@5 -- # export PATH 00:04:40.360 23:38:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.360 23:38:10 -- nvmf/common.sh@46 -- # : 0 00:04:40.360 23:38:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:40.360 23:38:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:40.360 23:38:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:40.360 23:38:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:40.360 23:38:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:40.360 23:38:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:40.360 23:38:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:40.360 23:38:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:40.360 23:38:10 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:40.360 23:38:10 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:40.360 23:38:10 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:40.360 23:38:10 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:40.360 23:38:10 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:40.360 WARNING: No tests are enabled so not running JSON configuration tests 00:04:40.360 23:38:10 -- json_config/json_config.sh@27 -- # exit 0 00:04:40.360 00:04:40.360 real 0m0.141s 00:04:40.360 user 0m0.088s 00:04:40.360 sys 0m0.052s 00:04:40.360 23:38:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:40.360 23:38:11 -- common/autotest_common.sh@10 -- # set +x 00:04:40.360 ************************************ 00:04:40.360 END TEST json_config 00:04:40.360 ************************************ 00:04:40.360 23:38:11 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:40.360 23:38:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:40.360 23:38:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:40.360 23:38:11 -- common/autotest_common.sh@10 -- # set +x 00:04:40.360 ************************************ 00:04:40.360 START TEST json_config_extra_key 00:04:40.360 ************************************ 00:04:40.360 23:38:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:40.360 23:38:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:40.360 23:38:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:40.360 23:38:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:40.619 23:38:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:40.619 23:38:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:40.619 23:38:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:40.619 23:38:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:40.619 23:38:11 -- scripts/common.sh@335 -- # IFS=.-: 00:04:40.619 23:38:11 -- scripts/common.sh@335 -- # read -ra ver1 00:04:40.619 23:38:11 -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.619 23:38:11 -- scripts/common.sh@336 -- # read -ra ver2 00:04:40.619 23:38:11 -- scripts/common.sh@337 -- # local 'op=<' 00:04:40.619 23:38:11 -- scripts/common.sh@339 -- # ver1_l=2 00:04:40.619 23:38:11 -- scripts/common.sh@340 -- # ver2_l=1 00:04:40.619 23:38:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:40.619 23:38:11 -- scripts/common.sh@343 -- # case "$op" in 00:04:40.619 23:38:11 -- scripts/common.sh@344 -- # : 1 00:04:40.619 23:38:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:40.619 23:38:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.619 23:38:11 -- scripts/common.sh@364 -- # decimal 1 00:04:40.619 23:38:11 -- scripts/common.sh@352 -- # local d=1 00:04:40.619 23:38:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.619 23:38:11 -- scripts/common.sh@354 -- # echo 1 00:04:40.619 23:38:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:40.619 23:38:11 -- scripts/common.sh@365 -- # decimal 2 00:04:40.619 23:38:11 -- scripts/common.sh@352 -- # local d=2 00:04:40.619 23:38:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.619 23:38:11 -- scripts/common.sh@354 -- # echo 2 00:04:40.619 23:38:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:40.619 23:38:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:40.619 23:38:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:40.619 23:38:11 -- scripts/common.sh@367 -- # return 0 00:04:40.619 23:38:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.619 23:38:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:40.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.619 --rc genhtml_branch_coverage=1 00:04:40.619 --rc genhtml_function_coverage=1 00:04:40.619 --rc genhtml_legend=1 00:04:40.619 --rc geninfo_all_blocks=1 00:04:40.619 --rc geninfo_unexecuted_blocks=1 00:04:40.619 00:04:40.619 ' 00:04:40.619 23:38:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:40.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.619 --rc genhtml_branch_coverage=1 00:04:40.619 --rc genhtml_function_coverage=1 00:04:40.619 --rc genhtml_legend=1 00:04:40.619 --rc geninfo_all_blocks=1 00:04:40.619 --rc geninfo_unexecuted_blocks=1 00:04:40.619 00:04:40.619 ' 00:04:40.619 23:38:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:40.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.619 --rc genhtml_branch_coverage=1 00:04:40.619 --rc genhtml_function_coverage=1 00:04:40.619 --rc genhtml_legend=1 00:04:40.619 --rc geninfo_all_blocks=1 00:04:40.619 --rc geninfo_unexecuted_blocks=1 00:04:40.619 00:04:40.619 ' 00:04:40.619 23:38:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:40.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.619 --rc genhtml_branch_coverage=1 00:04:40.619 --rc genhtml_function_coverage=1 00:04:40.619 --rc genhtml_legend=1 00:04:40.619 --rc geninfo_all_blocks=1 00:04:40.619 --rc geninfo_unexecuted_blocks=1 00:04:40.619 00:04:40.619 ' 00:04:40.619 23:38:11 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:40.619 23:38:11 -- nvmf/common.sh@7 -- # uname -s 00:04:40.619 23:38:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:40.619 23:38:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:40.619 23:38:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:40.619 23:38:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:40.619 23:38:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:40.619 23:38:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:40.619 23:38:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:40.619 23:38:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:40.619 23:38:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:40.619 23:38:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:40.619 23:38:11 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf00b051-453e-4584-8b01-f6b84500e948 00:04:40.619 23:38:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=bf00b051-453e-4584-8b01-f6b84500e948 00:04:40.619 23:38:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:40.619 23:38:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:40.619 23:38:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:40.619 23:38:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:40.619 23:38:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:40.619 23:38:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:40.619 23:38:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:40.619 23:38:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.619 23:38:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.619 23:38:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.619 23:38:11 -- paths/export.sh@5 -- # export PATH 00:04:40.619 23:38:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.619 23:38:11 -- nvmf/common.sh@46 -- # : 0 00:04:40.619 23:38:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:40.619 23:38:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:40.619 23:38:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:40.619 23:38:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:40.619 23:38:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:40.619 23:38:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:40.619 23:38:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:40.619 23:38:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:40.619 23:38:11 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:40.619 23:38:11 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:40.619 23:38:11 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:40.619 23:38:11 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:40.619 23:38:11 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:40.619 23:38:11 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:40.619 23:38:11 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:40.619 23:38:11 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:40.619 23:38:11 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:40.619 23:38:11 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:40.619 INFO: launching applications... 00:04:40.619 23:38:11 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:40.619 23:38:11 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:40.619 23:38:11 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:40.619 23:38:11 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:40.619 23:38:11 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:40.619 23:38:11 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56506 00:04:40.619 23:38:11 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:40.619 Waiting for target to run... 00:04:40.619 23:38:11 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56506 /var/tmp/spdk_tgt.sock 00:04:40.619 23:38:11 -- common/autotest_common.sh@829 -- # '[' -z 56506 ']' 00:04:40.619 23:38:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:40.619 23:38:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:40.619 23:38:11 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:40.619 23:38:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:40.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:40.619 23:38:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:40.619 23:38:11 -- common/autotest_common.sh@10 -- # set +x 00:04:40.619 [2024-12-13 23:38:11.233242] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
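The spdk_tgt launch traced above loads extra_key.json at startup via --json on a dedicated RPC socket, and the test then stops it with SIGINT while polling the pid. A minimal manual version of that launch-and-teardown cycle, reusing the binary and flags shown above (the pid handling mirrors the kill -SIGINT / kill -0 loop that follows):

  SPDK=/home/vagrant/spdk_repo/spdk
  # launch the target with a JSON config and a non-default RPC socket
  $SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json $SPDK/test/json_config/extra_key.json &
  pid=$!
  # request shutdown and wait until the process is gone
  kill -SIGINT $pid
  while kill -0 $pid 2>/dev/null; do sleep 0.5; done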
00:04:40.619 [2024-12-13 23:38:11.233514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56506 ] 00:04:40.878 [2024-12-13 23:38:11.545660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.136 [2024-12-13 23:38:11.681466] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:41.136 [2024-12-13 23:38:11.681750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.394 23:38:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:41.394 23:38:12 -- common/autotest_common.sh@862 -- # return 0 00:04:41.394 23:38:12 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:41.394 00:04:41.394 23:38:12 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:41.394 INFO: shutting down applications... 00:04:41.394 23:38:12 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:41.394 23:38:12 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:41.394 23:38:12 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:41.394 23:38:12 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56506 ]] 00:04:41.394 23:38:12 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56506 00:04:41.394 23:38:12 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:41.394 23:38:12 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:41.394 23:38:12 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56506 00:04:41.394 23:38:12 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:41.960 23:38:12 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:41.960 23:38:12 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:41.960 23:38:12 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56506 00:04:41.960 23:38:12 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:42.526 23:38:13 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:42.526 23:38:13 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:42.526 23:38:13 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56506 00:04:42.526 23:38:13 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:43.097 23:38:13 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:43.097 23:38:13 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:43.097 SPDK target shutdown done 00:04:43.097 Success 00:04:43.097 23:38:13 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56506 00:04:43.097 23:38:13 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:43.097 23:38:13 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:43.097 23:38:13 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:43.097 23:38:13 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:43.097 23:38:13 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:43.097 00:04:43.097 real 0m2.543s 00:04:43.097 user 0m2.275s 00:04:43.097 sys 0m0.391s 00:04:43.097 ************************************ 00:04:43.097 END TEST json_config_extra_key 00:04:43.097 ************************************ 00:04:43.097 23:38:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:43.097 23:38:13 -- 
common/autotest_common.sh@10 -- # set +x 00:04:43.097 23:38:13 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:43.097 23:38:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.097 23:38:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.097 23:38:13 -- common/autotest_common.sh@10 -- # set +x 00:04:43.097 ************************************ 00:04:43.097 START TEST alias_rpc 00:04:43.097 ************************************ 00:04:43.097 23:38:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:43.097 * Looking for test storage... 00:04:43.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:43.097 23:38:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:43.097 23:38:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:43.097 23:38:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:43.097 23:38:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:43.097 23:38:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:43.097 23:38:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:43.097 23:38:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:43.097 23:38:13 -- scripts/common.sh@335 -- # IFS=.-: 00:04:43.097 23:38:13 -- scripts/common.sh@335 -- # read -ra ver1 00:04:43.097 23:38:13 -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.097 23:38:13 -- scripts/common.sh@336 -- # read -ra ver2 00:04:43.097 23:38:13 -- scripts/common.sh@337 -- # local 'op=<' 00:04:43.097 23:38:13 -- scripts/common.sh@339 -- # ver1_l=2 00:04:43.097 23:38:13 -- scripts/common.sh@340 -- # ver2_l=1 00:04:43.097 23:38:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:43.097 23:38:13 -- scripts/common.sh@343 -- # case "$op" in 00:04:43.097 23:38:13 -- scripts/common.sh@344 -- # : 1 00:04:43.097 23:38:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:43.097 23:38:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.097 23:38:13 -- scripts/common.sh@364 -- # decimal 1 00:04:43.097 23:38:13 -- scripts/common.sh@352 -- # local d=1 00:04:43.097 23:38:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.097 23:38:13 -- scripts/common.sh@354 -- # echo 1 00:04:43.097 23:38:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:43.097 23:38:13 -- scripts/common.sh@365 -- # decimal 2 00:04:43.097 23:38:13 -- scripts/common.sh@352 -- # local d=2 00:04:43.097 23:38:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.097 23:38:13 -- scripts/common.sh@354 -- # echo 2 00:04:43.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
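The alias_rpc run starting here boots another spdk_tgt and then replays configuration through scripts/rpc.py load_config -i, which shows up further down in this trace. A rough sketch of that round trip; using save_config as the source of the JSON is an assumption made for illustration (it is a standard rpc.py command but is not part of this run):

  SPDK=/home/vagrant/spdk_repo/spdk
  # dump the running target's configuration, then feed it straight back in
  $SPDK/scripts/rpc.py save_config > /tmp/alias_rpc_config.json
  $SPDK/scripts/rpc.py load_config -i < /tmp/alias_rpc_config.json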
00:04:43.097 23:38:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:43.097 23:38:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:43.097 23:38:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:43.097 23:38:13 -- scripts/common.sh@367 -- # return 0 00:04:43.097 23:38:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.097 23:38:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:43.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.097 --rc genhtml_branch_coverage=1 00:04:43.097 --rc genhtml_function_coverage=1 00:04:43.097 --rc genhtml_legend=1 00:04:43.097 --rc geninfo_all_blocks=1 00:04:43.097 --rc geninfo_unexecuted_blocks=1 00:04:43.097 00:04:43.097 ' 00:04:43.097 23:38:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:43.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.097 --rc genhtml_branch_coverage=1 00:04:43.097 --rc genhtml_function_coverage=1 00:04:43.097 --rc genhtml_legend=1 00:04:43.097 --rc geninfo_all_blocks=1 00:04:43.097 --rc geninfo_unexecuted_blocks=1 00:04:43.097 00:04:43.097 ' 00:04:43.097 23:38:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:43.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.097 --rc genhtml_branch_coverage=1 00:04:43.097 --rc genhtml_function_coverage=1 00:04:43.097 --rc genhtml_legend=1 00:04:43.097 --rc geninfo_all_blocks=1 00:04:43.097 --rc geninfo_unexecuted_blocks=1 00:04:43.097 00:04:43.097 ' 00:04:43.097 23:38:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:43.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.097 --rc genhtml_branch_coverage=1 00:04:43.097 --rc genhtml_function_coverage=1 00:04:43.097 --rc genhtml_legend=1 00:04:43.097 --rc geninfo_all_blocks=1 00:04:43.097 --rc geninfo_unexecuted_blocks=1 00:04:43.097 00:04:43.097 ' 00:04:43.097 23:38:13 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:43.097 23:38:13 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56586 00:04:43.097 23:38:13 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56586 00:04:43.097 23:38:13 -- common/autotest_common.sh@829 -- # '[' -z 56586 ']' 00:04:43.097 23:38:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.097 23:38:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.097 23:38:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.097 23:38:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.097 23:38:13 -- common/autotest_common.sh@10 -- # set +x 00:04:43.097 23:38:13 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.356 [2024-12-13 23:38:13.834436] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:43.356 [2024-12-13 23:38:13.835003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56586 ] 00:04:43.356 [2024-12-13 23:38:13.980472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.614 [2024-12-13 23:38:14.162797] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:43.614 [2024-12-13 23:38:14.162998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.023 23:38:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:45.023 23:38:15 -- common/autotest_common.sh@862 -- # return 0 00:04:45.023 23:38:15 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:45.023 23:38:15 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56586 00:04:45.023 23:38:15 -- common/autotest_common.sh@936 -- # '[' -z 56586 ']' 00:04:45.023 23:38:15 -- common/autotest_common.sh@940 -- # kill -0 56586 00:04:45.023 23:38:15 -- common/autotest_common.sh@941 -- # uname 00:04:45.023 23:38:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:45.023 23:38:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56586 00:04:45.023 killing process with pid 56586 00:04:45.023 23:38:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:45.023 23:38:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:45.023 23:38:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56586' 00:04:45.023 23:38:15 -- common/autotest_common.sh@955 -- # kill 56586 00:04:45.023 23:38:15 -- common/autotest_common.sh@960 -- # wait 56586 00:04:46.398 ************************************ 00:04:46.398 END TEST alias_rpc 00:04:46.398 ************************************ 00:04:46.398 00:04:46.398 real 0m3.394s 00:04:46.398 user 0m3.625s 00:04:46.398 sys 0m0.396s 00:04:46.398 23:38:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:46.398 23:38:17 -- common/autotest_common.sh@10 -- # set +x 00:04:46.398 23:38:17 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:04:46.398 23:38:17 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:46.398 23:38:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.398 23:38:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.398 23:38:17 -- common/autotest_common.sh@10 -- # set +x 00:04:46.398 ************************************ 00:04:46.398 START TEST spdkcli_tcp 00:04:46.398 ************************************ 00:04:46.398 23:38:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:46.398 * Looking for test storage... 
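The spdkcli_tcp test beginning here is the one place in this log where the RPC socket is exposed over TCP: socat bridges port 9998 to /var/tmp/spdk.sock and rpc.py is pointed at 127.0.0.1:9998 (both commands appear verbatim in the trace below). A standalone sketch of that bridge, assuming a target is already listening on the default socket:

  SPDK=/home/vagrant/spdk_repo/spdk
  # expose the unix-domain RPC socket on TCP port 9998
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  # issue an RPC over TCP; -r and -t are the retry count and timeout the test uses
  $SPDK/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods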
00:04:46.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:46.398 23:38:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:46.398 23:38:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:46.398 23:38:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:46.657 23:38:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:46.657 23:38:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:46.657 23:38:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:46.657 23:38:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:46.657 23:38:17 -- scripts/common.sh@335 -- # IFS=.-: 00:04:46.657 23:38:17 -- scripts/common.sh@335 -- # read -ra ver1 00:04:46.657 23:38:17 -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.657 23:38:17 -- scripts/common.sh@336 -- # read -ra ver2 00:04:46.657 23:38:17 -- scripts/common.sh@337 -- # local 'op=<' 00:04:46.657 23:38:17 -- scripts/common.sh@339 -- # ver1_l=2 00:04:46.657 23:38:17 -- scripts/common.sh@340 -- # ver2_l=1 00:04:46.657 23:38:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:46.657 23:38:17 -- scripts/common.sh@343 -- # case "$op" in 00:04:46.657 23:38:17 -- scripts/common.sh@344 -- # : 1 00:04:46.657 23:38:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:46.657 23:38:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:46.657 23:38:17 -- scripts/common.sh@364 -- # decimal 1 00:04:46.657 23:38:17 -- scripts/common.sh@352 -- # local d=1 00:04:46.657 23:38:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.657 23:38:17 -- scripts/common.sh@354 -- # echo 1 00:04:46.657 23:38:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:46.657 23:38:17 -- scripts/common.sh@365 -- # decimal 2 00:04:46.657 23:38:17 -- scripts/common.sh@352 -- # local d=2 00:04:46.657 23:38:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.657 23:38:17 -- scripts/common.sh@354 -- # echo 2 00:04:46.657 23:38:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:46.657 23:38:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:46.657 23:38:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:46.657 23:38:17 -- scripts/common.sh@367 -- # return 0 00:04:46.657 23:38:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.657 23:38:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:46.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.657 --rc genhtml_branch_coverage=1 00:04:46.657 --rc genhtml_function_coverage=1 00:04:46.657 --rc genhtml_legend=1 00:04:46.657 --rc geninfo_all_blocks=1 00:04:46.657 --rc geninfo_unexecuted_blocks=1 00:04:46.657 00:04:46.657 ' 00:04:46.657 23:38:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:46.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.657 --rc genhtml_branch_coverage=1 00:04:46.657 --rc genhtml_function_coverage=1 00:04:46.657 --rc genhtml_legend=1 00:04:46.657 --rc geninfo_all_blocks=1 00:04:46.657 --rc geninfo_unexecuted_blocks=1 00:04:46.657 00:04:46.657 ' 00:04:46.657 23:38:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:46.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.657 --rc genhtml_branch_coverage=1 00:04:46.657 --rc genhtml_function_coverage=1 00:04:46.657 --rc genhtml_legend=1 00:04:46.657 --rc geninfo_all_blocks=1 00:04:46.657 --rc geninfo_unexecuted_blocks=1 00:04:46.657 00:04:46.657 ' 00:04:46.657 23:38:17 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:46.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.657 --rc genhtml_branch_coverage=1 00:04:46.657 --rc genhtml_function_coverage=1 00:04:46.657 --rc genhtml_legend=1 00:04:46.657 --rc geninfo_all_blocks=1 00:04:46.657 --rc geninfo_unexecuted_blocks=1 00:04:46.657 00:04:46.657 ' 00:04:46.657 23:38:17 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:46.657 23:38:17 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:46.657 23:38:17 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:46.657 23:38:17 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:46.657 23:38:17 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:46.657 23:38:17 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:46.657 23:38:17 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:46.657 23:38:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:46.657 23:38:17 -- common/autotest_common.sh@10 -- # set +x 00:04:46.657 23:38:17 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=56694 00:04:46.657 23:38:17 -- spdkcli/tcp.sh@27 -- # waitforlisten 56694 00:04:46.657 23:38:17 -- common/autotest_common.sh@829 -- # '[' -z 56694 ']' 00:04:46.657 23:38:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.657 23:38:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.657 23:38:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.657 23:38:17 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:46.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.657 23:38:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.657 23:38:17 -- common/autotest_common.sh@10 -- # set +x 00:04:46.657 [2024-12-13 23:38:17.269288] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:46.657 [2024-12-13 23:38:17.269576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56694 ] 00:04:46.915 [2024-12-13 23:38:17.420749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.915 [2024-12-13 23:38:17.596075] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:46.915 [2024-12-13 23:38:17.596500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.915 [2024-12-13 23:38:17.596548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.291 23:38:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.291 23:38:18 -- common/autotest_common.sh@862 -- # return 0 00:04:48.291 23:38:18 -- spdkcli/tcp.sh@31 -- # socat_pid=56713 00:04:48.291 23:38:18 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:48.291 23:38:18 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:48.291 [ 00:04:48.291 "bdev_malloc_delete", 00:04:48.291 "bdev_malloc_create", 00:04:48.291 "bdev_null_resize", 00:04:48.291 "bdev_null_delete", 00:04:48.291 "bdev_null_create", 00:04:48.291 "bdev_nvme_cuse_unregister", 00:04:48.291 "bdev_nvme_cuse_register", 00:04:48.291 "bdev_opal_new_user", 00:04:48.291 "bdev_opal_set_lock_state", 00:04:48.291 "bdev_opal_delete", 00:04:48.291 "bdev_opal_get_info", 00:04:48.291 "bdev_opal_create", 00:04:48.291 "bdev_nvme_opal_revert", 00:04:48.291 "bdev_nvme_opal_init", 00:04:48.291 "bdev_nvme_send_cmd", 00:04:48.291 "bdev_nvme_get_path_iostat", 00:04:48.291 "bdev_nvme_get_mdns_discovery_info", 00:04:48.291 "bdev_nvme_stop_mdns_discovery", 00:04:48.291 "bdev_nvme_start_mdns_discovery", 00:04:48.291 "bdev_nvme_set_multipath_policy", 00:04:48.291 "bdev_nvme_set_preferred_path", 00:04:48.291 "bdev_nvme_get_io_paths", 00:04:48.291 "bdev_nvme_remove_error_injection", 00:04:48.291 "bdev_nvme_add_error_injection", 00:04:48.291 "bdev_nvme_get_discovery_info", 00:04:48.291 "bdev_nvme_stop_discovery", 00:04:48.291 "bdev_nvme_start_discovery", 00:04:48.291 "bdev_nvme_get_controller_health_info", 00:04:48.291 "bdev_nvme_disable_controller", 00:04:48.291 "bdev_nvme_enable_controller", 00:04:48.291 "bdev_nvme_reset_controller", 00:04:48.291 "bdev_nvme_get_transport_statistics", 00:04:48.291 "bdev_nvme_apply_firmware", 00:04:48.291 "bdev_nvme_detach_controller", 00:04:48.291 "bdev_nvme_get_controllers", 00:04:48.291 "bdev_nvme_attach_controller", 00:04:48.291 "bdev_nvme_set_hotplug", 00:04:48.291 "bdev_nvme_set_options", 00:04:48.291 "bdev_passthru_delete", 00:04:48.291 "bdev_passthru_create", 00:04:48.291 "bdev_lvol_grow_lvstore", 00:04:48.291 "bdev_lvol_get_lvols", 00:04:48.291 "bdev_lvol_get_lvstores", 00:04:48.291 "bdev_lvol_delete", 00:04:48.291 "bdev_lvol_set_read_only", 00:04:48.291 "bdev_lvol_resize", 00:04:48.291 "bdev_lvol_decouple_parent", 00:04:48.291 "bdev_lvol_inflate", 00:04:48.291 "bdev_lvol_rename", 00:04:48.291 "bdev_lvol_clone_bdev", 00:04:48.291 "bdev_lvol_clone", 00:04:48.291 "bdev_lvol_snapshot", 00:04:48.291 "bdev_lvol_create", 00:04:48.291 "bdev_lvol_delete_lvstore", 00:04:48.291 "bdev_lvol_rename_lvstore", 00:04:48.291 "bdev_lvol_create_lvstore", 00:04:48.291 "bdev_raid_set_options", 00:04:48.291 "bdev_raid_remove_base_bdev", 00:04:48.291 "bdev_raid_add_base_bdev", 
00:04:48.291 "bdev_raid_delete", 00:04:48.291 "bdev_raid_create", 00:04:48.291 "bdev_raid_get_bdevs", 00:04:48.291 "bdev_error_inject_error", 00:04:48.291 "bdev_error_delete", 00:04:48.291 "bdev_error_create", 00:04:48.291 "bdev_split_delete", 00:04:48.291 "bdev_split_create", 00:04:48.291 "bdev_delay_delete", 00:04:48.291 "bdev_delay_create", 00:04:48.291 "bdev_delay_update_latency", 00:04:48.291 "bdev_zone_block_delete", 00:04:48.291 "bdev_zone_block_create", 00:04:48.291 "blobfs_create", 00:04:48.291 "blobfs_detect", 00:04:48.291 "blobfs_set_cache_size", 00:04:48.291 "bdev_xnvme_delete", 00:04:48.291 "bdev_xnvme_create", 00:04:48.291 "bdev_aio_delete", 00:04:48.291 "bdev_aio_rescan", 00:04:48.291 "bdev_aio_create", 00:04:48.291 "bdev_ftl_set_property", 00:04:48.291 "bdev_ftl_get_properties", 00:04:48.291 "bdev_ftl_get_stats", 00:04:48.291 "bdev_ftl_unmap", 00:04:48.291 "bdev_ftl_unload", 00:04:48.291 "bdev_ftl_delete", 00:04:48.291 "bdev_ftl_load", 00:04:48.291 "bdev_ftl_create", 00:04:48.291 "bdev_virtio_attach_controller", 00:04:48.291 "bdev_virtio_scsi_get_devices", 00:04:48.291 "bdev_virtio_detach_controller", 00:04:48.291 "bdev_virtio_blk_set_hotplug", 00:04:48.291 "bdev_iscsi_delete", 00:04:48.291 "bdev_iscsi_create", 00:04:48.291 "bdev_iscsi_set_options", 00:04:48.291 "accel_error_inject_error", 00:04:48.291 "ioat_scan_accel_module", 00:04:48.291 "dsa_scan_accel_module", 00:04:48.291 "iaa_scan_accel_module", 00:04:48.291 "iscsi_set_options", 00:04:48.291 "iscsi_get_auth_groups", 00:04:48.291 "iscsi_auth_group_remove_secret", 00:04:48.291 "iscsi_auth_group_add_secret", 00:04:48.291 "iscsi_delete_auth_group", 00:04:48.291 "iscsi_create_auth_group", 00:04:48.291 "iscsi_set_discovery_auth", 00:04:48.291 "iscsi_get_options", 00:04:48.291 "iscsi_target_node_request_logout", 00:04:48.291 "iscsi_target_node_set_redirect", 00:04:48.291 "iscsi_target_node_set_auth", 00:04:48.291 "iscsi_target_node_add_lun", 00:04:48.291 "iscsi_get_connections", 00:04:48.291 "iscsi_portal_group_set_auth", 00:04:48.291 "iscsi_start_portal_group", 00:04:48.291 "iscsi_delete_portal_group", 00:04:48.291 "iscsi_create_portal_group", 00:04:48.291 "iscsi_get_portal_groups", 00:04:48.291 "iscsi_delete_target_node", 00:04:48.291 "iscsi_target_node_remove_pg_ig_maps", 00:04:48.291 "iscsi_target_node_add_pg_ig_maps", 00:04:48.291 "iscsi_create_target_node", 00:04:48.291 "iscsi_get_target_nodes", 00:04:48.291 "iscsi_delete_initiator_group", 00:04:48.291 "iscsi_initiator_group_remove_initiators", 00:04:48.291 "iscsi_initiator_group_add_initiators", 00:04:48.291 "iscsi_create_initiator_group", 00:04:48.291 "iscsi_get_initiator_groups", 00:04:48.291 "nvmf_set_crdt", 00:04:48.291 "nvmf_set_config", 00:04:48.291 "nvmf_set_max_subsystems", 00:04:48.291 "nvmf_subsystem_get_listeners", 00:04:48.291 "nvmf_subsystem_get_qpairs", 00:04:48.291 "nvmf_subsystem_get_controllers", 00:04:48.291 "nvmf_get_stats", 00:04:48.291 "nvmf_get_transports", 00:04:48.291 "nvmf_create_transport", 00:04:48.291 "nvmf_get_targets", 00:04:48.291 "nvmf_delete_target", 00:04:48.291 "nvmf_create_target", 00:04:48.291 "nvmf_subsystem_allow_any_host", 00:04:48.291 "nvmf_subsystem_remove_host", 00:04:48.291 "nvmf_subsystem_add_host", 00:04:48.291 "nvmf_subsystem_remove_ns", 00:04:48.291 "nvmf_subsystem_add_ns", 00:04:48.291 "nvmf_subsystem_listener_set_ana_state", 00:04:48.291 "nvmf_discovery_get_referrals", 00:04:48.291 "nvmf_discovery_remove_referral", 00:04:48.291 "nvmf_discovery_add_referral", 00:04:48.291 "nvmf_subsystem_remove_listener", 00:04:48.291 
"nvmf_subsystem_add_listener", 00:04:48.291 "nvmf_delete_subsystem", 00:04:48.291 "nvmf_create_subsystem", 00:04:48.291 "nvmf_get_subsystems", 00:04:48.291 "env_dpdk_get_mem_stats", 00:04:48.291 "nbd_get_disks", 00:04:48.291 "nbd_stop_disk", 00:04:48.291 "nbd_start_disk", 00:04:48.291 "ublk_recover_disk", 00:04:48.291 "ublk_get_disks", 00:04:48.291 "ublk_stop_disk", 00:04:48.291 "ublk_start_disk", 00:04:48.291 "ublk_destroy_target", 00:04:48.291 "ublk_create_target", 00:04:48.291 "virtio_blk_create_transport", 00:04:48.291 "virtio_blk_get_transports", 00:04:48.291 "vhost_controller_set_coalescing", 00:04:48.291 "vhost_get_controllers", 00:04:48.291 "vhost_delete_controller", 00:04:48.291 "vhost_create_blk_controller", 00:04:48.291 "vhost_scsi_controller_remove_target", 00:04:48.291 "vhost_scsi_controller_add_target", 00:04:48.291 "vhost_start_scsi_controller", 00:04:48.291 "vhost_create_scsi_controller", 00:04:48.291 "thread_set_cpumask", 00:04:48.291 "framework_get_scheduler", 00:04:48.291 "framework_set_scheduler", 00:04:48.291 "framework_get_reactors", 00:04:48.291 "thread_get_io_channels", 00:04:48.291 "thread_get_pollers", 00:04:48.291 "thread_get_stats", 00:04:48.291 "framework_monitor_context_switch", 00:04:48.291 "spdk_kill_instance", 00:04:48.291 "log_enable_timestamps", 00:04:48.291 "log_get_flags", 00:04:48.291 "log_clear_flag", 00:04:48.291 "log_set_flag", 00:04:48.291 "log_get_level", 00:04:48.291 "log_set_level", 00:04:48.291 "log_get_print_level", 00:04:48.291 "log_set_print_level", 00:04:48.291 "framework_enable_cpumask_locks", 00:04:48.291 "framework_disable_cpumask_locks", 00:04:48.291 "framework_wait_init", 00:04:48.292 "framework_start_init", 00:04:48.292 "scsi_get_devices", 00:04:48.292 "bdev_get_histogram", 00:04:48.292 "bdev_enable_histogram", 00:04:48.292 "bdev_set_qos_limit", 00:04:48.292 "bdev_set_qd_sampling_period", 00:04:48.292 "bdev_get_bdevs", 00:04:48.292 "bdev_reset_iostat", 00:04:48.292 "bdev_get_iostat", 00:04:48.292 "bdev_examine", 00:04:48.292 "bdev_wait_for_examine", 00:04:48.292 "bdev_set_options", 00:04:48.292 "notify_get_notifications", 00:04:48.292 "notify_get_types", 00:04:48.292 "accel_get_stats", 00:04:48.292 "accel_set_options", 00:04:48.292 "accel_set_driver", 00:04:48.292 "accel_crypto_key_destroy", 00:04:48.292 "accel_crypto_keys_get", 00:04:48.292 "accel_crypto_key_create", 00:04:48.292 "accel_assign_opc", 00:04:48.292 "accel_get_module_info", 00:04:48.292 "accel_get_opc_assignments", 00:04:48.292 "vmd_rescan", 00:04:48.292 "vmd_remove_device", 00:04:48.292 "vmd_enable", 00:04:48.292 "sock_set_default_impl", 00:04:48.292 "sock_impl_set_options", 00:04:48.292 "sock_impl_get_options", 00:04:48.292 "iobuf_get_stats", 00:04:48.292 "iobuf_set_options", 00:04:48.292 "framework_get_pci_devices", 00:04:48.292 "framework_get_config", 00:04:48.292 "framework_get_subsystems", 00:04:48.292 "trace_get_info", 00:04:48.292 "trace_get_tpoint_group_mask", 00:04:48.292 "trace_disable_tpoint_group", 00:04:48.292 "trace_enable_tpoint_group", 00:04:48.292 "trace_clear_tpoint_mask", 00:04:48.292 "trace_set_tpoint_mask", 00:04:48.292 "spdk_get_version", 00:04:48.292 "rpc_get_methods" 00:04:48.292 ] 00:04:48.292 23:38:18 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:48.292 23:38:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:48.292 23:38:18 -- common/autotest_common.sh@10 -- # set +x 00:04:48.292 23:38:18 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:48.292 23:38:18 -- spdkcli/tcp.sh@38 -- # killprocess 56694 00:04:48.292 
23:38:18 -- common/autotest_common.sh@936 -- # '[' -z 56694 ']' 00:04:48.292 23:38:18 -- common/autotest_common.sh@940 -- # kill -0 56694 00:04:48.292 23:38:18 -- common/autotest_common.sh@941 -- # uname 00:04:48.292 23:38:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:48.292 23:38:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56694 00:04:48.292 killing process with pid 56694 00:04:48.292 23:38:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:48.292 23:38:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:48.292 23:38:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56694' 00:04:48.292 23:38:19 -- common/autotest_common.sh@955 -- # kill 56694 00:04:48.292 23:38:19 -- common/autotest_common.sh@960 -- # wait 56694 00:04:49.667 ************************************ 00:04:49.667 END TEST spdkcli_tcp 00:04:49.667 ************************************ 00:04:49.667 00:04:49.667 real 0m3.285s 00:04:49.667 user 0m6.042s 00:04:49.667 sys 0m0.427s 00:04:49.667 23:38:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.667 23:38:20 -- common/autotest_common.sh@10 -- # set +x 00:04:49.667 23:38:20 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:49.667 23:38:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.667 23:38:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.667 23:38:20 -- common/autotest_common.sh@10 -- # set +x 00:04:49.667 ************************************ 00:04:49.667 START TEST dpdk_mem_utility 00:04:49.667 ************************************ 00:04:49.667 23:38:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:49.926 * Looking for test storage... 00:04:49.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:49.926 23:38:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:49.926 23:38:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:49.926 23:38:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:49.926 23:38:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:49.926 23:38:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:49.926 23:38:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:49.926 23:38:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:49.926 23:38:20 -- scripts/common.sh@335 -- # IFS=.-: 00:04:49.926 23:38:20 -- scripts/common.sh@335 -- # read -ra ver1 00:04:49.926 23:38:20 -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.926 23:38:20 -- scripts/common.sh@336 -- # read -ra ver2 00:04:49.926 23:38:20 -- scripts/common.sh@337 -- # local 'op=<' 00:04:49.926 23:38:20 -- scripts/common.sh@339 -- # ver1_l=2 00:04:49.926 23:38:20 -- scripts/common.sh@340 -- # ver2_l=1 00:04:49.926 23:38:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:49.926 23:38:20 -- scripts/common.sh@343 -- # case "$op" in 00:04:49.926 23:38:20 -- scripts/common.sh@344 -- # : 1 00:04:49.926 23:38:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:49.926 23:38:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.926 23:38:20 -- scripts/common.sh@364 -- # decimal 1 00:04:49.926 23:38:20 -- scripts/common.sh@352 -- # local d=1 00:04:49.926 23:38:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.926 23:38:20 -- scripts/common.sh@354 -- # echo 1 00:04:49.926 23:38:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:49.926 23:38:20 -- scripts/common.sh@365 -- # decimal 2 00:04:49.926 23:38:20 -- scripts/common.sh@352 -- # local d=2 00:04:49.926 23:38:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.926 23:38:20 -- scripts/common.sh@354 -- # echo 2 00:04:49.926 23:38:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:49.926 23:38:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:49.926 23:38:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:49.926 23:38:20 -- scripts/common.sh@367 -- # return 0 00:04:49.926 23:38:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.926 23:38:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:49.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.926 --rc genhtml_branch_coverage=1 00:04:49.926 --rc genhtml_function_coverage=1 00:04:49.926 --rc genhtml_legend=1 00:04:49.926 --rc geninfo_all_blocks=1 00:04:49.926 --rc geninfo_unexecuted_blocks=1 00:04:49.926 00:04:49.926 ' 00:04:49.926 23:38:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:49.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.926 --rc genhtml_branch_coverage=1 00:04:49.926 --rc genhtml_function_coverage=1 00:04:49.926 --rc genhtml_legend=1 00:04:49.926 --rc geninfo_all_blocks=1 00:04:49.926 --rc geninfo_unexecuted_blocks=1 00:04:49.926 00:04:49.926 ' 00:04:49.926 23:38:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:49.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.926 --rc genhtml_branch_coverage=1 00:04:49.926 --rc genhtml_function_coverage=1 00:04:49.926 --rc genhtml_legend=1 00:04:49.926 --rc geninfo_all_blocks=1 00:04:49.926 --rc geninfo_unexecuted_blocks=1 00:04:49.926 00:04:49.926 ' 00:04:49.926 23:38:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:49.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.926 --rc genhtml_branch_coverage=1 00:04:49.926 --rc genhtml_function_coverage=1 00:04:49.926 --rc genhtml_legend=1 00:04:49.926 --rc geninfo_all_blocks=1 00:04:49.926 --rc geninfo_unexecuted_blocks=1 00:04:49.926 00:04:49.926 ' 00:04:49.926 23:38:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:49.926 23:38:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56806 00:04:49.926 23:38:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56806 00:04:49.926 23:38:20 -- common/autotest_common.sh@829 -- # '[' -z 56806 ']' 00:04:49.926 23:38:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.926 23:38:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.926 23:38:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.926 23:38:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
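The dpdk_mem_utility test about to start below asks the target for a DPDK memory dump via the env_dpdk_get_mem_stats RPC and post-processes it with scripts/dpdk_mem_info.py; the heap and mempool summaries that follow are that script's output. A minimal manual equivalent using the same paths (assuming a target is already running and that the script reads /tmp/spdk_mem_dump.txt by default, as this run suggests):

  SPDK=/home/vagrant/spdk_repo/spdk
  # have the target write its DPDK memory stats, then summarize them
  $SPDK/scripts/rpc.py env_dpdk_get_mem_stats   # reports /tmp/spdk_mem_dump.txt
  $SPDK/scripts/dpdk_mem_info.py                # overall heap / mempool summary
  $SPDK/scripts/dpdk_mem_info.py -m 0           # detailed view for heap id 0, as in the dump below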
00:04:49.926 23:38:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.926 23:38:20 -- common/autotest_common.sh@10 -- # set +x 00:04:49.926 [2024-12-13 23:38:20.598640] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:49.926 [2024-12-13 23:38:20.599150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56806 ] 00:04:50.185 [2024-12-13 23:38:20.745272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.185 [2024-12-13 23:38:20.893432] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:50.185 [2024-12-13 23:38:20.893741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.751 23:38:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.751 23:38:21 -- common/autotest_common.sh@862 -- # return 0 00:04:50.751 23:38:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:50.751 23:38:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:50.751 23:38:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.751 23:38:21 -- common/autotest_common.sh@10 -- # set +x 00:04:50.752 { 00:04:50.752 "filename": "/tmp/spdk_mem_dump.txt" 00:04:50.752 } 00:04:50.752 23:38:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.752 23:38:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:50.752 DPDK memory size 820.000000 MiB in 1 heap(s) 00:04:50.752 1 heaps totaling size 820.000000 MiB 00:04:50.752 size: 820.000000 MiB heap id: 0 00:04:50.752 end heaps---------- 00:04:50.752 8 mempools totaling size 598.116089 MiB 00:04:50.752 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:50.752 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:50.752 size: 84.521057 MiB name: bdev_io_56806 00:04:50.752 size: 51.011292 MiB name: evtpool_56806 00:04:50.752 size: 50.003479 MiB name: msgpool_56806 00:04:50.752 size: 21.763794 MiB name: PDU_Pool 00:04:50.752 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:50.752 size: 0.026123 MiB name: Session_Pool 00:04:50.752 end mempools------- 00:04:50.752 6 memzones totaling size 4.142822 MiB 00:04:50.752 size: 1.000366 MiB name: RG_ring_0_56806 00:04:50.752 size: 1.000366 MiB name: RG_ring_1_56806 00:04:50.752 size: 1.000366 MiB name: RG_ring_4_56806 00:04:50.752 size: 1.000366 MiB name: RG_ring_5_56806 00:04:50.752 size: 0.125366 MiB name: RG_ring_2_56806 00:04:50.752 size: 0.015991 MiB name: RG_ring_3_56806 00:04:50.752 end memzones------- 00:04:50.752 23:38:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:51.011 heap id: 0 total size: 820.000000 MiB number of busy elements: 301 number of free elements: 18 00:04:51.011 list of free elements. 
size: 18.451294 MiB 00:04:51.011 element at address: 0x200000400000 with size: 1.999451 MiB 00:04:51.011 element at address: 0x200000800000 with size: 1.996887 MiB 00:04:51.011 element at address: 0x200007000000 with size: 1.995972 MiB 00:04:51.011 element at address: 0x20000b200000 with size: 1.995972 MiB 00:04:51.011 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:51.011 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:51.011 element at address: 0x200019600000 with size: 0.999084 MiB 00:04:51.011 element at address: 0x200003e00000 with size: 0.996094 MiB 00:04:51.011 element at address: 0x200032200000 with size: 0.994324 MiB 00:04:51.011 element at address: 0x200018e00000 with size: 0.959656 MiB 00:04:51.011 element at address: 0x200019900040 with size: 0.936401 MiB 00:04:51.011 element at address: 0x200000200000 with size: 0.829224 MiB 00:04:51.011 element at address: 0x20001b000000 with size: 0.564880 MiB 00:04:51.011 element at address: 0x200019200000 with size: 0.487976 MiB 00:04:51.011 element at address: 0x200019a00000 with size: 0.485413 MiB 00:04:51.011 element at address: 0x200013800000 with size: 0.467651 MiB 00:04:51.011 element at address: 0x200028400000 with size: 0.390442 MiB 00:04:51.011 element at address: 0x200003a00000 with size: 0.351990 MiB 00:04:51.011 list of standard malloc elements. size: 199.284302 MiB 00:04:51.011 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:04:51.011 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:04:51.011 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:51.011 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:51.011 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:51.011 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:51.011 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:04:51.011 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:51.011 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:04:51.011 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:04:51.011 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:04:51.011 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:04:51.011 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:04:51.011 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:04:51.011 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:04:51.012 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:04:51.012 element at 
address: 0x200003a5aec0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200003aff980 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200003affa80 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200003eff000 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200013877b80 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200013877c80 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200013877d80 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200013877e80 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200013877f80 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200013878080 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200013878180 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200013878280 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200013878380 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200013878480 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200013878580 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001927d0c0 
with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x200019abc680 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:04:51.012 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b092bc0 with size: 0.000244 MiB 
00:04:51.013 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:04:51.013 element at address: 0x200028463f40 with size: 0.000244 MiB 00:04:51.013 element at address: 0x200028464040 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846af80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846b080 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846b180 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846b280 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846b380 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846b480 with size: 0.000244 MiB 00:04:51.013 element at 
address: 0x20002846b580 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846b680 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846b780 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846b880 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846b980 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846be80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846c080 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846c180 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846c280 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846c380 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846c480 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846c580 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846c680 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846c780 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846c880 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846c980 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846d080 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846d180 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846d280 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846d380 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846d480 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846d580 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846d680 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846d780 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846d880 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846d980 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846da80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846db80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846de80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846df80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846e080 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846e180 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846e280 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846e380 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846e480 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846e580 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846e680 
with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846e780 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846e880 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846e980 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846f080 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846f180 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846f280 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846f380 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846f480 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846f580 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846f680 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846f780 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846f880 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846f980 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:04:51.013 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:04:51.013 list of memzone associated elements. 
size: 602.264404 MiB 00:04:51.013 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:04:51.013 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:51.013 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:04:51.013 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:51.013 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:04:51.013 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_56806_0 00:04:51.013 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:04:51.013 associated memzone info: size: 48.002930 MiB name: MP_evtpool_56806_0 00:04:51.013 element at address: 0x200003fff340 with size: 48.003113 MiB 00:04:51.013 associated memzone info: size: 48.002930 MiB name: MP_msgpool_56806_0 00:04:51.013 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:04:51.013 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:51.013 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:04:51.013 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:51.013 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:04:51.013 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_56806 00:04:51.013 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:04:51.013 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_56806 00:04:51.013 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:51.013 associated memzone info: size: 1.007996 MiB name: MP_evtpool_56806 00:04:51.013 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:51.013 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:51.013 element at address: 0x200019abc780 with size: 1.008179 MiB 00:04:51.014 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:51.014 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:51.014 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:51.014 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:04:51.014 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:51.014 element at address: 0x200003eff100 with size: 1.000549 MiB 00:04:51.014 associated memzone info: size: 1.000366 MiB name: RG_ring_0_56806 00:04:51.014 element at address: 0x200003affb80 with size: 1.000549 MiB 00:04:51.014 associated memzone info: size: 1.000366 MiB name: RG_ring_1_56806 00:04:51.014 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:04:51.014 associated memzone info: size: 1.000366 MiB name: RG_ring_4_56806 00:04:51.014 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:04:51.014 associated memzone info: size: 1.000366 MiB name: RG_ring_5_56806 00:04:51.014 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:04:51.014 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_56806 00:04:51.014 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:04:51.014 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:51.014 element at address: 0x200013878680 with size: 0.500549 MiB 00:04:51.014 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:51.014 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:04:51.014 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:51.014 element at address: 0x200003adf740 with size: 0.125549 MiB 00:04:51.014 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_56806 00:04:51.014 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:04:51.014 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:51.014 element at address: 0x200028464140 with size: 0.023804 MiB 00:04:51.014 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:51.014 element at address: 0x200003adb500 with size: 0.016174 MiB 00:04:51.014 associated memzone info: size: 0.015991 MiB name: RG_ring_3_56806 00:04:51.014 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:04:51.014 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:51.014 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:04:51.014 associated memzone info: size: 0.000183 MiB name: MP_msgpool_56806 00:04:51.014 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:04:51.014 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_56806 00:04:51.014 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:04:51.014 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:51.014 23:38:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:51.014 23:38:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56806 00:04:51.014 23:38:21 -- common/autotest_common.sh@936 -- # '[' -z 56806 ']' 00:04:51.014 23:38:21 -- common/autotest_common.sh@940 -- # kill -0 56806 00:04:51.014 23:38:21 -- common/autotest_common.sh@941 -- # uname 00:04:51.014 23:38:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:51.014 23:38:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56806 00:04:51.014 killing process with pid 56806 00:04:51.014 23:38:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:51.014 23:38:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:51.014 23:38:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56806' 00:04:51.014 23:38:21 -- common/autotest_common.sh@955 -- # kill 56806 00:04:51.014 23:38:21 -- common/autotest_common.sh@960 -- # wait 56806 00:04:52.407 00:04:52.407 real 0m2.354s 00:04:52.407 user 0m2.357s 00:04:52.407 sys 0m0.382s 00:04:52.407 23:38:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.407 ************************************ 00:04:52.407 END TEST dpdk_mem_utility 00:04:52.407 ************************************ 00:04:52.407 23:38:22 -- common/autotest_common.sh@10 -- # set +x 00:04:52.407 23:38:22 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:52.407 23:38:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.407 23:38:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.407 23:38:22 -- common/autotest_common.sh@10 -- # set +x 00:04:52.407 ************************************ 00:04:52.407 START TEST event 00:04:52.407 ************************************ 00:04:52.407 23:38:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:52.407 * Looking for test storage... 
00:04:52.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:52.407 23:38:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:52.407 23:38:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:52.407 23:38:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:52.407 23:38:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:52.407 23:38:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:52.407 23:38:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:52.407 23:38:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:52.407 23:38:22 -- scripts/common.sh@335 -- # IFS=.-: 00:04:52.407 23:38:22 -- scripts/common.sh@335 -- # read -ra ver1 00:04:52.407 23:38:22 -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.407 23:38:22 -- scripts/common.sh@336 -- # read -ra ver2 00:04:52.407 23:38:22 -- scripts/common.sh@337 -- # local 'op=<' 00:04:52.407 23:38:22 -- scripts/common.sh@339 -- # ver1_l=2 00:04:52.407 23:38:22 -- scripts/common.sh@340 -- # ver2_l=1 00:04:52.407 23:38:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:52.407 23:38:22 -- scripts/common.sh@343 -- # case "$op" in 00:04:52.407 23:38:22 -- scripts/common.sh@344 -- # : 1 00:04:52.407 23:38:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:52.407 23:38:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.407 23:38:22 -- scripts/common.sh@364 -- # decimal 1 00:04:52.407 23:38:22 -- scripts/common.sh@352 -- # local d=1 00:04:52.407 23:38:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.407 23:38:22 -- scripts/common.sh@354 -- # echo 1 00:04:52.407 23:38:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:52.407 23:38:22 -- scripts/common.sh@365 -- # decimal 2 00:04:52.407 23:38:22 -- scripts/common.sh@352 -- # local d=2 00:04:52.407 23:38:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.407 23:38:22 -- scripts/common.sh@354 -- # echo 2 00:04:52.407 23:38:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:52.407 23:38:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:52.407 23:38:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:52.407 23:38:22 -- scripts/common.sh@367 -- # return 0 00:04:52.407 23:38:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.407 23:38:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:52.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.407 --rc genhtml_branch_coverage=1 00:04:52.407 --rc genhtml_function_coverage=1 00:04:52.407 --rc genhtml_legend=1 00:04:52.407 --rc geninfo_all_blocks=1 00:04:52.407 --rc geninfo_unexecuted_blocks=1 00:04:52.407 00:04:52.407 ' 00:04:52.407 23:38:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:52.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.407 --rc genhtml_branch_coverage=1 00:04:52.407 --rc genhtml_function_coverage=1 00:04:52.407 --rc genhtml_legend=1 00:04:52.407 --rc geninfo_all_blocks=1 00:04:52.407 --rc geninfo_unexecuted_blocks=1 00:04:52.407 00:04:52.407 ' 00:04:52.407 23:38:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:52.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.407 --rc genhtml_branch_coverage=1 00:04:52.407 --rc genhtml_function_coverage=1 00:04:52.407 --rc genhtml_legend=1 00:04:52.407 --rc geninfo_all_blocks=1 00:04:52.407 --rc geninfo_unexecuted_blocks=1 00:04:52.407 00:04:52.407 ' 00:04:52.407 23:38:22 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:52.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.407 --rc genhtml_branch_coverage=1 00:04:52.407 --rc genhtml_function_coverage=1 00:04:52.407 --rc genhtml_legend=1 00:04:52.407 --rc geninfo_all_blocks=1 00:04:52.407 --rc geninfo_unexecuted_blocks=1 00:04:52.407 00:04:52.407 ' 00:04:52.407 23:38:22 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:52.407 23:38:22 -- bdev/nbd_common.sh@6 -- # set -e 00:04:52.407 23:38:22 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:52.407 23:38:22 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:04:52.407 23:38:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.407 23:38:22 -- common/autotest_common.sh@10 -- # set +x 00:04:52.407 ************************************ 00:04:52.407 START TEST event_perf 00:04:52.407 ************************************ 00:04:52.407 23:38:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:52.407 Running I/O for 1 seconds...[2024-12-13 23:38:22.974041] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:52.407 [2024-12-13 23:38:22.974224] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56891 ] 00:04:52.407 [2024-12-13 23:38:23.125126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:52.682 [2024-12-13 23:38:23.305008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.682 [2024-12-13 23:38:23.305085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:52.682 [2024-12-13 23:38:23.305726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.682 Running I/O for 1 seconds...[2024-12-13 23:38:23.305741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:54.055 00:04:54.055 lcore 0: 152195 00:04:54.055 lcore 1: 152192 00:04:54.055 lcore 2: 152191 00:04:54.055 lcore 3: 152192 00:04:54.055 done. 00:04:54.055 00:04:54.055 real 0m1.636s 00:04:54.055 user 0m4.430s 00:04:54.055 sys 0m0.086s 00:04:54.055 23:38:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:54.055 23:38:24 -- common/autotest_common.sh@10 -- # set +x 00:04:54.055 ************************************ 00:04:54.055 END TEST event_perf 00:04:54.055 ************************************ 00:04:54.055 23:38:24 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:54.055 23:38:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:54.055 23:38:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.055 23:38:24 -- common/autotest_common.sh@10 -- # set +x 00:04:54.055 ************************************ 00:04:54.055 START TEST event_reactor 00:04:54.055 ************************************ 00:04:54.055 23:38:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:54.055 [2024-12-13 23:38:24.647782] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
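[Editor's note: the four "lcore N: 152xxx" lines above are per-core event counts from a one-second run on core mask 0xF. The binary can be run by hand with the exact flags seen in the run_test trace; per that run, -m is the reactor core mask and -t the duration in seconds, and root is typically needed for the hugepage setup:]

    sudo /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1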
00:04:54.055 [2024-12-13 23:38:24.648036] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56936 ] 00:04:54.313 [2024-12-13 23:38:24.796427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.313 [2024-12-13 23:38:24.966453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.683 test_start 00:04:55.683 oneshot 00:04:55.683 tick 100 00:04:55.683 tick 100 00:04:55.683 tick 250 00:04:55.683 tick 100 00:04:55.683 tick 100 00:04:55.683 tick 250 00:04:55.683 tick 100 00:04:55.683 tick 500 00:04:55.683 tick 100 00:04:55.683 tick 100 00:04:55.683 tick 250 00:04:55.683 tick 100 00:04:55.683 tick 100 00:04:55.683 test_end 00:04:55.683 00:04:55.683 real 0m1.553s 00:04:55.683 user 0m1.374s 00:04:55.683 sys 0m0.069s 00:04:55.683 23:38:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:55.683 ************************************ 00:04:55.683 END TEST event_reactor 00:04:55.683 ************************************ 00:04:55.683 23:38:26 -- common/autotest_common.sh@10 -- # set +x 00:04:55.683 23:38:26 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:55.683 23:38:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:55.683 23:38:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.683 23:38:26 -- common/autotest_common.sh@10 -- # set +x 00:04:55.683 ************************************ 00:04:55.683 START TEST event_reactor_perf 00:04:55.683 ************************************ 00:04:55.683 23:38:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:55.683 [2024-12-13 23:38:26.250247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
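[Editor's note: in the reactor test output above, the "tick 100 / tick 250 / tick 500" lines appear to come from timed events registered at three different periods, so the 100 one fires most often over the one-second window, while reactor_perf (starting below) counts raw event throughput instead. Both binaries take the same -t flag shown in their run_test lines:]

    sudo /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1            # prints test_start / tick / test_end
    sudo /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1  # prints "Performance: N events per second"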
00:04:55.683 [2024-12-13 23:38:26.250475] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56967 ] 00:04:55.683 [2024-12-13 23:38:26.394838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.940 [2024-12-13 23:38:26.538301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.312 test_start 00:04:57.312 test_end 00:04:57.312 Performance: 407276 events per second 00:04:57.312 ************************************ 00:04:57.312 END TEST event_reactor_perf 00:04:57.312 ************************************ 00:04:57.312 00:04:57.312 real 0m1.517s 00:04:57.312 user 0m1.342s 00:04:57.312 sys 0m0.067s 00:04:57.312 23:38:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.312 23:38:27 -- common/autotest_common.sh@10 -- # set +x 00:04:57.312 23:38:27 -- event/event.sh@49 -- # uname -s 00:04:57.312 23:38:27 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:57.312 23:38:27 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:57.312 23:38:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.313 23:38:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.313 23:38:27 -- common/autotest_common.sh@10 -- # set +x 00:04:57.313 ************************************ 00:04:57.313 START TEST event_scheduler 00:04:57.313 ************************************ 00:04:57.313 23:38:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:57.313 * Looking for test storage... 00:04:57.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:57.313 23:38:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:57.313 23:38:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:57.313 23:38:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:57.313 23:38:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:57.313 23:38:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:57.313 23:38:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:57.313 23:38:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:57.313 23:38:27 -- scripts/common.sh@335 -- # IFS=.-: 00:04:57.313 23:38:27 -- scripts/common.sh@335 -- # read -ra ver1 00:04:57.313 23:38:27 -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.313 23:38:27 -- scripts/common.sh@336 -- # read -ra ver2 00:04:57.313 23:38:27 -- scripts/common.sh@337 -- # local 'op=<' 00:04:57.313 23:38:27 -- scripts/common.sh@339 -- # ver1_l=2 00:04:57.313 23:38:27 -- scripts/common.sh@340 -- # ver2_l=1 00:04:57.313 23:38:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:57.313 23:38:27 -- scripts/common.sh@343 -- # case "$op" in 00:04:57.313 23:38:27 -- scripts/common.sh@344 -- # : 1 00:04:57.313 23:38:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:57.313 23:38:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.313 23:38:27 -- scripts/common.sh@364 -- # decimal 1 00:04:57.313 23:38:27 -- scripts/common.sh@352 -- # local d=1 00:04:57.313 23:38:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.313 23:38:27 -- scripts/common.sh@354 -- # echo 1 00:04:57.313 23:38:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:57.313 23:38:27 -- scripts/common.sh@365 -- # decimal 2 00:04:57.313 23:38:27 -- scripts/common.sh@352 -- # local d=2 00:04:57.313 23:38:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.313 23:38:27 -- scripts/common.sh@354 -- # echo 2 00:04:57.313 23:38:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:57.313 23:38:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:57.313 23:38:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:57.313 23:38:27 -- scripts/common.sh@367 -- # return 0 00:04:57.313 23:38:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.313 23:38:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:57.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.313 --rc genhtml_branch_coverage=1 00:04:57.313 --rc genhtml_function_coverage=1 00:04:57.313 --rc genhtml_legend=1 00:04:57.313 --rc geninfo_all_blocks=1 00:04:57.313 --rc geninfo_unexecuted_blocks=1 00:04:57.313 00:04:57.313 ' 00:04:57.313 23:38:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:57.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.313 --rc genhtml_branch_coverage=1 00:04:57.313 --rc genhtml_function_coverage=1 00:04:57.313 --rc genhtml_legend=1 00:04:57.313 --rc geninfo_all_blocks=1 00:04:57.313 --rc geninfo_unexecuted_blocks=1 00:04:57.313 00:04:57.313 ' 00:04:57.313 23:38:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:57.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.313 --rc genhtml_branch_coverage=1 00:04:57.313 --rc genhtml_function_coverage=1 00:04:57.313 --rc genhtml_legend=1 00:04:57.313 --rc geninfo_all_blocks=1 00:04:57.313 --rc geninfo_unexecuted_blocks=1 00:04:57.313 00:04:57.313 ' 00:04:57.313 23:38:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:57.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.313 --rc genhtml_branch_coverage=1 00:04:57.313 --rc genhtml_function_coverage=1 00:04:57.313 --rc genhtml_legend=1 00:04:57.313 --rc geninfo_all_blocks=1 00:04:57.313 --rc geninfo_unexecuted_blocks=1 00:04:57.313 00:04:57.313 ' 00:04:57.313 23:38:27 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:57.313 23:38:27 -- scheduler/scheduler.sh@35 -- # scheduler_pid=57042 00:04:57.313 23:38:27 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:57.313 23:38:27 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.313 23:38:27 -- scheduler/scheduler.sh@37 -- # waitforlisten 57042 00:04:57.313 23:38:27 -- common/autotest_common.sh@829 -- # '[' -z 57042 ']' 00:04:57.313 23:38:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.313 23:38:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.313 23:38:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
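[Editor's note: the waitforlisten helper traced above (autotest_common.sh) parks the script until the freshly forked scheduler app answers on its RPC socket. A minimal sketch of the loop: only the function name, the retry budget of 100, and the default socket path come from the trace; the rpc_get_methods probe and the sleep interval are assumptions.]

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        local max_retries=100
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1        # target died before listening
            [[ -S $rpc_addr ]] &&
                ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null &&
                return 0                                  # socket is up and answering RPCs
            sleep 0.1
        done
        return 1                                          # retry budget exhausted
    }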
00:04:57.313 23:38:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.313 23:38:27 -- common/autotest_common.sh@10 -- # set +x 00:04:57.313 [2024-12-13 23:38:27.988503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:57.313 [2024-12-13 23:38:27.988613] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57042 ] 00:04:57.573 [2024-12-13 23:38:28.137448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:57.832 [2024-12-13 23:38:28.346938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.832 [2024-12-13 23:38:28.347778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.832 [2024-12-13 23:38:28.347917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:57.832 [2024-12-13 23:38:28.347917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.092 23:38:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.092 23:38:28 -- common/autotest_common.sh@862 -- # return 0 00:04:58.092 23:38:28 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:58.092 23:38:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.092 23:38:28 -- common/autotest_common.sh@10 -- # set +x 00:04:58.092 POWER: Env isn't set yet! 00:04:58.092 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:58.092 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:58.092 POWER: Cannot set governor of lcore 0 to userspace 00:04:58.093 POWER: Attempting to initialise PSTAT power management... 00:04:58.093 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:58.093 POWER: Cannot set governor of lcore 0 to performance 00:04:58.093 POWER: Attempting to initialise AMD PSTATE power management... 00:04:58.093 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:58.093 POWER: Cannot set governor of lcore 0 to userspace 00:04:58.093 POWER: Attempting to initialise CPPC power management... 00:04:58.093 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:58.093 POWER: Cannot set governor of lcore 0 to userspace 00:04:58.093 POWER: Attempting to initialise VM power management... 
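[Editor's note: each POWER failure above is the dynamic scheduler probing one cpufreq backend in turn (acpi-cpufreq, intel_pstate, amd-pstate, then CPPC) and finding no writable scaling_governor in sysfs; the VM power-management attempt reported next is the last resort. Inside this VM that is expected, and the framework falls back to running without a governor, as the following lines show. The probe target is the standard sysfs path, checkable by hand:]

    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null ||
        echo 'no cpufreq scaling driver exposed (typical inside a VM)'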
00:04:58.093 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:58.093 POWER: Unable to set Power Management Environment for lcore 0 00:04:58.093 [2024-12-13 23:38:28.769230] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:04:58.093 [2024-12-13 23:38:28.769249] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:04:58.093 [2024-12-13 23:38:28.769260] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:04:58.093 [2024-12-13 23:38:28.769277] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:58.093 [2024-12-13 23:38:28.769289] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:58.093 [2024-12-13 23:38:28.769296] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:58.093 23:38:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.093 23:38:28 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:58.093 23:38:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.093 23:38:28 -- common/autotest_common.sh@10 -- # set +x 00:04:58.351 [2024-12-13 23:38:29.010875] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:58.351 23:38:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.351 23:38:29 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:58.351 23:38:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:58.351 23:38:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.351 23:38:29 -- common/autotest_common.sh@10 -- # set +x 00:04:58.351 ************************************ 00:04:58.351 START TEST scheduler_create_thread 00:04:58.351 ************************************ 00:04:58.351 23:38:29 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:04:58.351 23:38:29 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:58.351 23:38:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.351 23:38:29 -- common/autotest_common.sh@10 -- # set +x 00:04:58.351 2 00:04:58.351 23:38:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.351 23:38:29 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:58.351 23:38:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.351 23:38:29 -- common/autotest_common.sh@10 -- # set +x 00:04:58.351 3 00:04:58.351 23:38:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.351 23:38:29 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:58.351 23:38:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.351 23:38:29 -- common/autotest_common.sh@10 -- # set +x 00:04:58.351 4 00:04:58.351 23:38:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.351 23:38:29 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:58.351 23:38:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.351 23:38:29 -- common/autotest_common.sh@10 -- # set +x 00:04:58.351 5 00:04:58.351 23:38:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.351 23:38:29 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:58.351 23:38:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.351 23:38:29 -- common/autotest_common.sh@10 -- # set +x 00:04:58.351 6 00:04:58.351 23:38:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.351 23:38:29 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:58.351 23:38:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.351 23:38:29 -- common/autotest_common.sh@10 -- # set +x 00:04:58.351 7 00:04:58.351 23:38:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.351 23:38:29 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:58.351 23:38:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.351 23:38:29 -- common/autotest_common.sh@10 -- # set +x 00:04:58.610 8 00:04:58.610 23:38:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.610 23:38:29 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:58.610 23:38:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.610 23:38:29 -- common/autotest_common.sh@10 -- # set +x 00:04:58.610 9 00:04:58.610 23:38:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.610 23:38:29 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:58.610 23:38:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.610 23:38:29 -- common/autotest_common.sh@10 -- # set +x 00:04:58.610 10 00:04:58.610 23:38:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.610 23:38:29 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:58.610 23:38:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.610 23:38:29 -- common/autotest_common.sh@10 -- # set +x 00:04:58.610 23:38:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.610 23:38:29 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:58.610 23:38:29 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:58.610 23:38:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.610 23:38:29 -- common/autotest_common.sh@10 -- # set +x 00:04:58.610 23:38:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.610 23:38:29 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:58.610 23:38:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.610 23:38:29 -- common/autotest_common.sh@10 -- # set +x 00:04:59.989 23:38:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.989 23:38:30 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:59.989 23:38:30 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:59.989 23:38:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.989 23:38:30 -- common/autotest_common.sh@10 -- # set +x 00:05:00.923 ************************************ 00:05:00.923 END TEST scheduler_create_thread 00:05:00.923 ************************************ 00:05:00.923 23:38:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.923 00:05:00.923 real 0m2.615s 00:05:00.923 user 0m0.015s 00:05:00.923 sys 0m0.005s 00:05:00.923 23:38:31 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:00.923 23:38:31 -- common/autotest_common.sh@10 -- # set +x 00:05:01.181 23:38:31 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:01.181 23:38:31 -- scheduler/scheduler.sh@46 -- # killprocess 57042 00:05:01.181 23:38:31 -- common/autotest_common.sh@936 -- # '[' -z 57042 ']' 00:05:01.181 23:38:31 -- common/autotest_common.sh@940 -- # kill -0 57042 00:05:01.181 23:38:31 -- common/autotest_common.sh@941 -- # uname 00:05:01.181 23:38:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:01.181 23:38:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57042 00:05:01.181 killing process with pid 57042 00:05:01.181 23:38:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:01.181 23:38:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:01.181 23:38:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57042' 00:05:01.181 23:38:31 -- common/autotest_common.sh@955 -- # kill 57042 00:05:01.181 23:38:31 -- common/autotest_common.sh@960 -- # wait 57042 00:05:01.439 [2024-12-13 23:38:32.121834] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:02.375 00:05:02.375 real 0m5.035s 00:05:02.375 user 0m8.262s 00:05:02.375 sys 0m0.341s 00:05:02.375 23:38:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:02.375 23:38:32 -- common/autotest_common.sh@10 -- # set +x 00:05:02.375 ************************************ 00:05:02.375 END TEST event_scheduler 00:05:02.375 ************************************ 00:05:02.375 23:38:32 -- event/event.sh@51 -- # modprobe -n nbd 00:05:02.375 23:38:32 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:02.375 23:38:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.375 23:38:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.375 23:38:32 -- common/autotest_common.sh@10 -- # set +x 00:05:02.376 ************************************ 00:05:02.376 START TEST app_repeat 00:05:02.376 ************************************ 00:05:02.376 23:38:32 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:02.376 23:38:32 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.376 23:38:32 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.376 23:38:32 -- event/event.sh@13 -- # local nbd_list 00:05:02.376 23:38:32 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.376 23:38:32 -- event/event.sh@14 -- # local bdev_list 00:05:02.376 23:38:32 -- event/event.sh@15 -- # local repeat_times=4 00:05:02.376 23:38:32 -- event/event.sh@17 -- # modprobe nbd 00:05:02.376 Process app_repeat pid: 57148 00:05:02.376 spdk_app_start Round 0 00:05:02.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
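Before app_repeat's first round gets going, it is worth unpacking the event_scheduler run that just finished: it is driven entirely over JSON-RPC. scheduler_thread_create spawns simulated threads (-n gives a name, -m a hex cpumask pin, -a the percentage of time the thread reports itself busy), scheduler_thread_set_active re-weights a thread by id, and scheduler_thread_delete tears one down so the dynamic scheduler has to rebalance. A minimal replay of that flow, assuming rpc_cmd is the usual autotest wrapper around scripts/rpc.py and the scheduler test app is already up:

    # Finish framework init, then drive the scheduler_plugin RPCs seen above.
    rpc_cmd framework_start_init
    # A thread that starts fully idle, later re-weighted to 50% busy.
    tid=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    # A busy thread that is created only to be deleted again.
    tid=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$tid"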
00:05:02.376 23:38:32 -- event/event.sh@19 -- # repeat_pid=57148 00:05:02.376 23:38:32 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.376 23:38:32 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57148' 00:05:02.376 23:38:32 -- event/event.sh@23 -- # for i in {0..2} 00:05:02.376 23:38:32 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:02.376 23:38:32 -- event/event.sh@25 -- # waitforlisten 57148 /var/tmp/spdk-nbd.sock 00:05:02.376 23:38:32 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:02.376 23:38:32 -- common/autotest_common.sh@829 -- # '[' -z 57148 ']' 00:05:02.376 23:38:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:02.376 23:38:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.376 23:38:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:02.376 23:38:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.376 23:38:32 -- common/autotest_common.sh@10 -- # set +x 00:05:02.376 [2024-12-13 23:38:32.921808] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:02.376 [2024-12-13 23:38:32.922029] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57148 ] 00:05:02.376 [2024-12-13 23:38:33.070023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.634 [2024-12-13 23:38:33.209696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.634 [2024-12-13 23:38:33.209769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.199 23:38:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.199 23:38:33 -- common/autotest_common.sh@862 -- # return 0 00:05:03.199 23:38:33 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.458 Malloc0 00:05:03.458 23:38:33 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.458 Malloc1 00:05:03.458 23:38:34 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.458 23:38:34 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.458 23:38:34 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.458 23:38:34 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:03.458 23:38:34 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.458 23:38:34 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:03.458 23:38:34 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.458 23:38:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.458 23:38:34 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.458 23:38:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:03.458 23:38:34 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.458 23:38:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:03.458 23:38:34 -- bdev/nbd_common.sh@12 -- # local i 00:05:03.458 23:38:34 -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:03.458 23:38:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.458 23:38:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:03.715 /dev/nbd0 00:05:03.715 23:38:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:03.715 23:38:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:03.715 23:38:34 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:03.715 23:38:34 -- common/autotest_common.sh@867 -- # local i 00:05:03.715 23:38:34 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:03.715 23:38:34 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:03.715 23:38:34 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:03.715 23:38:34 -- common/autotest_common.sh@871 -- # break 00:05:03.715 23:38:34 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:03.715 23:38:34 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:03.715 23:38:34 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.715 1+0 records in 00:05:03.715 1+0 records out 00:05:03.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288653 s, 14.2 MB/s 00:05:03.715 23:38:34 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.715 23:38:34 -- common/autotest_common.sh@884 -- # size=4096 00:05:03.715 23:38:34 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.715 23:38:34 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:03.715 23:38:34 -- common/autotest_common.sh@887 -- # return 0 00:05:03.715 23:38:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.715 23:38:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.715 23:38:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:03.973 /dev/nbd1 00:05:03.973 23:38:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:03.973 23:38:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:03.973 23:38:34 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:03.973 23:38:34 -- common/autotest_common.sh@867 -- # local i 00:05:03.973 23:38:34 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:03.973 23:38:34 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:03.973 23:38:34 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:03.973 23:38:34 -- common/autotest_common.sh@871 -- # break 00:05:03.973 23:38:34 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:03.973 23:38:34 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:03.973 23:38:34 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.973 1+0 records in 00:05:03.973 1+0 records out 00:05:03.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000171024 s, 23.9 MB/s 00:05:03.973 23:38:34 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.973 23:38:34 -- common/autotest_common.sh@884 -- # size=4096 00:05:03.973 23:38:34 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.973 23:38:34 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:03.973 23:38:34 -- common/autotest_common.sh@887 -- # return 0 00:05:03.973 
23:38:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.973 23:38:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.973 23:38:34 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.973 23:38:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.973 23:38:34 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:04.232 { 00:05:04.232 "nbd_device": "/dev/nbd0", 00:05:04.232 "bdev_name": "Malloc0" 00:05:04.232 }, 00:05:04.232 { 00:05:04.232 "nbd_device": "/dev/nbd1", 00:05:04.232 "bdev_name": "Malloc1" 00:05:04.232 } 00:05:04.232 ]' 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:04.232 { 00:05:04.232 "nbd_device": "/dev/nbd0", 00:05:04.232 "bdev_name": "Malloc0" 00:05:04.232 }, 00:05:04.232 { 00:05:04.232 "nbd_device": "/dev/nbd1", 00:05:04.232 "bdev_name": "Malloc1" 00:05:04.232 } 00:05:04.232 ]' 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:04.232 /dev/nbd1' 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:04.232 /dev/nbd1' 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@65 -- # count=2 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@95 -- # count=2 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:04.232 256+0 records in 00:05:04.232 256+0 records out 00:05:04.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00656394 s, 160 MB/s 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:04.232 256+0 records in 00:05:04.232 256+0 records out 00:05:04.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016121 s, 65.0 MB/s 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:04.232 256+0 records in 00:05:04.232 256+0 records out 00:05:04.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200012 s, 52.4 MB/s 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@51 -- # local i 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.232 23:38:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:04.490 23:38:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:04.490 23:38:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:04.490 23:38:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:04.490 23:38:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.490 23:38:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.490 23:38:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:04.490 23:38:35 -- bdev/nbd_common.sh@41 -- # break 00:05:04.490 23:38:35 -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.490 23:38:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.490 23:38:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:04.748 23:38:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:04.748 23:38:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:04.748 23:38:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:04.748 23:38:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.748 23:38:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.748 23:38:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:04.748 23:38:35 -- bdev/nbd_common.sh@41 -- # break 00:05:04.748 23:38:35 -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.748 23:38:35 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.748 23:38:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.748 23:38:35 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.006 23:38:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:05.006 23:38:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.006 23:38:35 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:05.006 23:38:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:05.006 23:38:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.006 23:38:35 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:05.006 23:38:35 -- bdev/nbd_common.sh@65 -- # true 00:05:05.006 23:38:35 -- bdev/nbd_common.sh@65 -- # count=0 00:05:05.006 
23:38:35 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:05.006 23:38:35 -- bdev/nbd_common.sh@104 -- # count=0 00:05:05.006 23:38:35 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:05.006 23:38:35 -- bdev/nbd_common.sh@109 -- # return 0 00:05:05.006 23:38:35 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:05.264 23:38:35 -- event/event.sh@35 -- # sleep 3 00:05:05.829 [2024-12-13 23:38:36.428669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.087 [2024-12-13 23:38:36.563999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.087 [2024-12-13 23:38:36.564004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.087 [2024-12-13 23:38:36.669310] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:06.087 [2024-12-13 23:38:36.669362] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:08.616 spdk_app_start Round 1 00:05:08.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:08.616 23:38:38 -- event/event.sh@23 -- # for i in {0..2} 00:05:08.616 23:38:38 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:08.616 23:38:38 -- event/event.sh@25 -- # waitforlisten 57148 /var/tmp/spdk-nbd.sock 00:05:08.616 23:38:38 -- common/autotest_common.sh@829 -- # '[' -z 57148 ']' 00:05:08.616 23:38:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:08.616 23:38:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.616 23:38:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
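Round 0 above already shows the whole nbd_common.sh verify cycle that every round repeats: fill a scratch file with random data, push it through each kernel NBD device with O_DIRECT, then read it back and compare byte-for-byte before detaching. Condensed from the dd/cmp lines in the trace:

    # 1 MiB of random data (256 x 4 KiB blocks), written and verified per device.
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write through the NBD device
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                              # byte-for-byte readback check
    done
    rm "$tmp"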
00:05:08.616 23:38:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.616 23:38:38 -- common/autotest_common.sh@10 -- # set +x 00:05:08.616 23:38:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.616 23:38:39 -- common/autotest_common.sh@862 -- # return 0 00:05:08.616 23:38:39 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.616 Malloc0 00:05:08.616 23:38:39 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.874 Malloc1 00:05:08.874 23:38:39 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.874 23:38:39 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.874 23:38:39 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.874 23:38:39 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:08.874 23:38:39 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.874 23:38:39 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:08.874 23:38:39 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.874 23:38:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.874 23:38:39 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.874 23:38:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:08.874 23:38:39 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.874 23:38:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:08.874 23:38:39 -- bdev/nbd_common.sh@12 -- # local i 00:05:08.874 23:38:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:08.874 23:38:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.874 23:38:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:09.132 /dev/nbd0 00:05:09.132 23:38:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:09.132 23:38:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:09.132 23:38:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:09.132 23:38:39 -- common/autotest_common.sh@867 -- # local i 00:05:09.132 23:38:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:09.132 23:38:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:09.132 23:38:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:09.132 23:38:39 -- common/autotest_common.sh@871 -- # break 00:05:09.132 23:38:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:09.132 23:38:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:09.132 23:38:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.132 1+0 records in 00:05:09.132 1+0 records out 00:05:09.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177058 s, 23.1 MB/s 00:05:09.132 23:38:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.132 23:38:39 -- common/autotest_common.sh@884 -- # size=4096 00:05:09.132 23:38:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.132 23:38:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:09.132 23:38:39 -- common/autotest_common.sh@887 -- # return 0 00:05:09.132 23:38:39 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.132 23:38:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.132 23:38:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:09.132 /dev/nbd1 00:05:09.389 23:38:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:09.389 23:38:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:09.389 23:38:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:09.389 23:38:39 -- common/autotest_common.sh@867 -- # local i 00:05:09.389 23:38:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:09.389 23:38:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:09.389 23:38:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:09.389 23:38:39 -- common/autotest_common.sh@871 -- # break 00:05:09.389 23:38:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:09.389 23:38:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:09.389 23:38:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.389 1+0 records in 00:05:09.389 1+0 records out 00:05:09.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229913 s, 17.8 MB/s 00:05:09.389 23:38:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.389 23:38:39 -- common/autotest_common.sh@884 -- # size=4096 00:05:09.389 23:38:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.389 23:38:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:09.389 23:38:39 -- common/autotest_common.sh@887 -- # return 0 00:05:09.389 23:38:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.389 23:38:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.389 23:38:39 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.389 23:38:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.389 23:38:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.389 23:38:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:09.389 { 00:05:09.389 "nbd_device": "/dev/nbd0", 00:05:09.389 "bdev_name": "Malloc0" 00:05:09.389 }, 00:05:09.389 { 00:05:09.389 "nbd_device": "/dev/nbd1", 00:05:09.389 "bdev_name": "Malloc1" 00:05:09.389 } 00:05:09.389 ]' 00:05:09.390 23:38:40 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:09.390 { 00:05:09.390 "nbd_device": "/dev/nbd0", 00:05:09.390 "bdev_name": "Malloc0" 00:05:09.390 }, 00:05:09.390 { 00:05:09.390 "nbd_device": "/dev/nbd1", 00:05:09.390 "bdev_name": "Malloc1" 00:05:09.390 } 00:05:09.390 ]' 00:05:09.390 23:38:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.390 23:38:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:09.390 /dev/nbd1' 00:05:09.390 23:38:40 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:09.390 /dev/nbd1' 00:05:09.390 23:38:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.390 23:38:40 -- bdev/nbd_common.sh@65 -- # count=2 00:05:09.390 23:38:40 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:09.390 23:38:40 -- bdev/nbd_common.sh@95 -- # count=2 00:05:09.390 23:38:40 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:09.390 23:38:40 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:09.390 23:38:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:09.390 23:38:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.390 23:38:40 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:09.390 23:38:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:09.390 23:38:40 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:09.390 23:38:40 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:09.647 256+0 records in 00:05:09.647 256+0 records out 00:05:09.647 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00781885 s, 134 MB/s 00:05:09.647 23:38:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.647 23:38:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:09.647 256+0 records in 00:05:09.647 256+0 records out 00:05:09.647 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177743 s, 59.0 MB/s 00:05:09.647 23:38:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.647 23:38:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:09.647 256+0 records in 00:05:09.647 256+0 records out 00:05:09.647 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223627 s, 46.9 MB/s 00:05:09.647 23:38:40 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:09.647 23:38:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.647 23:38:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.647 23:38:40 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:09.647 23:38:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:09.647 23:38:40 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:09.647 23:38:40 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:09.647 23:38:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.647 23:38:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:09.647 23:38:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.647 23:38:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:09.648 23:38:40 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:09.648 23:38:40 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:09.648 23:38:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.648 23:38:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.648 23:38:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:09.648 23:38:40 -- bdev/nbd_common.sh@51 -- # local i 00:05:09.648 23:38:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.648 23:38:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:09.905 23:38:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:09.905 23:38:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:09.905 23:38:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:09.905 23:38:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.905 23:38:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.905 23:38:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:05:09.905 23:38:40 -- bdev/nbd_common.sh@41 -- # break 00:05:09.905 23:38:40 -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.905 23:38:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.905 23:38:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:09.905 23:38:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:09.905 23:38:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:09.905 23:38:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:09.905 23:38:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.905 23:38:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.905 23:38:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:09.905 23:38:40 -- bdev/nbd_common.sh@41 -- # break 00:05:09.905 23:38:40 -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.905 23:38:40 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.905 23:38:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.905 23:38:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.163 23:38:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:10.163 23:38:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:10.163 23:38:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.163 23:38:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:10.163 23:38:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.163 23:38:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:10.163 23:38:40 -- bdev/nbd_common.sh@65 -- # true 00:05:10.163 23:38:40 -- bdev/nbd_common.sh@65 -- # count=0 00:05:10.163 23:38:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:10.163 23:38:40 -- bdev/nbd_common.sh@104 -- # count=0 00:05:10.163 23:38:40 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:10.163 23:38:40 -- bdev/nbd_common.sh@109 -- # return 0 00:05:10.163 23:38:40 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:10.421 23:38:41 -- event/event.sh@35 -- # sleep 3 00:05:11.354 [2024-12-13 23:38:41.874394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.354 [2024-12-13 23:38:42.024045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.354 [2024-12-13 23:38:42.024168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.612 [2024-12-13 23:38:42.130156] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:11.612 [2024-12-13 23:38:42.130211] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:13.512 spdk_app_start Round 2 00:05:13.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
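Two small helpers recur in every round. nbd_get_count pipes the nbd_get_disks JSON through jq -r '.[] | .nbd_device' and counts matches with grep -c /dev/nbd, while the waitfornbd/waitfornbd_exit pair both poll /proc/partitions, one until the device appears and one until it is gone after nbd_stop_disk. A sketch of that polling loop; the 20-try bound and the grep come straight from the trace, but the sleep interval is an assumption, since it never shows in xtrace:

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # The device is up once the kernel lists it in /proc/partitions.
            grep -q -w "$nbd_name" /proc/partitions && return 0
            sleep 0.1    # assumed pacing; not visible in the trace
        done
        return 1
    }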
00:05:13.512 23:38:44 -- event/event.sh@23 -- # for i in {0..2} 00:05:13.512 23:38:44 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:13.512 23:38:44 -- event/event.sh@25 -- # waitforlisten 57148 /var/tmp/spdk-nbd.sock 00:05:13.512 23:38:44 -- common/autotest_common.sh@829 -- # '[' -z 57148 ']' 00:05:13.512 23:38:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.512 23:38:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.512 23:38:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:13.512 23:38:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.512 23:38:44 -- common/autotest_common.sh@10 -- # set +x 00:05:13.770 23:38:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.770 23:38:44 -- common/autotest_common.sh@862 -- # return 0 00:05:13.770 23:38:44 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.028 Malloc0 00:05:14.028 23:38:44 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.028 Malloc1 00:05:14.028 23:38:44 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.028 23:38:44 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.028 23:38:44 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.028 23:38:44 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:14.028 23:38:44 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.028 23:38:44 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:14.028 23:38:44 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.028 23:38:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.028 23:38:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.028 23:38:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:14.028 23:38:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.028 23:38:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:14.028 23:38:44 -- bdev/nbd_common.sh@12 -- # local i 00:05:14.028 23:38:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:14.028 23:38:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.028 23:38:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:14.355 /dev/nbd0 00:05:14.355 23:38:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:14.355 23:38:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:14.355 23:38:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:14.355 23:38:44 -- common/autotest_common.sh@867 -- # local i 00:05:14.355 23:38:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:14.355 23:38:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:14.355 23:38:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:14.355 23:38:44 -- common/autotest_common.sh@871 -- # break 00:05:14.355 23:38:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:14.355 23:38:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:14.355 23:38:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:14.355 1+0 records in 00:05:14.355 1+0 records out 00:05:14.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458003 s, 8.9 MB/s 00:05:14.355 23:38:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.355 23:38:44 -- common/autotest_common.sh@884 -- # size=4096 00:05:14.355 23:38:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.355 23:38:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:14.355 23:38:44 -- common/autotest_common.sh@887 -- # return 0 00:05:14.355 23:38:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.355 23:38:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.355 23:38:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:14.612 /dev/nbd1 00:05:14.612 23:38:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:14.612 23:38:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:14.612 23:38:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:14.612 23:38:45 -- common/autotest_common.sh@867 -- # local i 00:05:14.612 23:38:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:14.612 23:38:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:14.612 23:38:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:14.612 23:38:45 -- common/autotest_common.sh@871 -- # break 00:05:14.612 23:38:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:14.612 23:38:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:14.612 23:38:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.612 1+0 records in 00:05:14.612 1+0 records out 00:05:14.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019408 s, 21.1 MB/s 00:05:14.613 23:38:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.613 23:38:45 -- common/autotest_common.sh@884 -- # size=4096 00:05:14.613 23:38:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.613 23:38:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:14.613 23:38:45 -- common/autotest_common.sh@887 -- # return 0 00:05:14.613 23:38:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.613 23:38:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.613 23:38:45 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.613 23:38:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.613 23:38:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:14.871 { 00:05:14.871 "nbd_device": "/dev/nbd0", 00:05:14.871 "bdev_name": "Malloc0" 00:05:14.871 }, 00:05:14.871 { 00:05:14.871 "nbd_device": "/dev/nbd1", 00:05:14.871 "bdev_name": "Malloc1" 00:05:14.871 } 00:05:14.871 ]' 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:14.871 { 00:05:14.871 "nbd_device": "/dev/nbd0", 00:05:14.871 "bdev_name": "Malloc0" 00:05:14.871 }, 00:05:14.871 { 00:05:14.871 "nbd_device": "/dev/nbd1", 00:05:14.871 "bdev_name": "Malloc1" 00:05:14.871 } 00:05:14.871 ]' 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name='/dev/nbd0 00:05:14.871 /dev/nbd1' 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:14.871 /dev/nbd1' 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@65 -- # count=2 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@95 -- # count=2 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:14.871 256+0 records in 00:05:14.871 256+0 records out 00:05:14.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111986 s, 93.6 MB/s 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:14.871 256+0 records in 00:05:14.871 256+0 records out 00:05:14.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020082 s, 52.2 MB/s 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:14.871 256+0 records in 00:05:14.871 256+0 records out 00:05:14.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249551 s, 42.0 MB/s 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.871 23:38:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.872 23:38:45 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:14.872 23:38:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.872 23:38:45 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:14.872 23:38:45 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:14.872 23:38:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.872 23:38:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:14.872 23:38:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.872 23:38:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:14.872 23:38:45 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.872 23:38:45 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:14.872 23:38:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.872 23:38:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.872 23:38:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:14.872 23:38:45 -- bdev/nbd_common.sh@51 -- # local i 00:05:14.872 23:38:45 
-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.872 23:38:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:15.130 23:38:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:15.130 23:38:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:15.130 23:38:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:15.130 23:38:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.130 23:38:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.130 23:38:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:15.130 23:38:45 -- bdev/nbd_common.sh@41 -- # break 00:05:15.130 23:38:45 -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.130 23:38:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.130 23:38:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:15.388 23:38:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:15.388 23:38:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:15.388 23:38:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:15.388 23:38:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.388 23:38:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.388 23:38:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:15.388 23:38:45 -- bdev/nbd_common.sh@41 -- # break 00:05:15.388 23:38:45 -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.388 23:38:45 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.388 23:38:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.388 23:38:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.388 23:38:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:15.388 23:38:46 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:15.388 23:38:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.646 23:38:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:15.646 23:38:46 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:15.646 23:38:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.646 23:38:46 -- bdev/nbd_common.sh@65 -- # true 00:05:15.646 23:38:46 -- bdev/nbd_common.sh@65 -- # count=0 00:05:15.646 23:38:46 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:15.646 23:38:46 -- bdev/nbd_common.sh@104 -- # count=0 00:05:15.646 23:38:46 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:15.646 23:38:46 -- bdev/nbd_common.sh@109 -- # return 0 00:05:15.646 23:38:46 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:15.904 23:38:46 -- event/event.sh@35 -- # sleep 3 00:05:16.470 [2024-12-13 23:38:47.157615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.729 [2024-12-13 23:38:47.296289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.729 [2024-12-13 23:38:47.296401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.729 [2024-12-13 23:38:47.401005] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:16.729 [2024-12-13 23:38:47.401062] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
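With Round 2 torn down, the shape of app_repeat itself is visible: the harness never kills the app from outside between rounds; it asks the target to shut itself down over RPC and sleeps while the app cycles into its next iteration. The per-round skeleton, as reconstructed from the event.sh lines in this trace (the bdev/NBD body is elided):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock       # wait for the RPC socket
        # ... create Malloc0/Malloc1, attach /dev/nbd0 and /dev/nbd1, write/verify, detach ...
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3                                                  # let the app restart for the next round
    done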
00:05:19.259 23:38:49 -- event/event.sh@38 -- # waitforlisten 57148 /var/tmp/spdk-nbd.sock 00:05:19.259 23:38:49 -- common/autotest_common.sh@829 -- # '[' -z 57148 ']' 00:05:19.259 23:38:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.259 23:38:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:19.259 23:38:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:19.259 23:38:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.259 23:38:49 -- common/autotest_common.sh@10 -- # set +x 00:05:19.259 23:38:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.259 23:38:49 -- common/autotest_common.sh@862 -- # return 0 00:05:19.259 23:38:49 -- event/event.sh@39 -- # killprocess 57148 00:05:19.259 23:38:49 -- common/autotest_common.sh@936 -- # '[' -z 57148 ']' 00:05:19.259 23:38:49 -- common/autotest_common.sh@940 -- # kill -0 57148 00:05:19.259 23:38:49 -- common/autotest_common.sh@941 -- # uname 00:05:19.259 23:38:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:19.259 23:38:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57148 00:05:19.259 killing process with pid 57148 00:05:19.259 23:38:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:19.259 23:38:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:19.259 23:38:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57148' 00:05:19.259 23:38:49 -- common/autotest_common.sh@955 -- # kill 57148 00:05:19.259 23:38:49 -- common/autotest_common.sh@960 -- # wait 57148 00:05:19.520 spdk_app_start is called in Round 0. 00:05:19.520 Shutdown signal received, stop current app iteration 00:05:19.520 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:19.520 spdk_app_start is called in Round 1. 00:05:19.520 Shutdown signal received, stop current app iteration 00:05:19.520 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:19.520 spdk_app_start is called in Round 2. 00:05:19.520 Shutdown signal received, stop current app iteration 00:05:19.520 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:19.520 spdk_app_start is called in Round 3. 
00:05:19.520 Shutdown signal received, stop current app iteration 00:05:19.520 23:38:50 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:19.520 23:38:50 -- event/event.sh@42 -- # return 0 00:05:19.520 00:05:19.520 real 0m17.356s 00:05:19.520 user 0m37.165s 00:05:19.520 sys 0m2.037s 00:05:19.520 23:38:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.520 ************************************ 00:05:19.520 END TEST app_repeat 00:05:19.520 ************************************ 00:05:19.520 23:38:50 -- common/autotest_common.sh@10 -- # set +x 00:05:19.780 23:38:50 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:19.780 23:38:50 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:19.780 23:38:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.780 23:38:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.780 23:38:50 -- common/autotest_common.sh@10 -- # set +x 00:05:19.780 ************************************ 00:05:19.780 START TEST cpu_locks 00:05:19.780 ************************************ 00:05:19.780 23:38:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:19.780 * Looking for test storage... 00:05:19.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:19.780 23:38:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:19.780 23:38:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:19.780 23:38:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:19.780 23:38:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:19.780 23:38:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:19.780 23:38:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:19.781 23:38:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:19.781 23:38:50 -- scripts/common.sh@335 -- # IFS=.-: 00:05:19.781 23:38:50 -- scripts/common.sh@335 -- # read -ra ver1 00:05:19.781 23:38:50 -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.781 23:38:50 -- scripts/common.sh@336 -- # read -ra ver2 00:05:19.781 23:38:50 -- scripts/common.sh@337 -- # local 'op=<' 00:05:19.781 23:38:50 -- scripts/common.sh@339 -- # ver1_l=2 00:05:19.781 23:38:50 -- scripts/common.sh@340 -- # ver2_l=1 00:05:19.781 23:38:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:19.781 23:38:50 -- scripts/common.sh@343 -- # case "$op" in 00:05:19.781 23:38:50 -- scripts/common.sh@344 -- # : 1 00:05:19.781 23:38:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:19.781 23:38:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.781 23:38:50 -- scripts/common.sh@364 -- # decimal 1 00:05:19.781 23:38:50 -- scripts/common.sh@352 -- # local d=1 00:05:19.781 23:38:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.781 23:38:50 -- scripts/common.sh@354 -- # echo 1 00:05:19.781 23:38:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:19.781 23:38:50 -- scripts/common.sh@365 -- # decimal 2 00:05:19.781 23:38:50 -- scripts/common.sh@352 -- # local d=2 00:05:19.781 23:38:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.781 23:38:50 -- scripts/common.sh@354 -- # echo 2 00:05:19.781 23:38:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:19.781 23:38:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:19.781 23:38:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:19.781 23:38:50 -- scripts/common.sh@367 -- # return 0 00:05:19.781 23:38:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.781 23:38:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:19.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.781 --rc genhtml_branch_coverage=1 00:05:19.781 --rc genhtml_function_coverage=1 00:05:19.781 --rc genhtml_legend=1 00:05:19.781 --rc geninfo_all_blocks=1 00:05:19.781 --rc geninfo_unexecuted_blocks=1 00:05:19.781 00:05:19.781 ' 00:05:19.781 23:38:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:19.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.781 --rc genhtml_branch_coverage=1 00:05:19.781 --rc genhtml_function_coverage=1 00:05:19.781 --rc genhtml_legend=1 00:05:19.781 --rc geninfo_all_blocks=1 00:05:19.781 --rc geninfo_unexecuted_blocks=1 00:05:19.781 00:05:19.781 ' 00:05:19.781 23:38:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:19.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.781 --rc genhtml_branch_coverage=1 00:05:19.781 --rc genhtml_function_coverage=1 00:05:19.781 --rc genhtml_legend=1 00:05:19.781 --rc geninfo_all_blocks=1 00:05:19.781 --rc geninfo_unexecuted_blocks=1 00:05:19.781 00:05:19.781 ' 00:05:19.781 23:38:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:19.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.781 --rc genhtml_branch_coverage=1 00:05:19.781 --rc genhtml_function_coverage=1 00:05:19.781 --rc genhtml_legend=1 00:05:19.781 --rc geninfo_all_blocks=1 00:05:19.781 --rc geninfo_unexecuted_blocks=1 00:05:19.781 00:05:19.781 ' 00:05:19.781 23:38:50 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:19.781 23:38:50 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:19.781 23:38:50 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:19.781 23:38:50 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:19.781 23:38:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.781 23:38:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.781 23:38:50 -- common/autotest_common.sh@10 -- # set +x 00:05:19.781 ************************************ 00:05:19.781 START TEST default_locks 00:05:19.781 ************************************ 00:05:19.781 23:38:50 -- common/autotest_common.sh@1114 -- # default_locks 00:05:19.781 23:38:50 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57572 00:05:19.781 23:38:50 -- event/cpu_locks.sh@47 -- # waitforlisten 57572 00:05:19.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
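cpu_locks now moves from events to process-level locking: an SPDK app takes an advisory flock on one lock file per claimed core (paths of the form /var/tmp/spdk_cpu_lock_*; the exact path is SPDK convention rather than something printed here), so default_locks only has to check that the freshly started spdk_tgt, pinned to core 0 with -m 0x1, is holding such a lock. The locks_exist check about to run below reduces to:

    # Does the process with this pid hold an spdk_cpu_lock flock?
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }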
00:05:19.781 23:38:50 -- common/autotest_common.sh@829 -- # '[' -z 57572 ']' 00:05:19.781 23:38:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.781 23:38:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.781 23:38:50 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.781 23:38:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.781 23:38:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.781 23:38:50 -- common/autotest_common.sh@10 -- # set +x 00:05:20.040 [2024-12-13 23:38:50.512063] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:20.040 [2024-12-13 23:38:50.512173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57572 ] 00:05:20.040 [2024-12-13 23:38:50.661113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.299 [2024-12-13 23:38:50.798744] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:20.299 [2024-12-13 23:38:50.798999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.556 23:38:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.556 23:38:51 -- common/autotest_common.sh@862 -- # return 0 00:05:20.556 23:38:51 -- event/cpu_locks.sh@49 -- # locks_exist 57572 00:05:20.556 23:38:51 -- event/cpu_locks.sh@22 -- # lslocks -p 57572 00:05:20.556 23:38:51 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.814 23:38:51 -- event/cpu_locks.sh@50 -- # killprocess 57572 00:05:20.814 23:38:51 -- common/autotest_common.sh@936 -- # '[' -z 57572 ']' 00:05:20.814 23:38:51 -- common/autotest_common.sh@940 -- # kill -0 57572 00:05:20.814 23:38:51 -- common/autotest_common.sh@941 -- # uname 00:05:20.814 23:38:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:20.814 23:38:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57572 00:05:20.814 killing process with pid 57572 00:05:20.814 23:38:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:20.814 23:38:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:20.814 23:38:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57572' 00:05:20.814 23:38:51 -- common/autotest_common.sh@955 -- # kill 57572 00:05:20.814 23:38:51 -- common/autotest_common.sh@960 -- # wait 57572 00:05:22.223 23:38:52 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57572 00:05:22.223 23:38:52 -- common/autotest_common.sh@650 -- # local es=0 00:05:22.223 23:38:52 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57572 00:05:22.223 23:38:52 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:22.223 23:38:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:22.223 23:38:52 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:22.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
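While the deliberately failing waitforlisten plays out below, note the shape of killprocess, which has now run three times in this section. It refuses to signal blindly: it checks that the pid is non-empty and still alive, resolves the command name with ps so a recycled pid (in the worst case, sudo) is never killed by mistake, and only then kills and reaps. A condensed sketch; the sudo branch and any non-Linux fallback in the real helper are elided:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1           # nothing to kill
        kill -0 "$pid" || return 1          # process already gone
        [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1   # never signal a recycled sudo pid (real helper does more)
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }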
00:05:22.223 ERROR: process (pid: 57572) is no longer running 00:05:22.223 23:38:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:22.223 23:38:52 -- common/autotest_common.sh@653 -- # waitforlisten 57572 00:05:22.223 23:38:52 -- common/autotest_common.sh@829 -- # '[' -z 57572 ']' 00:05:22.223 23:38:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.223 23:38:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.223 23:38:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.223 23:38:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.223 23:38:52 -- common/autotest_common.sh@10 -- # set +x 00:05:22.223 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57572) - No such process 00:05:22.223 23:38:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.223 23:38:52 -- common/autotest_common.sh@862 -- # return 1 00:05:22.223 23:38:52 -- common/autotest_common.sh@653 -- # es=1 00:05:22.223 23:38:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:22.223 23:38:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:22.223 23:38:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:22.223 23:38:52 -- event/cpu_locks.sh@54 -- # no_locks 00:05:22.223 23:38:52 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:22.223 23:38:52 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:22.223 23:38:52 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:22.223 00:05:22.223 real 0m2.195s 00:05:22.223 user 0m2.150s 00:05:22.223 sys 0m0.405s 00:05:22.223 23:38:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.223 23:38:52 -- common/autotest_common.sh@10 -- # set +x 00:05:22.223 ************************************ 00:05:22.223 END TEST default_locks 00:05:22.223 ************************************ 00:05:22.223 23:38:52 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:22.223 23:38:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.223 23:38:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.223 23:38:52 -- common/autotest_common.sh@10 -- # set +x 00:05:22.223 ************************************ 00:05:22.223 START TEST default_locks_via_rpc 00:05:22.223 ************************************ 00:05:22.223 23:38:52 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:22.223 23:38:52 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57625 00:05:22.223 23:38:52 -- event/cpu_locks.sh@63 -- # waitforlisten 57625 00:05:22.223 23:38:52 -- common/autotest_common.sh@829 -- # '[' -z 57625 ']' 00:05:22.223 23:38:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.223 23:38:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.223 23:38:52 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.223 23:38:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.223 23:38:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.223 23:38:52 -- common/autotest_common.sh@10 -- # set +x 00:05:22.223 [2024-12-13 23:38:52.749620] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:22.223 [2024-12-13 23:38:52.749812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57625 ] 00:05:22.223 [2024-12-13 23:38:52.890636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.481 [2024-12-13 23:38:53.030604] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:22.481 [2024-12-13 23:38:53.030870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.047 23:38:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.047 23:38:53 -- common/autotest_common.sh@862 -- # return 0 00:05:23.047 23:38:53 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:23.047 23:38:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.047 23:38:53 -- common/autotest_common.sh@10 -- # set +x 00:05:23.047 23:38:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.047 23:38:53 -- event/cpu_locks.sh@67 -- # no_locks 00:05:23.047 23:38:53 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:23.047 23:38:53 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:23.047 23:38:53 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:23.047 23:38:53 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:23.047 23:38:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.047 23:38:53 -- common/autotest_common.sh@10 -- # set +x 00:05:23.047 23:38:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.047 23:38:53 -- event/cpu_locks.sh@71 -- # locks_exist 57625 00:05:23.047 23:38:53 -- event/cpu_locks.sh@22 -- # lslocks -p 57625 00:05:23.047 23:38:53 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.047 23:38:53 -- event/cpu_locks.sh@73 -- # killprocess 57625 00:05:23.047 23:38:53 -- common/autotest_common.sh@936 -- # '[' -z 57625 ']' 00:05:23.047 23:38:53 -- common/autotest_common.sh@940 -- # kill -0 57625 00:05:23.047 23:38:53 -- common/autotest_common.sh@941 -- # uname 00:05:23.047 23:38:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:23.047 23:38:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57625 00:05:23.306 killing process with pid 57625 00:05:23.306 23:38:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:23.306 23:38:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:23.306 23:38:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57625' 00:05:23.306 23:38:53 -- common/autotest_common.sh@955 -- # kill 57625 00:05:23.306 23:38:53 -- common/autotest_common.sh@960 -- # wait 57625 00:05:24.239 00:05:24.239 real 0m2.259s 00:05:24.239 user 0m2.286s 00:05:24.239 sys 0m0.383s 00:05:24.239 23:38:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:24.239 ************************************ 00:05:24.239 END TEST default_locks_via_rpc 00:05:24.239 ************************************ 00:05:24.239 23:38:54 -- common/autotest_common.sh@10 -- # set +x 00:05:24.497 23:38:54 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:24.497 23:38:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.497 23:38:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.497 23:38:54 -- common/autotest_common.sh@10 -- # set +x 00:05:24.497 
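The default_locks_via_rpc run above toggles the core locks at runtime over the RPC socket instead of restarting the target. A hedged reproduction of the same sequence with SPDK's rpc.py — the two method names are taken from the rpc_cmd calls in the trace; the script path and invocation style are assumptions:

    sock=/var/tmp/spdk.sock
    scripts/rpc.py -s "$sock" framework_disable_cpumask_locks   # drop the flocks
    lslocks | grep spdk_cpu_lock || echo 'no spdk core locks held'
    scripts/rpc.py -s "$sock" framework_enable_cpumask_locks    # re-claim them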
************************************ 00:05:24.497 START TEST non_locking_app_on_locked_coremask 00:05:24.497 ************************************ 00:05:24.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.497 23:38:54 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:24.497 23:38:54 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57683 00:05:24.497 23:38:54 -- event/cpu_locks.sh@81 -- # waitforlisten 57683 /var/tmp/spdk.sock 00:05:24.497 23:38:54 -- common/autotest_common.sh@829 -- # '[' -z 57683 ']' 00:05:24.497 23:38:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.497 23:38:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.497 23:38:54 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.497 23:38:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.497 23:38:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.497 23:38:54 -- common/autotest_common.sh@10 -- # set +x 00:05:24.497 [2024-12-13 23:38:55.054551] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:24.497 [2024-12-13 23:38:55.054660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57683 ] 00:05:24.497 [2024-12-13 23:38:55.201581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.754 [2024-12-13 23:38:55.340591] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:24.754 [2024-12-13 23:38:55.340888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.319 23:38:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.319 23:38:55 -- common/autotest_common.sh@862 -- # return 0 00:05:25.319 23:38:55 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57699 00:05:25.319 23:38:55 -- event/cpu_locks.sh@85 -- # waitforlisten 57699 /var/tmp/spdk2.sock 00:05:25.319 23:38:55 -- common/autotest_common.sh@829 -- # '[' -z 57699 ']' 00:05:25.319 23:38:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.320 23:38:55 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:25.320 23:38:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.320 23:38:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.320 23:38:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.320 23:38:55 -- common/autotest_common.sh@10 -- # set +x 00:05:25.320 [2024-12-13 23:38:55.937096] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:25.320 [2024-12-13 23:38:55.937208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57699 ] 00:05:25.578 [2024-12-13 23:38:56.085442] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:25.578 [2024-12-13 23:38:56.085479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.836 [2024-12-13 23:38:56.364923] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:25.836 [2024-12-13 23:38:56.365096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.770 23:38:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.770 23:38:57 -- common/autotest_common.sh@862 -- # return 0 00:05:26.770 23:38:57 -- event/cpu_locks.sh@87 -- # locks_exist 57683 00:05:26.770 23:38:57 -- event/cpu_locks.sh@22 -- # lslocks -p 57683 00:05:26.770 23:38:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.028 23:38:57 -- event/cpu_locks.sh@89 -- # killprocess 57683 00:05:27.028 23:38:57 -- common/autotest_common.sh@936 -- # '[' -z 57683 ']' 00:05:27.028 23:38:57 -- common/autotest_common.sh@940 -- # kill -0 57683 00:05:27.028 23:38:57 -- common/autotest_common.sh@941 -- # uname 00:05:27.028 23:38:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:27.028 23:38:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57683 00:05:27.028 killing process with pid 57683 00:05:27.028 23:38:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:27.028 23:38:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:27.028 23:38:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57683' 00:05:27.028 23:38:57 -- common/autotest_common.sh@955 -- # kill 57683 00:05:27.028 23:38:57 -- common/autotest_common.sh@960 -- # wait 57683 00:05:29.566 23:39:00 -- event/cpu_locks.sh@90 -- # killprocess 57699 00:05:29.566 23:39:00 -- common/autotest_common.sh@936 -- # '[' -z 57699 ']' 00:05:29.566 23:39:00 -- common/autotest_common.sh@940 -- # kill -0 57699 00:05:29.566 23:39:00 -- common/autotest_common.sh@941 -- # uname 00:05:29.566 23:39:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:29.566 23:39:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57699 00:05:29.566 killing process with pid 57699 00:05:29.566 23:39:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:29.566 23:39:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:29.566 23:39:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57699' 00:05:29.566 23:39:00 -- common/autotest_common.sh@955 -- # kill 57699 00:05:29.566 23:39:00 -- common/autotest_common.sh@960 -- # wait 57699 00:05:30.957 00:05:30.957 real 0m6.305s 00:05:30.957 user 0m6.692s 00:05:30.957 sys 0m0.794s 00:05:30.957 23:39:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:30.957 ************************************ 00:05:30.957 END TEST non_locking_app_on_locked_coremask 00:05:30.957 ************************************ 00:05:30.957 23:39:01 -- common/autotest_common.sh@10 -- # set +x 00:05:30.957 23:39:01 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:30.957 23:39:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.957 23:39:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.957 23:39:01 -- common/autotest_common.sh@10 -- # set +x 00:05:30.957 ************************************ 00:05:30.957 START TEST locking_app_on_unlocked_coremask 00:05:30.957 ************************************ 00:05:30.957 23:39:01 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:05:30.957 23:39:01 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=57792 00:05:30.957 23:39:01 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:30.957 23:39:01 -- event/cpu_locks.sh@99 -- # waitforlisten 57792 /var/tmp/spdk.sock 00:05:30.957 23:39:01 -- common/autotest_common.sh@829 -- # '[' -z 57792 ']' 00:05:30.957 23:39:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.957 23:39:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.957 23:39:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.957 23:39:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.957 23:39:01 -- common/autotest_common.sh@10 -- # set +x 00:05:30.957 [2024-12-13 23:39:01.411987] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:30.957 [2024-12-13 23:39:01.412099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57792 ] 00:05:30.957 [2024-12-13 23:39:01.557456] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:30.957 [2024-12-13 23:39:01.557499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.216 [2024-12-13 23:39:01.698697] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:31.216 [2024-12-13 23:39:01.698843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.782 23:39:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.782 23:39:02 -- common/autotest_common.sh@862 -- # return 0 00:05:31.782 23:39:02 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=57808 00:05:31.782 23:39:02 -- event/cpu_locks.sh@103 -- # waitforlisten 57808 /var/tmp/spdk2.sock 00:05:31.782 23:39:02 -- common/autotest_common.sh@829 -- # '[' -z 57808 ']' 00:05:31.782 23:39:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.782 23:39:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.782 23:39:02 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:31.782 23:39:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.782 23:39:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.782 23:39:02 -- common/autotest_common.sh@10 -- # set +x 00:05:31.782 [2024-12-13 23:39:02.277555] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:31.782 [2024-12-13 23:39:02.277842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57808 ] 00:05:31.783 [2024-12-13 23:39:02.426587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.040 [2024-12-13 23:39:02.706859] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:32.040 [2024-12-13 23:39:02.707009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.417 23:39:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.417 23:39:03 -- common/autotest_common.sh@862 -- # return 0 00:05:33.417 23:39:03 -- event/cpu_locks.sh@105 -- # locks_exist 57808 00:05:33.417 23:39:03 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.417 23:39:03 -- event/cpu_locks.sh@22 -- # lslocks -p 57808 00:05:33.417 23:39:04 -- event/cpu_locks.sh@107 -- # killprocess 57792 00:05:33.417 23:39:04 -- common/autotest_common.sh@936 -- # '[' -z 57792 ']' 00:05:33.417 23:39:04 -- common/autotest_common.sh@940 -- # kill -0 57792 00:05:33.417 23:39:04 -- common/autotest_common.sh@941 -- # uname 00:05:33.417 23:39:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:33.417 23:39:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57792 00:05:33.417 killing process with pid 57792 00:05:33.417 23:39:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:33.417 23:39:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:33.417 23:39:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57792' 00:05:33.417 23:39:04 -- common/autotest_common.sh@955 -- # kill 57792 00:05:33.417 23:39:04 -- common/autotest_common.sh@960 -- # wait 57792 00:05:36.718 23:39:07 -- event/cpu_locks.sh@108 -- # killprocess 57808 00:05:36.718 23:39:07 -- common/autotest_common.sh@936 -- # '[' -z 57808 ']' 00:05:36.718 23:39:07 -- common/autotest_common.sh@940 -- # kill -0 57808 00:05:36.718 23:39:07 -- common/autotest_common.sh@941 -- # uname 00:05:36.718 23:39:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:36.719 23:39:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57808 00:05:36.719 killing process with pid 57808 00:05:36.719 23:39:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:36.719 23:39:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:36.719 23:39:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57808' 00:05:36.719 23:39:07 -- common/autotest_common.sh@955 -- # kill 57808 00:05:36.719 23:39:07 -- common/autotest_common.sh@960 -- # wait 57808 00:05:38.117 ************************************ 00:05:38.117 END TEST locking_app_on_unlocked_coremask 00:05:38.117 ************************************ 00:05:38.117 00:05:38.117 real 0m7.151s 00:05:38.117 user 0m7.528s 00:05:38.117 sys 0m0.798s 00:05:38.117 23:39:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.117 23:39:08 -- common/autotest_common.sh@10 -- # set +x 00:05:38.117 23:39:08 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:38.117 23:39:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.117 23:39:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.117 23:39:08 -- common/autotest_common.sh@10 -- # set +x 
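Each test above tears its targets down through the same killprocess helper. A sketch of that flow as it reads off the autotest_common.sh trace lines (@936-@960); the branch bodies are abbreviated and the sudo path is omitted, so treat this as a reconstruction rather than the script itself:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return                              # still running?
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # spdk_tgt shows up as reactor_0
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                           # reap it so nothing leaks into the next test
    }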
00:05:38.117 ************************************ 00:05:38.117 START TEST locking_app_on_locked_coremask 00:05:38.117 ************************************ 00:05:38.117 23:39:08 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:05:38.117 23:39:08 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=57912 00:05:38.117 23:39:08 -- event/cpu_locks.sh@116 -- # waitforlisten 57912 /var/tmp/spdk.sock 00:05:38.117 23:39:08 -- common/autotest_common.sh@829 -- # '[' -z 57912 ']' 00:05:38.117 23:39:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.117 23:39:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.117 23:39:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.117 23:39:08 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.117 23:39:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.117 23:39:08 -- common/autotest_common.sh@10 -- # set +x 00:05:38.117 [2024-12-13 23:39:08.602927] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:38.117 [2024-12-13 23:39:08.603017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57912 ] 00:05:38.117 [2024-12-13 23:39:08.737776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.374 [2024-12-13 23:39:08.879014] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:38.374 [2024-12-13 23:39:08.879169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.939 23:39:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.939 23:39:09 -- common/autotest_common.sh@862 -- # return 0 00:05:38.939 23:39:09 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=57928 00:05:38.939 23:39:09 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 57928 /var/tmp/spdk2.sock 00:05:38.939 23:39:09 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:38.939 23:39:09 -- common/autotest_common.sh@650 -- # local es=0 00:05:38.939 23:39:09 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57928 /var/tmp/spdk2.sock 00:05:38.939 23:39:09 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:38.939 23:39:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.939 23:39:09 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:38.939 23:39:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.939 23:39:09 -- common/autotest_common.sh@653 -- # waitforlisten 57928 /var/tmp/spdk2.sock 00:05:38.939 23:39:09 -- common/autotest_common.sh@829 -- # '[' -z 57928 ']' 00:05:38.939 23:39:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.940 23:39:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.940 23:39:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:38.940 23:39:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.940 23:39:09 -- common/autotest_common.sh@10 -- # set +x 00:05:38.940 [2024-12-13 23:39:09.502144] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:38.940 [2024-12-13 23:39:09.502267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57928 ] 00:05:38.940 [2024-12-13 23:39:09.651613] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 57912 has claimed it. 00:05:38.940 [2024-12-13 23:39:09.651661] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:39.505 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57928) - No such process 00:05:39.505 ERROR: process (pid: 57928) is no longer running 00:05:39.505 23:39:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.505 23:39:10 -- common/autotest_common.sh@862 -- # return 1 00:05:39.505 23:39:10 -- common/autotest_common.sh@653 -- # es=1 00:05:39.505 23:39:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:39.505 23:39:10 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:39.505 23:39:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:39.505 23:39:10 -- event/cpu_locks.sh@122 -- # locks_exist 57912 00:05:39.505 23:39:10 -- event/cpu_locks.sh@22 -- # lslocks -p 57912 00:05:39.505 23:39:10 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.764 23:39:10 -- event/cpu_locks.sh@124 -- # killprocess 57912 00:05:39.764 23:39:10 -- common/autotest_common.sh@936 -- # '[' -z 57912 ']' 00:05:39.764 23:39:10 -- common/autotest_common.sh@940 -- # kill -0 57912 00:05:39.764 23:39:10 -- common/autotest_common.sh@941 -- # uname 00:05:39.764 23:39:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:39.764 23:39:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57912 00:05:39.764 23:39:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:39.764 23:39:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:39.764 killing process with pid 57912 00:05:39.764 23:39:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57912' 00:05:39.764 23:39:10 -- common/autotest_common.sh@955 -- # kill 57912 00:05:39.764 23:39:10 -- common/autotest_common.sh@960 -- # wait 57912 00:05:41.138 00:05:41.138 real 0m2.990s 00:05:41.138 user 0m3.208s 00:05:41.138 sys 0m0.506s 00:05:41.138 ************************************ 00:05:41.138 END TEST locking_app_on_locked_coremask 00:05:41.138 ************************************ 00:05:41.138 23:39:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.138 23:39:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.138 23:39:11 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:41.138 23:39:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.138 23:39:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.138 23:39:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.138 ************************************ 00:05:41.138 START TEST locking_overlapped_coremask 00:05:41.138 ************************************ 00:05:41.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:41.138 23:39:11 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:05:41.138 23:39:11 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=57981 00:05:41.138 23:39:11 -- event/cpu_locks.sh@133 -- # waitforlisten 57981 /var/tmp/spdk.sock 00:05:41.138 23:39:11 -- common/autotest_common.sh@829 -- # '[' -z 57981 ']' 00:05:41.138 23:39:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.138 23:39:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.138 23:39:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.138 23:39:11 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:41.138 23:39:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.138 23:39:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.138 [2024-12-13 23:39:11.662549] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:41.138 [2024-12-13 23:39:11.663782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57981 ] 00:05:41.138 [2024-12-13 23:39:11.818340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:41.396 [2024-12-13 23:39:11.964146] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:41.396 [2024-12-13 23:39:11.964536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.397 [2024-12-13 23:39:11.964756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.397 [2024-12-13 23:39:11.964835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.962 23:39:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.962 23:39:12 -- common/autotest_common.sh@862 -- # return 0 00:05:41.962 23:39:12 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=57999 00:05:41.962 23:39:12 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 57999 /var/tmp/spdk2.sock 00:05:41.962 23:39:12 -- common/autotest_common.sh@650 -- # local es=0 00:05:41.962 23:39:12 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57999 /var/tmp/spdk2.sock 00:05:41.962 23:39:12 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:41.962 23:39:12 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:41.962 23:39:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.962 23:39:12 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:41.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.962 23:39:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.962 23:39:12 -- common/autotest_common.sh@653 -- # waitforlisten 57999 /var/tmp/spdk2.sock 00:05:41.962 23:39:12 -- common/autotest_common.sh@829 -- # '[' -z 57999 ']' 00:05:41.962 23:39:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.962 23:39:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.962 23:39:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:41.962 23:39:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.962 23:39:12 -- common/autotest_common.sh@10 -- # set +x 00:05:41.962 [2024-12-13 23:39:12.538682] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:41.962 [2024-12-13 23:39:12.539278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57999 ] 00:05:42.221 [2024-12-13 23:39:12.693637] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57981 has claimed it. 00:05:42.221 [2024-12-13 23:39:12.693689] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:42.479 ERROR: process (pid: 57999) is no longer running 00:05:42.479 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57999) - No such process 00:05:42.479 23:39:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.479 23:39:13 -- common/autotest_common.sh@862 -- # return 1 00:05:42.479 23:39:13 -- common/autotest_common.sh@653 -- # es=1 00:05:42.479 23:39:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:42.479 23:39:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:42.479 23:39:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:42.479 23:39:13 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:42.479 23:39:13 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:42.479 23:39:13 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:42.479 23:39:13 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:42.479 23:39:13 -- event/cpu_locks.sh@141 -- # killprocess 57981 00:05:42.479 23:39:13 -- common/autotest_common.sh@936 -- # '[' -z 57981 ']' 00:05:42.479 23:39:13 -- common/autotest_common.sh@940 -- # kill -0 57981 00:05:42.479 23:39:13 -- common/autotest_common.sh@941 -- # uname 00:05:42.479 23:39:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:42.479 23:39:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57981 00:05:42.479 23:39:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:42.479 23:39:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:42.479 23:39:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57981' 00:05:42.479 killing process with pid 57981 00:05:42.479 23:39:13 -- common/autotest_common.sh@955 -- # kill 57981 00:05:42.479 23:39:13 -- common/autotest_common.sh@960 -- # wait 57981 00:05:43.854 00:05:43.854 real 0m2.755s 00:05:43.854 user 0m7.190s 00:05:43.854 sys 0m0.410s 00:05:43.854 23:39:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.854 23:39:14 -- common/autotest_common.sh@10 -- # set +x 00:05:43.854 ************************************ 00:05:43.854 END TEST locking_overlapped_coremask 00:05:43.854 ************************************ 00:05:43.854 23:39:14 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:43.854 23:39:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.854 23:39:14 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.854 23:39:14 -- common/autotest_common.sh@10 -- # set +x 00:05:43.854 ************************************ 00:05:43.854 START TEST locking_overlapped_coremask_via_rpc 00:05:43.854 ************************************ 00:05:43.854 23:39:14 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:05:43.854 23:39:14 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58052 00:05:43.854 23:39:14 -- event/cpu_locks.sh@149 -- # waitforlisten 58052 /var/tmp/spdk.sock 00:05:43.854 23:39:14 -- common/autotest_common.sh@829 -- # '[' -z 58052 ']' 00:05:43.854 23:39:14 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:43.854 23:39:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.854 23:39:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.854 23:39:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.854 23:39:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.854 23:39:14 -- common/autotest_common.sh@10 -- # set +x 00:05:43.854 [2024-12-13 23:39:14.481582] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:43.854 [2024-12-13 23:39:14.481824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58052 ] 00:05:44.113 [2024-12-13 23:39:14.629804] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:44.113 [2024-12-13 23:39:14.630321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.113 [2024-12-13 23:39:14.775547] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:44.113 [2024-12-13 23:39:14.775912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.113 [2024-12-13 23:39:14.776247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.113 [2024-12-13 23:39:14.776176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.679 23:39:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.679 23:39:15 -- common/autotest_common.sh@862 -- # return 0 00:05:44.679 23:39:15 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58070 00:05:44.679 23:39:15 -- event/cpu_locks.sh@153 -- # waitforlisten 58070 /var/tmp/spdk2.sock 00:05:44.679 23:39:15 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:44.679 23:39:15 -- common/autotest_common.sh@829 -- # '[' -z 58070 ']' 00:05:44.679 23:39:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.679 23:39:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.679 23:39:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:44.679 23:39:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.679 23:39:15 -- common/autotest_common.sh@10 -- # set +x 00:05:44.679 [2024-12-13 23:39:15.346201] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:44.679 [2024-12-13 23:39:15.346440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58070 ] 00:05:44.937 [2024-12-13 23:39:15.496996] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:44.937 [2024-12-13 23:39:15.497054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:45.195 [2024-12-13 23:39:15.904838] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:45.195 [2024-12-13 23:39:15.905126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.195 [2024-12-13 23:39:15.908697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.195 [2024-12-13 23:39:15.908720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:47.167 23:39:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.167 23:39:17 -- common/autotest_common.sh@862 -- # return 0 00:05:47.167 23:39:17 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:47.167 23:39:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.167 23:39:17 -- common/autotest_common.sh@10 -- # set +x 00:05:47.167 23:39:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.167 23:39:17 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.167 23:39:17 -- common/autotest_common.sh@650 -- # local es=0 00:05:47.167 23:39:17 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.167 23:39:17 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:47.167 23:39:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.167 23:39:17 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:47.167 23:39:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.167 23:39:17 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.167 23:39:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.167 23:39:17 -- common/autotest_common.sh@10 -- # set +x 00:05:47.167 [2024-12-13 23:39:17.560680] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58052 has claimed it. 
00:05:47.167 request: 00:05:47.167 { 00:05:47.167 "method": "framework_enable_cpumask_locks", 00:05:47.167 "req_id": 1 00:05:47.167 } 00:05:47.167 Got JSON-RPC error response 00:05:47.167 response: 00:05:47.167 { 00:05:47.167 "code": -32603, 00:05:47.167 "message": "Failed to claim CPU core: 2" 00:05:47.167 } 00:05:47.167 23:39:17 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:47.167 23:39:17 -- common/autotest_common.sh@653 -- # es=1 00:05:47.167 23:39:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:47.167 23:39:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:47.167 23:39:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:47.167 23:39:17 -- event/cpu_locks.sh@158 -- # waitforlisten 58052 /var/tmp/spdk.sock 00:05:47.167 23:39:17 -- common/autotest_common.sh@829 -- # '[' -z 58052 ']' 00:05:47.167 23:39:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.167 23:39:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.167 23:39:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.167 23:39:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.167 23:39:17 -- common/autotest_common.sh@10 -- # set +x 00:05:47.167 23:39:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.167 23:39:17 -- common/autotest_common.sh@862 -- # return 0 00:05:47.167 23:39:17 -- event/cpu_locks.sh@159 -- # waitforlisten 58070 /var/tmp/spdk2.sock 00:05:47.167 23:39:17 -- common/autotest_common.sh@829 -- # '[' -z 58070 ']' 00:05:47.167 23:39:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.167 23:39:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.167 23:39:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
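The request/response pair above is the interesting failure mode of this test: target 58070 was started with --disable-cpumask-locks on mask 0x1c, which overlaps core 2 already flocked by 58052 (mask 0x7), so re-enabling locks over RPC must fail. Reproducing it against the second socket (same RPC method as the trace; only the relative rpc.py path is an assumption):

    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # => JSON-RPC error -32603: "Failed to claim CPU core: 2", exactly as logged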
00:05:47.167 23:39:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.167 23:39:17 -- common/autotest_common.sh@10 -- # set +x 00:05:47.425 23:39:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.425 23:39:17 -- common/autotest_common.sh@862 -- # return 0 00:05:47.425 23:39:17 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:47.425 23:39:17 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:47.425 23:39:17 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:47.425 23:39:17 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:47.425 00:05:47.425 real 0m3.554s 00:05:47.425 user 0m1.314s 00:05:47.425 sys 0m0.163s 00:05:47.425 23:39:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:47.425 23:39:17 -- common/autotest_common.sh@10 -- # set +x 00:05:47.425 ************************************ 00:05:47.425 END TEST locking_overlapped_coremask_via_rpc 00:05:47.425 ************************************ 00:05:47.425 23:39:17 -- event/cpu_locks.sh@174 -- # cleanup 00:05:47.425 23:39:17 -- event/cpu_locks.sh@15 -- # [[ -z 58052 ]] 00:05:47.425 23:39:17 -- event/cpu_locks.sh@15 -- # killprocess 58052 00:05:47.425 23:39:17 -- common/autotest_common.sh@936 -- # '[' -z 58052 ']' 00:05:47.425 23:39:17 -- common/autotest_common.sh@940 -- # kill -0 58052 00:05:47.425 23:39:17 -- common/autotest_common.sh@941 -- # uname 00:05:47.425 23:39:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:47.425 23:39:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58052 00:05:47.425 23:39:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:47.425 23:39:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:47.425 23:39:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58052' 00:05:47.425 killing process with pid 58052 00:05:47.425 23:39:18 -- common/autotest_common.sh@955 -- # kill 58052 00:05:47.425 23:39:18 -- common/autotest_common.sh@960 -- # wait 58052 00:05:48.799 23:39:19 -- event/cpu_locks.sh@16 -- # [[ -z 58070 ]] 00:05:48.799 23:39:19 -- event/cpu_locks.sh@16 -- # killprocess 58070 00:05:48.799 23:39:19 -- common/autotest_common.sh@936 -- # '[' -z 58070 ']' 00:05:48.799 23:39:19 -- common/autotest_common.sh@940 -- # kill -0 58070 00:05:48.799 23:39:19 -- common/autotest_common.sh@941 -- # uname 00:05:48.799 23:39:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.799 23:39:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58070 00:05:48.799 23:39:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:48.799 23:39:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:48.799 killing process with pid 58070 00:05:48.799 23:39:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58070' 00:05:48.799 23:39:19 -- common/autotest_common.sh@955 -- # kill 58070 00:05:48.799 23:39:19 -- common/autotest_common.sh@960 -- # wait 58070 00:05:50.176 23:39:20 -- event/cpu_locks.sh@18 -- # rm -f 00:05:50.176 23:39:20 -- event/cpu_locks.sh@1 -- # cleanup 00:05:50.176 23:39:20 -- event/cpu_locks.sh@15 -- # [[ -z 58052 ]] 00:05:50.176 23:39:20 -- event/cpu_locks.sh@15 -- # killprocess 58052 00:05:50.176 23:39:20 -- 
common/autotest_common.sh@936 -- # '[' -z 58052 ']' 00:05:50.176 23:39:20 -- common/autotest_common.sh@940 -- # kill -0 58052 00:05:50.177 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58052) - No such process 00:05:50.177 23:39:20 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58052 is not found' 00:05:50.177 Process with pid 58052 is not found 00:05:50.177 23:39:20 -- event/cpu_locks.sh@16 -- # [[ -z 58070 ]] 00:05:50.177 23:39:20 -- event/cpu_locks.sh@16 -- # killprocess 58070 00:05:50.177 23:39:20 -- common/autotest_common.sh@936 -- # '[' -z 58070 ']' 00:05:50.177 23:39:20 -- common/autotest_common.sh@940 -- # kill -0 58070 00:05:50.177 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58070) - No such process 00:05:50.177 Process with pid 58070 is not found 00:05:50.177 23:39:20 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58070 is not found' 00:05:50.177 23:39:20 -- event/cpu_locks.sh@18 -- # rm -f 00:05:50.177 00:05:50.177 real 0m30.207s 00:05:50.177 user 0m53.246s 00:05:50.177 sys 0m4.277s 00:05:50.177 23:39:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:50.177 23:39:20 -- common/autotest_common.sh@10 -- # set +x 00:05:50.177 ************************************ 00:05:50.177 END TEST cpu_locks 00:05:50.177 ************************************ 00:05:50.177 00:05:50.177 real 0m57.736s 00:05:50.177 user 1m45.980s 00:05:50.177 sys 0m7.117s 00:05:50.177 23:39:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:50.177 ************************************ 00:05:50.177 END TEST event 00:05:50.177 ************************************ 00:05:50.177 23:39:20 -- common/autotest_common.sh@10 -- # set +x 00:05:50.177 23:39:20 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:50.177 23:39:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.177 23:39:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.177 23:39:20 -- common/autotest_common.sh@10 -- # set +x 00:05:50.177 ************************************ 00:05:50.177 START TEST thread 00:05:50.177 ************************************ 00:05:50.177 23:39:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:50.177 * Looking for test storage... 
00:05:50.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:50.177 23:39:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:50.177 23:39:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:50.177 23:39:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:50.177 23:39:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:50.177 23:39:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:50.177 23:39:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:50.177 23:39:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:50.177 23:39:20 -- scripts/common.sh@335 -- # IFS=.-: 00:05:50.177 23:39:20 -- scripts/common.sh@335 -- # read -ra ver1 00:05:50.177 23:39:20 -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.177 23:39:20 -- scripts/common.sh@336 -- # read -ra ver2 00:05:50.177 23:39:20 -- scripts/common.sh@337 -- # local 'op=<' 00:05:50.177 23:39:20 -- scripts/common.sh@339 -- # ver1_l=2 00:05:50.177 23:39:20 -- scripts/common.sh@340 -- # ver2_l=1 00:05:50.177 23:39:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:50.177 23:39:20 -- scripts/common.sh@343 -- # case "$op" in 00:05:50.177 23:39:20 -- scripts/common.sh@344 -- # : 1 00:05:50.177 23:39:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:50.177 23:39:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.177 23:39:20 -- scripts/common.sh@364 -- # decimal 1 00:05:50.177 23:39:20 -- scripts/common.sh@352 -- # local d=1 00:05:50.177 23:39:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.177 23:39:20 -- scripts/common.sh@354 -- # echo 1 00:05:50.177 23:39:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:50.177 23:39:20 -- scripts/common.sh@365 -- # decimal 2 00:05:50.177 23:39:20 -- scripts/common.sh@352 -- # local d=2 00:05:50.177 23:39:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.177 23:39:20 -- scripts/common.sh@354 -- # echo 2 00:05:50.177 23:39:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:50.177 23:39:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:50.177 23:39:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:50.177 23:39:20 -- scripts/common.sh@367 -- # return 0 00:05:50.177 23:39:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.177 23:39:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:50.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.177 --rc genhtml_branch_coverage=1 00:05:50.177 --rc genhtml_function_coverage=1 00:05:50.177 --rc genhtml_legend=1 00:05:50.177 --rc geninfo_all_blocks=1 00:05:50.177 --rc geninfo_unexecuted_blocks=1 00:05:50.177 00:05:50.177 ' 00:05:50.177 23:39:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:50.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.177 --rc genhtml_branch_coverage=1 00:05:50.177 --rc genhtml_function_coverage=1 00:05:50.177 --rc genhtml_legend=1 00:05:50.177 --rc geninfo_all_blocks=1 00:05:50.177 --rc geninfo_unexecuted_blocks=1 00:05:50.177 00:05:50.177 ' 00:05:50.177 23:39:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:50.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.177 --rc genhtml_branch_coverage=1 00:05:50.177 --rc genhtml_function_coverage=1 00:05:50.177 --rc genhtml_legend=1 00:05:50.177 --rc geninfo_all_blocks=1 00:05:50.177 --rc geninfo_unexecuted_blocks=1 00:05:50.177 00:05:50.177 ' 00:05:50.177 23:39:20 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:50.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.177 --rc genhtml_branch_coverage=1 00:05:50.177 --rc genhtml_function_coverage=1 00:05:50.177 --rc genhtml_legend=1 00:05:50.177 --rc geninfo_all_blocks=1 00:05:50.177 --rc geninfo_unexecuted_blocks=1 00:05:50.177 00:05:50.177 ' 00:05:50.177 23:39:20 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:50.177 23:39:20 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:50.177 23:39:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.177 23:39:20 -- common/autotest_common.sh@10 -- # set +x 00:05:50.177 ************************************ 00:05:50.177 START TEST thread_poller_perf 00:05:50.177 ************************************ 00:05:50.177 23:39:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:50.177 [2024-12-13 23:39:20.732542] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:50.177 [2024-12-13 23:39:20.732646] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58233 ] 00:05:50.177 [2024-12-13 23:39:20.880762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.438 [2024-12-13 23:39:21.054554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.438 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:51.817 [2024-12-13T23:39:22.549Z] ====================================== 00:05:51.817 [2024-12-13T23:39:22.549Z] busy:2610741892 (cyc) 00:05:51.817 [2024-12-13T23:39:22.549Z] total_run_count: 293000 00:05:51.817 [2024-12-13T23:39:22.549Z] tsc_hz: 2600000000 (cyc) 00:05:51.817 [2024-12-13T23:39:22.549Z] ====================================== 00:05:51.817 [2024-12-13T23:39:22.549Z] poller_cost: 8910 (cyc), 3426 (nsec) 00:05:51.817 00:05:51.817 real 0m1.617s 00:05:51.817 user 0m1.435s 00:05:51.817 sys 0m0.073s 00:05:51.817 23:39:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:51.817 23:39:22 -- common/autotest_common.sh@10 -- # set +x 00:05:51.817 ************************************ 00:05:51.817 END TEST thread_poller_perf 00:05:51.817 ************************************ 00:05:51.817 23:39:22 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:51.817 23:39:22 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:51.817 23:39:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.817 23:39:22 -- common/autotest_common.sh@10 -- # set +x 00:05:51.818 ************************************ 00:05:51.818 START TEST thread_poller_perf 00:05:51.818 ************************************ 00:05:51.818 23:39:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:51.818 [2024-12-13 23:39:22.399287] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:51.818 [2024-12-13 23:39:22.399405] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58270 ] 00:05:52.079 [2024-12-13 23:39:22.547837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.079 [2024-12-13 23:39:22.737517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.079 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:53.463 [2024-12-13T23:39:24.195Z] ====================================== 00:05:53.463 [2024-12-13T23:39:24.195Z] busy:2604481952 (cyc) 00:05:53.463 [2024-12-13T23:39:24.195Z] total_run_count: 3948000 00:05:53.463 [2024-12-13T23:39:24.195Z] tsc_hz: 2600000000 (cyc) 00:05:53.463 [2024-12-13T23:39:24.195Z] ====================================== 00:05:53.463 [2024-12-13T23:39:24.195Z] poller_cost: 659 (cyc), 253 (nsec) 00:05:53.463 00:05:53.463 real 0m1.622s 00:05:53.463 user 0m1.435s 00:05:53.463 sys 0m0.078s 00:05:53.463 23:39:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:53.463 ************************************ 00:05:53.463 END TEST thread_poller_perf 00:05:53.463 ************************************ 00:05:53.463 23:39:23 -- common/autotest_common.sh@10 -- # set +x 00:05:53.463 23:39:24 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:53.463 ************************************ 00:05:53.463 END TEST thread 00:05:53.463 ************************************ 00:05:53.463 00:05:53.463 real 0m3.460s 00:05:53.463 user 0m2.970s 00:05:53.463 sys 0m0.269s 00:05:53.463 23:39:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:53.463 23:39:24 -- common/autotest_common.sh@10 -- # set +x 00:05:53.463 23:39:24 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:53.463 23:39:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:53.463 23:39:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.463 23:39:24 -- common/autotest_common.sh@10 -- # set +x 00:05:53.463 ************************************ 00:05:53.463 START TEST accel 00:05:53.463 ************************************ 00:05:53.463 23:39:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:53.463 * Looking for test storage... 
00:05:53.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:53.463 23:39:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:53.463 23:39:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:53.463 23:39:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:53.463 23:39:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:53.463 23:39:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:53.463 23:39:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:53.463 23:39:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:53.464 23:39:24 -- scripts/common.sh@335 -- # IFS=.-: 00:05:53.464 23:39:24 -- scripts/common.sh@335 -- # read -ra ver1 00:05:53.464 23:39:24 -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.464 23:39:24 -- scripts/common.sh@336 -- # read -ra ver2 00:05:53.464 23:39:24 -- scripts/common.sh@337 -- # local 'op=<' 00:05:53.464 23:39:24 -- scripts/common.sh@339 -- # ver1_l=2 00:05:53.464 23:39:24 -- scripts/common.sh@340 -- # ver2_l=1 00:05:53.464 23:39:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:53.464 23:39:24 -- scripts/common.sh@343 -- # case "$op" in 00:05:53.464 23:39:24 -- scripts/common.sh@344 -- # : 1 00:05:53.464 23:39:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:53.464 23:39:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.464 23:39:24 -- scripts/common.sh@364 -- # decimal 1 00:05:53.464 23:39:24 -- scripts/common.sh@352 -- # local d=1 00:05:53.464 23:39:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.464 23:39:24 -- scripts/common.sh@354 -- # echo 1 00:05:53.464 23:39:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:53.464 23:39:24 -- scripts/common.sh@365 -- # decimal 2 00:05:53.746 23:39:24 -- scripts/common.sh@352 -- # local d=2 00:05:53.746 23:39:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.746 23:39:24 -- scripts/common.sh@354 -- # echo 2 00:05:53.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:53.746 23:39:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:53.746 23:39:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:53.746 23:39:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:53.746 23:39:24 -- scripts/common.sh@367 -- # return 0 00:05:53.746 23:39:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.746 23:39:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:53.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.746 --rc genhtml_branch_coverage=1 00:05:53.746 --rc genhtml_function_coverage=1 00:05:53.746 --rc genhtml_legend=1 00:05:53.746 --rc geninfo_all_blocks=1 00:05:53.746 --rc geninfo_unexecuted_blocks=1 00:05:53.746 00:05:53.746 ' 00:05:53.746 23:39:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:53.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.746 --rc genhtml_branch_coverage=1 00:05:53.746 --rc genhtml_function_coverage=1 00:05:53.746 --rc genhtml_legend=1 00:05:53.746 --rc geninfo_all_blocks=1 00:05:53.746 --rc geninfo_unexecuted_blocks=1 00:05:53.746 00:05:53.746 ' 00:05:53.746 23:39:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:53.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.747 --rc genhtml_branch_coverage=1 00:05:53.747 --rc genhtml_function_coverage=1 00:05:53.747 --rc genhtml_legend=1 00:05:53.747 --rc geninfo_all_blocks=1 00:05:53.747 --rc geninfo_unexecuted_blocks=1 00:05:53.747 00:05:53.747 ' 00:05:53.747 23:39:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:53.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.747 --rc genhtml_branch_coverage=1 00:05:53.747 --rc genhtml_function_coverage=1 00:05:53.747 --rc genhtml_legend=1 00:05:53.747 --rc geninfo_all_blocks=1 00:05:53.747 --rc geninfo_unexecuted_blocks=1 00:05:53.747 00:05:53.747 ' 00:05:53.747 23:39:24 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:53.747 23:39:24 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:53.747 23:39:24 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:53.747 23:39:24 -- accel/accel.sh@59 -- # spdk_tgt_pid=58352 00:05:53.747 23:39:24 -- accel/accel.sh@60 -- # waitforlisten 58352 00:05:53.747 23:39:24 -- common/autotest_common.sh@829 -- # '[' -z 58352 ']' 00:05:53.747 23:39:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.747 23:39:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.747 23:39:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.747 23:39:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.747 23:39:24 -- common/autotest_common.sh@10 -- # set +x 00:05:53.747 23:39:24 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:53.747 23:39:24 -- accel/accel.sh@58 -- # build_accel_config 00:05:53.747 23:39:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:53.747 23:39:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.747 23:39:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.747 23:39:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:53.747 23:39:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:53.747 23:39:24 -- accel/accel.sh@41 -- # local IFS=, 00:05:53.747 23:39:24 -- accel/accel.sh@42 -- # jq -r . 
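The scripts/common.sh trace just above is a field-wise version compare: lt 1.15 2 splits both strings on IFS=.-:, normalizes each field with decimal, and returns 0 because 1 < 2, which selects the lcov 1.x option spellings (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) seen in the exported LCOV_OPTS. A condensed sketch of the same idea (ver_lt is an illustrative name; the real helpers are lt, cmp_versions and decimal):

    ver_lt() {                        # 0 when $1 < $2, comparing numeric fields
        local IFS=.-: i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                      # equal is not less-than
    }
    ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "use lcov 1.x --rc names"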
00:05:53.747 [2024-12-13 23:39:24.264627] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:53.747 [2024-12-13 23:39:24.264857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58352 ] 00:05:53.747 [2024-12-13 23:39:24.413318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.005 [2024-12-13 23:39:24.562974] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:54.005 [2024-12-13 23:39:24.563134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.582 23:39:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.582 23:39:25 -- common/autotest_common.sh@862 -- # return 0 00:05:54.582 23:39:25 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:54.582 23:39:25 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:54.582 23:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.582 23:39:25 -- common/autotest_common.sh@10 -- # set +x 00:05:54.582 23:39:25 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:54.582 23:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.582 23:39:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # IFS== 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.582 23:39:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.582 23:39:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # IFS== 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.582 23:39:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.582 23:39:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # IFS== 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.582 23:39:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.582 23:39:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # IFS== 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.582 23:39:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.582 23:39:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # IFS== 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.582 23:39:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.582 23:39:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # IFS== 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.582 23:39:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.582 23:39:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # IFS== 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.582 23:39:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.582 23:39:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # IFS== 
00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.582 23:39:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.582 23:39:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # IFS== 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.582 23:39:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.582 23:39:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # IFS== 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.582 23:39:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.582 23:39:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # IFS== 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.582 23:39:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.582 23:39:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # IFS== 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.582 23:39:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.582 23:39:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # IFS== 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.582 23:39:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.582 23:39:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # IFS== 00:05:54.582 23:39:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.582 23:39:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.582 23:39:25 -- accel/accel.sh@67 -- # killprocess 58352 00:05:54.582 23:39:25 -- common/autotest_common.sh@936 -- # '[' -z 58352 ']' 00:05:54.582 23:39:25 -- common/autotest_common.sh@940 -- # kill -0 58352 00:05:54.582 23:39:25 -- common/autotest_common.sh@941 -- # uname 00:05:54.582 23:39:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.582 23:39:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58352 00:05:54.582 killing process with pid 58352 00:05:54.582 23:39:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.582 23:39:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.582 23:39:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58352' 00:05:54.582 23:39:25 -- common/autotest_common.sh@955 -- # kill 58352 00:05:54.582 23:39:25 -- common/autotest_common.sh@960 -- # wait 58352 00:05:55.956 23:39:26 -- accel/accel.sh@68 -- # trap - ERR 00:05:55.956 23:39:26 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:55.956 23:39:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:55.956 23:39:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.956 23:39:26 -- common/autotest_common.sh@10 -- # set +x 00:05:55.956 23:39:26 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:05:55.956 23:39:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:55.956 23:39:26 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.956 23:39:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.956 23:39:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.956 23:39:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:05:55.956 23:39:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.956 23:39:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.956 23:39:26 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.956 23:39:26 -- accel/accel.sh@42 -- # jq -r . 00:05:55.956 23:39:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.956 23:39:26 -- common/autotest_common.sh@10 -- # set +x 00:05:55.956 23:39:26 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:55.956 23:39:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:55.956 23:39:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.956 23:39:26 -- common/autotest_common.sh@10 -- # set +x 00:05:55.956 ************************************ 00:05:55.956 START TEST accel_missing_filename 00:05:55.956 ************************************ 00:05:55.956 23:39:26 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:05:55.956 23:39:26 -- common/autotest_common.sh@650 -- # local es=0 00:05:55.956 23:39:26 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:55.956 23:39:26 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:55.956 23:39:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.956 23:39:26 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:55.956 23:39:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.956 23:39:26 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:05:55.956 23:39:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:55.956 23:39:26 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.956 23:39:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.956 23:39:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.956 23:39:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.956 23:39:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.956 23:39:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.956 23:39:26 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.956 23:39:26 -- accel/accel.sh@42 -- # jq -r . 00:05:55.956 [2024-12-13 23:39:26.424375] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:55.956 [2024-12-13 23:39:26.424596] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58417 ] 00:05:55.956 [2024-12-13 23:39:26.571374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.215 [2024-12-13 23:39:26.719700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.215 [2024-12-13 23:39:26.830371] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:56.472 [2024-12-13 23:39:27.082330] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:56.733 A filename is required. 
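accel_perf aborts here because a compress workload needs an input file (-l), and that failure is exactly what the test asserts: run_test wraps the binary in autotest_common.sh's NOT(), which succeeds only when the wrapped command fails. The es=234 -> 106 -> 1 lines that follow are its status normalization. A simplified sketch of that shape (the real helper goes through valid_exec_arg and a case dispatch rather than the single collapse shown here):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$(( es - 128 ))   # fold 128+N statuses (here 234 -> 106)
        (( es > 0 )) && es=1                   # collapse any remaining failure to 1
        (( !es == 0 ))                         # invert: NOT succeeds iff the command failed
    }
    NOT accel_perf -t 1 -w compress && echo "compress without -l failed, as expected"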
00:05:56.733 23:39:27 -- common/autotest_common.sh@653 -- # es=234 00:05:56.733 23:39:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:56.733 23:39:27 -- common/autotest_common.sh@662 -- # es=106 00:05:56.733 23:39:27 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:56.733 23:39:27 -- common/autotest_common.sh@670 -- # es=1 00:05:56.733 23:39:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:56.733 00:05:56.733 real 0m0.905s 00:05:56.733 user 0m0.703s 00:05:56.733 sys 0m0.123s 00:05:56.733 23:39:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.733 23:39:27 -- common/autotest_common.sh@10 -- # set +x 00:05:56.733 ************************************ 00:05:56.733 END TEST accel_missing_filename 00:05:56.733 ************************************ 00:05:56.733 23:39:27 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:56.733 23:39:27 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:56.733 23:39:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.733 23:39:27 -- common/autotest_common.sh@10 -- # set +x 00:05:56.733 ************************************ 00:05:56.733 START TEST accel_compress_verify 00:05:56.733 ************************************ 00:05:56.733 23:39:27 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:56.733 23:39:27 -- common/autotest_common.sh@650 -- # local es=0 00:05:56.733 23:39:27 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:56.733 23:39:27 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:56.733 23:39:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.733 23:39:27 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:56.733 23:39:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.733 23:39:27 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:56.733 23:39:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:56.733 23:39:27 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.733 23:39:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.733 23:39:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.733 23:39:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.733 23:39:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.733 23:39:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.733 23:39:27 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.733 23:39:27 -- accel/accel.sh@42 -- # jq -r . 00:05:56.733 [2024-12-13 23:39:27.384238] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:56.733 [2024-12-13 23:39:27.384342] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58442 ] 00:05:56.992 [2024-12-13 23:39:27.532467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.992 [2024-12-13 23:39:27.678631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.250 [2024-12-13 23:39:27.789237] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.508 [2024-12-13 23:39:28.048539] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:57.769 00:05:57.769 Compression does not support the verify option, aborting. 00:05:57.769 23:39:28 -- common/autotest_common.sh@653 -- # es=161 00:05:57.769 23:39:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.769 23:39:28 -- common/autotest_common.sh@662 -- # es=33 00:05:57.769 ************************************ 00:05:57.769 END TEST accel_compress_verify 00:05:57.769 ************************************ 00:05:57.769 23:39:28 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:57.769 23:39:28 -- common/autotest_common.sh@670 -- # es=1 00:05:57.769 23:39:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.769 00:05:57.769 real 0m0.910s 00:05:57.769 user 0m0.718s 00:05:57.769 sys 0m0.118s 00:05:57.769 23:39:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.769 23:39:28 -- common/autotest_common.sh@10 -- # set +x 00:05:57.769 23:39:28 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:57.769 23:39:28 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:57.769 23:39:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.769 23:39:28 -- common/autotest_common.sh@10 -- # set +x 00:05:57.769 ************************************ 00:05:57.769 START TEST accel_wrong_workload 00:05:57.769 ************************************ 00:05:57.769 23:39:28 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:05:57.769 23:39:28 -- common/autotest_common.sh@650 -- # local es=0 00:05:57.769 23:39:28 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:57.769 23:39:28 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:57.769 23:39:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.769 23:39:28 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:57.769 23:39:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.769 23:39:28 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:05:57.769 23:39:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:57.769 23:39:28 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.769 23:39:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.769 23:39:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.769 23:39:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.769 23:39:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.769 23:39:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.769 23:39:28 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.769 23:39:28 -- accel/accel.sh@42 -- # jq -r . 
00:05:57.769 Unsupported workload type: foobar 00:05:57.769 [2024-12-13 23:39:28.355327] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:57.769 accel_perf options: 00:05:57.769 [-h help message] 00:05:57.769 [-q queue depth per core] 00:05:57.769 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:57.769 [-T number of threads per core 00:05:57.769 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:57.769 [-t time in seconds] 00:05:57.769 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:57.769 [ dif_verify, , dif_generate, dif_generate_copy 00:05:57.770 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:57.770 [-l for compress/decompress workloads, name of uncompressed input file 00:05:57.770 [-S for crc32c workload, use this seed value (default 0) 00:05:57.770 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:57.770 [-f for fill workload, use this BYTE value (default 255) 00:05:57.770 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:57.770 [-y verify result if this switch is on] 00:05:57.770 [-a tasks to allocate per core (default: same value as -q)] 00:05:57.770 Can be used to spread operations across a wider range of memory. 00:05:57.770 23:39:28 -- common/autotest_common.sh@653 -- # es=1 00:05:57.770 23:39:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.770 23:39:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:57.770 23:39:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.770 00:05:57.770 real 0m0.062s 00:05:57.770 user 0m0.047s 00:05:57.770 sys 0m0.038s 00:05:57.770 23:39:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.770 23:39:28 -- common/autotest_common.sh@10 -- # set +x 00:05:57.770 ************************************ 00:05:57.770 END TEST accel_wrong_workload 00:05:57.770 ************************************ 00:05:57.770 23:39:28 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:57.770 23:39:28 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:57.770 23:39:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.770 23:39:28 -- common/autotest_common.sh@10 -- # set +x 00:05:57.770 ************************************ 00:05:57.770 START TEST accel_negative_buffers 00:05:57.770 ************************************ 00:05:57.770 23:39:28 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:57.770 23:39:28 -- common/autotest_common.sh@650 -- # local es=0 00:05:57.770 23:39:28 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:57.770 23:39:28 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:57.770 23:39:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.770 23:39:28 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:57.770 23:39:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.770 23:39:28 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:05:57.770 23:39:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:57.770 23:39:28 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:57.770 23:39:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.770 23:39:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.770 23:39:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.770 23:39:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.770 23:39:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.770 23:39:28 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.770 23:39:28 -- accel/accel.sh@42 -- # jq -r . 00:05:57.770 -x option must be non-negative. 00:05:57.770 [2024-12-13 23:39:28.475293] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:57.770 accel_perf options: 00:05:57.770 [-h help message] 00:05:57.770 [-q queue depth per core] 00:05:57.770 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:57.770 [-T number of threads per core 00:05:57.770 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:57.770 [-t time in seconds] 00:05:57.770 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:57.770 [ dif_verify, , dif_generate, dif_generate_copy 00:05:57.770 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:57.770 [-l for compress/decompress workloads, name of uncompressed input file 00:05:57.770 [-S for crc32c workload, use this seed value (default 0) 00:05:57.770 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:57.770 [-f for fill workload, use this BYTE value (default 255) 00:05:57.770 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:57.770 [-y verify result if this switch is on] 00:05:57.770 [-a tasks to allocate per core (default: same value as -q)] 00:05:57.770 Can be used to spread operations across a wider range of memory. 
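Both usage dumps above come out of the same early rejection path: app.c logs spdk_app_parse_args failing on 'w' for the unknown workload string and on 'x' for the negative source-buffer count, so accel_perf prints its option table and exits before allocating anything. Rendered as bash guards purely to make the two messages concrete (the actual checks live in the example's C option parser, and these variable names are made up):

    case "$workload" in
        copy|fill|crc32c|copy_crc32c|compare|compress|decompress|dualcast|xor) ;;
        *) echo "Unsupported workload type: $workload"; exit 1 ;;
    esac
    (( xor_sources >= 0 )) || { echo "-x option must be non-negative."; exit 1; }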
00:05:57.770 ************************************ 00:05:57.770 END TEST accel_negative_buffers 00:05:57.770 ************************************ 00:05:57.770 23:39:28 -- common/autotest_common.sh@653 -- # es=1 00:05:57.770 23:39:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.770 23:39:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:57.770 23:39:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.770 00:05:57.770 real 0m0.058s 00:05:57.770 user 0m0.052s 00:05:57.770 sys 0m0.033s 00:05:57.770 23:39:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.770 23:39:28 -- common/autotest_common.sh@10 -- # set +x 00:05:58.029 23:39:28 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:58.029 23:39:28 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:58.029 23:39:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.029 23:39:28 -- common/autotest_common.sh@10 -- # set +x 00:05:58.029 ************************************ 00:05:58.029 START TEST accel_crc32c 00:05:58.029 ************************************ 00:05:58.029 23:39:28 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:58.029 23:39:28 -- accel/accel.sh@16 -- # local accel_opc 00:05:58.029 23:39:28 -- accel/accel.sh@17 -- # local accel_module 00:05:58.029 23:39:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:58.029 23:39:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:58.029 23:39:28 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.029 23:39:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.029 23:39:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.029 23:39:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.029 23:39:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.029 23:39:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.029 23:39:28 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.029 23:39:28 -- accel/accel.sh@42 -- # jq -r . 00:05:58.029 [2024-12-13 23:39:28.581588] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:58.029 [2024-12-13 23:39:28.581688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58515 ] 00:05:58.029 [2024-12-13 23:39:28.728927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.288 [2024-12-13 23:39:28.875478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.187 23:39:30 -- accel/accel.sh@18 -- # out=' 00:06:00.187 SPDK Configuration: 00:06:00.187 Core mask: 0x1 00:06:00.188 00:06:00.188 Accel Perf Configuration: 00:06:00.188 Workload Type: crc32c 00:06:00.188 CRC-32C seed: 32 00:06:00.188 Transfer size: 4096 bytes 00:06:00.188 Vector count 1 00:06:00.188 Module: software 00:06:00.188 Queue depth: 32 00:06:00.188 Allocate depth: 32 00:06:00.188 # threads/core: 1 00:06:00.188 Run time: 1 seconds 00:06:00.188 Verify: Yes 00:06:00.188 00:06:00.188 Running for 1 seconds... 
00:06:00.188 00:06:00.188 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:00.188 ------------------------------------------------------------------------------------ 00:06:00.188 0,0 597600/s 2334 MiB/s 0 0 00:06:00.188 ==================================================================================== 00:06:00.188 Total 597600/s 2334 MiB/s 0 0' 00:06:00.188 23:39:30 -- accel/accel.sh@20 -- # IFS=: 00:06:00.188 23:39:30 -- accel/accel.sh@20 -- # read -r var val 00:06:00.188 23:39:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:00.188 23:39:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.188 23:39:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:00.188 23:39:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.188 23:39:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.188 23:39:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.188 23:39:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.188 23:39:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.188 23:39:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.188 23:39:30 -- accel/accel.sh@42 -- # jq -r . 00:06:00.188 [2024-12-13 23:39:30.494432] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:00.188 [2024-12-13 23:39:30.494550] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58541 ] 00:06:00.188 [2024-12-13 23:39:30.642074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.188 [2024-12-13 23:39:30.799235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.188 23:39:30 -- accel/accel.sh@21 -- # val= 00:06:00.188 23:39:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.188 23:39:30 -- accel/accel.sh@20 -- # IFS=: 00:06:00.188 23:39:30 -- accel/accel.sh@20 -- # read -r var val 00:06:00.188 23:39:30 -- accel/accel.sh@21 -- # val= 00:06:00.188 23:39:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.188 23:39:30 -- accel/accel.sh@20 -- # IFS=: 00:06:00.188 23:39:30 -- accel/accel.sh@20 -- # read -r var val 00:06:00.188 23:39:30 -- accel/accel.sh@21 -- # val=0x1 00:06:00.188 23:39:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.188 23:39:30 -- accel/accel.sh@20 -- # IFS=: 00:06:00.188 23:39:30 -- accel/accel.sh@20 -- # read -r var val 00:06:00.188 23:39:30 -- accel/accel.sh@21 -- # val= 00:06:00.188 23:39:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.188 23:39:30 -- accel/accel.sh@20 -- # IFS=: 00:06:00.188 23:39:30 -- accel/accel.sh@20 -- # read -r var val 00:06:00.188 23:39:30 -- accel/accel.sh@21 -- # val= 00:06:00.188 23:39:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.188 23:39:30 -- accel/accel.sh@20 -- # IFS=: 00:06:00.188 23:39:30 -- accel/accel.sh@20 -- # read -r var val 00:06:00.188 23:39:30 -- accel/accel.sh@21 -- # val=crc32c 00:06:00.188 23:39:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.188 23:39:30 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:00.188 23:39:30 -- accel/accel.sh@20 -- # IFS=: 00:06:00.188 23:39:30 -- accel/accel.sh@20 -- # read -r var val 00:06:00.188 23:39:30 -- accel/accel.sh@21 -- # val=32 00:06:00.188 23:39:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.188 23:39:30 -- accel/accel.sh@20 -- # IFS=: 00:06:00.188 23:39:30 -- accel/accel.sh@20 -- # read -r var val 00:06:00.188 23:39:30 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:00.188 23:39:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.188 23:39:30 -- accel/accel.sh@20 -- # IFS=: 00:06:00.188 23:39:30 -- accel/accel.sh@20 -- # read -r var val 00:06:00.188 23:39:30 -- accel/accel.sh@21 -- # val= 00:06:00.188 23:39:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.188 23:39:30 -- accel/accel.sh@20 -- # IFS=: 00:06:00.446 23:39:30 -- accel/accel.sh@20 -- # read -r var val 00:06:00.446 23:39:30 -- accel/accel.sh@21 -- # val=software 00:06:00.446 23:39:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.446 23:39:30 -- accel/accel.sh@23 -- # accel_module=software 00:06:00.446 23:39:30 -- accel/accel.sh@20 -- # IFS=: 00:06:00.446 23:39:30 -- accel/accel.sh@20 -- # read -r var val 00:06:00.446 23:39:30 -- accel/accel.sh@21 -- # val=32 00:06:00.446 23:39:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.446 23:39:30 -- accel/accel.sh@20 -- # IFS=: 00:06:00.446 23:39:30 -- accel/accel.sh@20 -- # read -r var val 00:06:00.446 23:39:30 -- accel/accel.sh@21 -- # val=32 00:06:00.446 23:39:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.446 23:39:30 -- accel/accel.sh@20 -- # IFS=: 00:06:00.446 23:39:30 -- accel/accel.sh@20 -- # read -r var val 00:06:00.446 23:39:30 -- accel/accel.sh@21 -- # val=1 00:06:00.446 23:39:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.446 23:39:30 -- accel/accel.sh@20 -- # IFS=: 00:06:00.446 23:39:30 -- accel/accel.sh@20 -- # read -r var val 00:06:00.446 23:39:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:00.446 23:39:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.446 23:39:30 -- accel/accel.sh@20 -- # IFS=: 00:06:00.446 23:39:30 -- accel/accel.sh@20 -- # read -r var val 00:06:00.446 23:39:30 -- accel/accel.sh@21 -- # val=Yes 00:06:00.446 23:39:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.446 23:39:30 -- accel/accel.sh@20 -- # IFS=: 00:06:00.446 23:39:30 -- accel/accel.sh@20 -- # read -r var val 00:06:00.446 23:39:30 -- accel/accel.sh@21 -- # val= 00:06:00.446 23:39:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.446 23:39:30 -- accel/accel.sh@20 -- # IFS=: 00:06:00.446 23:39:30 -- accel/accel.sh@20 -- # read -r var val 00:06:00.446 23:39:30 -- accel/accel.sh@21 -- # val= 00:06:00.446 23:39:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.446 23:39:30 -- accel/accel.sh@20 -- # IFS=: 00:06:00.446 23:39:30 -- accel/accel.sh@20 -- # read -r var val 00:06:01.822 23:39:32 -- accel/accel.sh@21 -- # val= 00:06:01.822 23:39:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.822 23:39:32 -- accel/accel.sh@20 -- # IFS=: 00:06:01.822 23:39:32 -- accel/accel.sh@20 -- # read -r var val 00:06:01.822 23:39:32 -- accel/accel.sh@21 -- # val= 00:06:01.822 23:39:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.822 23:39:32 -- accel/accel.sh@20 -- # IFS=: 00:06:01.822 23:39:32 -- accel/accel.sh@20 -- # read -r var val 00:06:01.822 23:39:32 -- accel/accel.sh@21 -- # val= 00:06:01.822 23:39:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.822 23:39:32 -- accel/accel.sh@20 -- # IFS=: 00:06:01.822 23:39:32 -- accel/accel.sh@20 -- # read -r var val 00:06:01.822 23:39:32 -- accel/accel.sh@21 -- # val= 00:06:01.822 23:39:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.822 23:39:32 -- accel/accel.sh@20 -- # IFS=: 00:06:01.822 23:39:32 -- accel/accel.sh@20 -- # read -r var val 00:06:01.822 23:39:32 -- accel/accel.sh@21 -- # val= 00:06:01.822 23:39:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.822 23:39:32 -- accel/accel.sh@20 -- # IFS=: 00:06:01.822 23:39:32 -- 
accel/accel.sh@20 -- # read -r var val 00:06:01.822 23:39:32 -- accel/accel.sh@21 -- # val= 00:06:01.822 23:39:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.822 23:39:32 -- accel/accel.sh@20 -- # IFS=: 00:06:01.822 23:39:32 -- accel/accel.sh@20 -- # read -r var val 00:06:01.822 23:39:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:01.822 23:39:32 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:01.822 23:39:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.822 00:06:01.822 real 0m3.833s 00:06:01.822 user 0m3.411s 00:06:01.822 sys 0m0.216s 00:06:01.822 23:39:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.822 ************************************ 00:06:01.822 END TEST accel_crc32c 00:06:01.822 ************************************ 00:06:01.822 23:39:32 -- common/autotest_common.sh@10 -- # set +x 00:06:01.822 23:39:32 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:01.822 23:39:32 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:01.822 23:39:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.822 23:39:32 -- common/autotest_common.sh@10 -- # set +x 00:06:01.822 ************************************ 00:06:01.822 START TEST accel_crc32c_C2 00:06:01.822 ************************************ 00:06:01.822 23:39:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:01.822 23:39:32 -- accel/accel.sh@16 -- # local accel_opc 00:06:01.822 23:39:32 -- accel/accel.sh@17 -- # local accel_module 00:06:01.822 23:39:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:01.822 23:39:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:01.822 23:39:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.822 23:39:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:01.822 23:39:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.822 23:39:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.822 23:39:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:01.822 23:39:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:01.822 23:39:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:01.822 23:39:32 -- accel/accel.sh@42 -- # jq -r . 00:06:01.822 [2024-12-13 23:39:32.484983] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:01.822 [2024-12-13 23:39:32.485090] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58576 ] 00:06:02.081 [2024-12-13 23:39:32.633606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.081 [2024-12-13 23:39:32.791713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.979 23:39:34 -- accel/accel.sh@18 -- # out=' 00:06:03.979 SPDK Configuration: 00:06:03.979 Core mask: 0x1 00:06:03.979 00:06:03.979 Accel Perf Configuration: 00:06:03.979 Workload Type: crc32c 00:06:03.979 CRC-32C seed: 0 00:06:03.979 Transfer size: 4096 bytes 00:06:03.979 Vector count 2 00:06:03.979 Module: software 00:06:03.979 Queue depth: 32 00:06:03.979 Allocate depth: 32 00:06:03.979 # threads/core: 1 00:06:03.979 Run time: 1 seconds 00:06:03.979 Verify: Yes 00:06:03.979 00:06:03.979 Running for 1 seconds... 
00:06:03.979 00:06:03.979 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:03.979 ------------------------------------------------------------------------------------ 00:06:03.979 0,0 507424/s 3964 MiB/s 0 0 00:06:03.979 ==================================================================================== 00:06:03.979 Total 507424/s 1982 MiB/s 0 0' 00:06:03.979 23:39:34 -- accel/accel.sh@20 -- # IFS=: 00:06:03.979 23:39:34 -- accel/accel.sh@20 -- # read -r var val 00:06:03.979 23:39:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:03.979 23:39:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:03.979 23:39:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.979 23:39:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.979 23:39:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.979 23:39:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.979 23:39:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:03.979 23:39:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:03.979 23:39:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:03.979 23:39:34 -- accel/accel.sh@42 -- # jq -r . 00:06:03.979 [2024-12-13 23:39:34.411950] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:03.979 [2024-12-13 23:39:34.412324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58602 ] 00:06:03.979 [2024-12-13 23:39:34.558557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.238 [2024-12-13 23:39:34.710022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.238 23:39:34 -- accel/accel.sh@21 -- # val= 00:06:04.238 23:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # IFS=: 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # read -r var val 00:06:04.238 23:39:34 -- accel/accel.sh@21 -- # val= 00:06:04.238 23:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # IFS=: 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # read -r var val 00:06:04.238 23:39:34 -- accel/accel.sh@21 -- # val=0x1 00:06:04.238 23:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # IFS=: 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # read -r var val 00:06:04.238 23:39:34 -- accel/accel.sh@21 -- # val= 00:06:04.238 23:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # IFS=: 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # read -r var val 00:06:04.238 23:39:34 -- accel/accel.sh@21 -- # val= 00:06:04.238 23:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # IFS=: 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # read -r var val 00:06:04.238 23:39:34 -- accel/accel.sh@21 -- # val=crc32c 00:06:04.238 23:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.238 23:39:34 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # IFS=: 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # read -r var val 00:06:04.238 23:39:34 -- accel/accel.sh@21 -- # val=0 00:06:04.238 23:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # IFS=: 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # read -r var val 00:06:04.238 23:39:34 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:04.238 23:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # IFS=: 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # read -r var val 00:06:04.238 23:39:34 -- accel/accel.sh@21 -- # val= 00:06:04.238 23:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # IFS=: 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # read -r var val 00:06:04.238 23:39:34 -- accel/accel.sh@21 -- # val=software 00:06:04.238 23:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.238 23:39:34 -- accel/accel.sh@23 -- # accel_module=software 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # IFS=: 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # read -r var val 00:06:04.238 23:39:34 -- accel/accel.sh@21 -- # val=32 00:06:04.238 23:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # IFS=: 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # read -r var val 00:06:04.238 23:39:34 -- accel/accel.sh@21 -- # val=32 00:06:04.238 23:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # IFS=: 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # read -r var val 00:06:04.238 23:39:34 -- accel/accel.sh@21 -- # val=1 00:06:04.238 23:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # IFS=: 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # read -r var val 00:06:04.238 23:39:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:04.238 23:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # IFS=: 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # read -r var val 00:06:04.238 23:39:34 -- accel/accel.sh@21 -- # val=Yes 00:06:04.238 23:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # IFS=: 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # read -r var val 00:06:04.238 23:39:34 -- accel/accel.sh@21 -- # val= 00:06:04.238 23:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # IFS=: 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # read -r var val 00:06:04.238 23:39:34 -- accel/accel.sh@21 -- # val= 00:06:04.238 23:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # IFS=: 00:06:04.238 23:39:34 -- accel/accel.sh@20 -- # read -r var val 00:06:05.613 23:39:36 -- accel/accel.sh@21 -- # val= 00:06:05.613 23:39:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.613 23:39:36 -- accel/accel.sh@20 -- # IFS=: 00:06:05.613 23:39:36 -- accel/accel.sh@20 -- # read -r var val 00:06:05.613 23:39:36 -- accel/accel.sh@21 -- # val= 00:06:05.613 23:39:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.613 23:39:36 -- accel/accel.sh@20 -- # IFS=: 00:06:05.613 23:39:36 -- accel/accel.sh@20 -- # read -r var val 00:06:05.613 23:39:36 -- accel/accel.sh@21 -- # val= 00:06:05.613 23:39:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.613 23:39:36 -- accel/accel.sh@20 -- # IFS=: 00:06:05.613 23:39:36 -- accel/accel.sh@20 -- # read -r var val 00:06:05.613 23:39:36 -- accel/accel.sh@21 -- # val= 00:06:05.613 23:39:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.613 23:39:36 -- accel/accel.sh@20 -- # IFS=: 00:06:05.613 23:39:36 -- accel/accel.sh@20 -- # read -r var val 00:06:05.613 23:39:36 -- accel/accel.sh@21 -- # val= 00:06:05.613 23:39:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.613 23:39:36 -- accel/accel.sh@20 -- # IFS=: 00:06:05.613 23:39:36 -- 
accel/accel.sh@20 -- # read -r var val 00:06:05.613 23:39:36 -- accel/accel.sh@21 -- # val= 00:06:05.613 23:39:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.613 23:39:36 -- accel/accel.sh@20 -- # IFS=: 00:06:05.613 23:39:36 -- accel/accel.sh@20 -- # read -r var val 00:06:05.613 23:39:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:05.613 23:39:36 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:05.613 23:39:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.613 00:06:05.613 real 0m3.858s 00:06:05.613 user 0m3.414s 00:06:05.613 sys 0m0.239s 00:06:05.613 23:39:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.613 23:39:36 -- common/autotest_common.sh@10 -- # set +x 00:06:05.613 ************************************ 00:06:05.614 END TEST accel_crc32c_C2 00:06:05.614 ************************************ 00:06:05.614 23:39:36 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:05.614 23:39:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:05.614 23:39:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.614 23:39:36 -- common/autotest_common.sh@10 -- # set +x 00:06:05.875 ************************************ 00:06:05.875 START TEST accel_copy 00:06:05.875 ************************************ 00:06:05.875 23:39:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:05.875 23:39:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:05.875 23:39:36 -- accel/accel.sh@17 -- # local accel_module 00:06:05.875 23:39:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:05.875 23:39:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:05.875 23:39:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.875 23:39:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:05.875 23:39:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.875 23:39:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.875 23:39:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:05.875 23:39:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:05.875 23:39:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:05.875 23:39:36 -- accel/accel.sh@42 -- # jq -r . 00:06:05.875 [2024-12-13 23:39:36.380555] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:05.875 [2024-12-13 23:39:36.380954] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58643 ] 00:06:05.875 [2024-12-13 23:39:36.527639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.136 [2024-12-13 23:39:36.711503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.044 23:39:38 -- accel/accel.sh@18 -- # out=' 00:06:08.044 SPDK Configuration: 00:06:08.044 Core mask: 0x1 00:06:08.044 00:06:08.044 Accel Perf Configuration: 00:06:08.044 Workload Type: copy 00:06:08.044 Transfer size: 4096 bytes 00:06:08.044 Vector count 1 00:06:08.044 Module: software 00:06:08.044 Queue depth: 32 00:06:08.044 Allocate depth: 32 00:06:08.044 # threads/core: 1 00:06:08.044 Run time: 1 seconds 00:06:08.044 Verify: Yes 00:06:08.044 00:06:08.044 Running for 1 seconds... 
00:06:08.044 00:06:08.044 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:08.044 ------------------------------------------------------------------------------------ 00:06:08.044 0,0 286816/s 1120 MiB/s 0 0 00:06:08.044 ==================================================================================== 00:06:08.044 Total 286816/s 1120 MiB/s 0 0' 00:06:08.044 23:39:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.044 23:39:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.044 23:39:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:08.044 23:39:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:08.044 23:39:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.044 23:39:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.044 23:39:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.044 23:39:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.044 23:39:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.044 23:39:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.044 23:39:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.044 23:39:38 -- accel/accel.sh@42 -- # jq -r . 00:06:08.044 [2024-12-13 23:39:38.488934] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.044 [2024-12-13 23:39:38.489039] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58669 ] 00:06:08.044 [2024-12-13 23:39:38.639199] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.304 [2024-12-13 23:39:38.817685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.304 23:39:38 -- accel/accel.sh@21 -- # val= 00:06:08.304 23:39:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.304 23:39:38 -- accel/accel.sh@21 -- # val= 00:06:08.304 23:39:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.304 23:39:38 -- accel/accel.sh@21 -- # val=0x1 00:06:08.304 23:39:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.304 23:39:38 -- accel/accel.sh@21 -- # val= 00:06:08.304 23:39:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.304 23:39:38 -- accel/accel.sh@21 -- # val= 00:06:08.304 23:39:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.304 23:39:38 -- accel/accel.sh@21 -- # val=copy 00:06:08.304 23:39:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.304 23:39:38 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.304 23:39:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:08.304 23:39:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.304 23:39:38 -- 
accel/accel.sh@21 -- # val= 00:06:08.304 23:39:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.304 23:39:38 -- accel/accel.sh@21 -- # val=software 00:06:08.304 23:39:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.304 23:39:38 -- accel/accel.sh@23 -- # accel_module=software 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.304 23:39:38 -- accel/accel.sh@21 -- # val=32 00:06:08.304 23:39:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.304 23:39:38 -- accel/accel.sh@21 -- # val=32 00:06:08.304 23:39:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.304 23:39:38 -- accel/accel.sh@21 -- # val=1 00:06:08.304 23:39:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.304 23:39:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:08.304 23:39:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.304 23:39:38 -- accel/accel.sh@21 -- # val=Yes 00:06:08.304 23:39:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.304 23:39:38 -- accel/accel.sh@21 -- # val= 00:06:08.304 23:39:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.304 23:39:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.304 23:39:38 -- accel/accel.sh@21 -- # val= 00:06:08.305 23:39:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.305 23:39:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.305 23:39:38 -- accel/accel.sh@20 -- # read -r var val 00:06:10.241 23:39:40 -- accel/accel.sh@21 -- # val= 00:06:10.241 23:39:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.241 23:39:40 -- accel/accel.sh@20 -- # IFS=: 00:06:10.241 23:39:40 -- accel/accel.sh@20 -- # read -r var val 00:06:10.241 23:39:40 -- accel/accel.sh@21 -- # val= 00:06:10.241 23:39:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.241 23:39:40 -- accel/accel.sh@20 -- # IFS=: 00:06:10.241 23:39:40 -- accel/accel.sh@20 -- # read -r var val 00:06:10.241 23:39:40 -- accel/accel.sh@21 -- # val= 00:06:10.241 23:39:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.241 23:39:40 -- accel/accel.sh@20 -- # IFS=: 00:06:10.241 23:39:40 -- accel/accel.sh@20 -- # read -r var val 00:06:10.241 23:39:40 -- accel/accel.sh@21 -- # val= 00:06:10.241 23:39:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.241 23:39:40 -- accel/accel.sh@20 -- # IFS=: 00:06:10.241 23:39:40 -- accel/accel.sh@20 -- # read -r var val 00:06:10.241 23:39:40 -- accel/accel.sh@21 -- # val= 00:06:10.241 23:39:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.241 23:39:40 -- accel/accel.sh@20 -- # IFS=: 00:06:10.241 23:39:40 -- accel/accel.sh@20 -- # read -r var val 00:06:10.241 23:39:40 -- accel/accel.sh@21 -- # val= 00:06:10.241 23:39:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.241 23:39:40 -- accel/accel.sh@20 -- # IFS=: 00:06:10.241 23:39:40 -- 
accel/accel.sh@20 -- # read -r var val 00:06:10.241 23:39:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:10.241 23:39:40 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:10.241 23:39:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.241 00:06:10.241 real 0m4.225s 00:06:10.241 user 0m3.771s 00:06:10.241 sys 0m0.249s 00:06:10.241 23:39:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.241 23:39:40 -- common/autotest_common.sh@10 -- # set +x 00:06:10.241 ************************************ 00:06:10.241 END TEST accel_copy 00:06:10.241 ************************************ 00:06:10.241 23:39:40 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:10.241 23:39:40 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:10.241 23:39:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.241 23:39:40 -- common/autotest_common.sh@10 -- # set +x 00:06:10.241 ************************************ 00:06:10.241 START TEST accel_fill 00:06:10.241 ************************************ 00:06:10.241 23:39:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:10.241 23:39:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:10.241 23:39:40 -- accel/accel.sh@17 -- # local accel_module 00:06:10.241 23:39:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:10.241 23:39:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:10.241 23:39:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.241 23:39:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.241 23:39:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.241 23:39:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.241 23:39:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.242 23:39:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.242 23:39:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.242 23:39:40 -- accel/accel.sh@42 -- # jq -r . 00:06:10.242 [2024-12-13 23:39:40.658323] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:10.242 [2024-12-13 23:39:40.658586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58716 ] 00:06:10.242 [2024-12-13 23:39:40.807670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.504 [2024-12-13 23:39:40.984647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.419 23:39:42 -- accel/accel.sh@18 -- # out=' 00:06:12.419 SPDK Configuration: 00:06:12.419 Core mask: 0x1 00:06:12.419 00:06:12.419 Accel Perf Configuration: 00:06:12.419 Workload Type: fill 00:06:12.419 Fill pattern: 0x80 00:06:12.419 Transfer size: 4096 bytes 00:06:12.419 Vector count 1 00:06:12.419 Module: software 00:06:12.419 Queue depth: 64 00:06:12.419 Allocate depth: 64 00:06:12.419 # threads/core: 1 00:06:12.419 Run time: 1 seconds 00:06:12.419 Verify: Yes 00:06:12.419 00:06:12.419 Running for 1 seconds... 
00:06:12.419 00:06:12.419 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:12.419 ------------------------------------------------------------------------------------ 00:06:12.419 0,0 456512/s 1783 MiB/s 0 0 00:06:12.419 ==================================================================================== 00:06:12.419 Total 456512/s 1783 MiB/s 0 0' 00:06:12.419 23:39:42 -- accel/accel.sh@20 -- # IFS=: 00:06:12.419 23:39:42 -- accel/accel.sh@20 -- # read -r var val 00:06:12.419 23:39:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:12.419 23:39:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:12.419 23:39:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.419 23:39:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.419 23:39:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.419 23:39:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.419 23:39:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.419 23:39:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.419 23:39:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.419 23:39:42 -- accel/accel.sh@42 -- # jq -r . 00:06:12.419 [2024-12-13 23:39:42.763785] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:12.419 [2024-12-13 23:39:42.763896] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58742 ] 00:06:12.419 [2024-12-13 23:39:42.912572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.419 [2024-12-13 23:39:43.088188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.680 23:39:43 -- accel/accel.sh@21 -- # val= 00:06:12.680 23:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # IFS=: 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # read -r var val 00:06:12.680 23:39:43 -- accel/accel.sh@21 -- # val= 00:06:12.680 23:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # IFS=: 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # read -r var val 00:06:12.680 23:39:43 -- accel/accel.sh@21 -- # val=0x1 00:06:12.680 23:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # IFS=: 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # read -r var val 00:06:12.680 23:39:43 -- accel/accel.sh@21 -- # val= 00:06:12.680 23:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # IFS=: 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # read -r var val 00:06:12.680 23:39:43 -- accel/accel.sh@21 -- # val= 00:06:12.680 23:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # IFS=: 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # read -r var val 00:06:12.680 23:39:43 -- accel/accel.sh@21 -- # val=fill 00:06:12.680 23:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.680 23:39:43 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # IFS=: 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # read -r var val 00:06:12.680 23:39:43 -- accel/accel.sh@21 -- # val=0x80 00:06:12.680 23:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # IFS=: 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # read -r var val 
00:06:12.680 23:39:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:12.680 23:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # IFS=: 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # read -r var val 00:06:12.680 23:39:43 -- accel/accel.sh@21 -- # val= 00:06:12.680 23:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # IFS=: 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # read -r var val 00:06:12.680 23:39:43 -- accel/accel.sh@21 -- # val=software 00:06:12.680 23:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.680 23:39:43 -- accel/accel.sh@23 -- # accel_module=software 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # IFS=: 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # read -r var val 00:06:12.680 23:39:43 -- accel/accel.sh@21 -- # val=64 00:06:12.680 23:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # IFS=: 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # read -r var val 00:06:12.680 23:39:43 -- accel/accel.sh@21 -- # val=64 00:06:12.680 23:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # IFS=: 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # read -r var val 00:06:12.680 23:39:43 -- accel/accel.sh@21 -- # val=1 00:06:12.680 23:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # IFS=: 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # read -r var val 00:06:12.680 23:39:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:12.680 23:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # IFS=: 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # read -r var val 00:06:12.680 23:39:43 -- accel/accel.sh@21 -- # val=Yes 00:06:12.680 23:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # IFS=: 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # read -r var val 00:06:12.680 23:39:43 -- accel/accel.sh@21 -- # val= 00:06:12.680 23:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # IFS=: 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # read -r var val 00:06:12.680 23:39:43 -- accel/accel.sh@21 -- # val= 00:06:12.680 23:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # IFS=: 00:06:12.680 23:39:43 -- accel/accel.sh@20 -- # read -r var val 00:06:14.591 23:39:44 -- accel/accel.sh@21 -- # val= 00:06:14.591 23:39:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.591 23:39:44 -- accel/accel.sh@20 -- # IFS=: 00:06:14.591 23:39:44 -- accel/accel.sh@20 -- # read -r var val 00:06:14.591 23:39:44 -- accel/accel.sh@21 -- # val= 00:06:14.591 23:39:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.591 23:39:44 -- accel/accel.sh@20 -- # IFS=: 00:06:14.591 23:39:44 -- accel/accel.sh@20 -- # read -r var val 00:06:14.591 23:39:44 -- accel/accel.sh@21 -- # val= 00:06:14.591 23:39:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.591 23:39:44 -- accel/accel.sh@20 -- # IFS=: 00:06:14.591 23:39:44 -- accel/accel.sh@20 -- # read -r var val 00:06:14.591 23:39:44 -- accel/accel.sh@21 -- # val= 00:06:14.591 23:39:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.591 23:39:44 -- accel/accel.sh@20 -- # IFS=: 00:06:14.591 23:39:44 -- accel/accel.sh@20 -- # read -r var val 00:06:14.591 23:39:44 -- accel/accel.sh@21 -- # val= 00:06:14.591 23:39:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.591 23:39:44 -- accel/accel.sh@20 -- # IFS=: 
00:06:14.591 23:39:44 -- accel/accel.sh@20 -- # read -r var val 00:06:14.591 23:39:44 -- accel/accel.sh@21 -- # val= 00:06:14.591 23:39:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.591 23:39:44 -- accel/accel.sh@20 -- # IFS=: 00:06:14.591 23:39:44 -- accel/accel.sh@20 -- # read -r var val 00:06:14.591 23:39:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:14.591 23:39:44 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:14.591 23:39:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.591 00:06:14.591 real 0m4.208s 00:06:14.591 user 0m3.749s 00:06:14.591 sys 0m0.248s 00:06:14.591 23:39:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.591 ************************************ 00:06:14.591 END TEST accel_fill 00:06:14.591 ************************************ 00:06:14.591 23:39:44 -- common/autotest_common.sh@10 -- # set +x 00:06:14.591 23:39:44 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:14.591 23:39:44 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:14.591 23:39:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.591 23:39:44 -- common/autotest_common.sh@10 -- # set +x 00:06:14.591 ************************************ 00:06:14.591 START TEST accel_copy_crc32c 00:06:14.591 ************************************ 00:06:14.591 23:39:44 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:14.591 23:39:44 -- accel/accel.sh@16 -- # local accel_opc 00:06:14.591 23:39:44 -- accel/accel.sh@17 -- # local accel_module 00:06:14.591 23:39:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:14.591 23:39:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:14.591 23:39:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.591 23:39:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.591 23:39:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.591 23:39:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.591 23:39:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.591 23:39:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.591 23:39:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.591 23:39:44 -- accel/accel.sh@42 -- # jq -r . 00:06:14.591 [2024-12-13 23:39:44.924079] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:14.592 [2024-12-13 23:39:44.924699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58783 ] 00:06:14.592 [2024-12-13 23:39:45.073647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.592 [2024-12-13 23:39:45.212831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.493 23:39:46 -- accel/accel.sh@18 -- # out=' 00:06:16.493 SPDK Configuration: 00:06:16.493 Core mask: 0x1 00:06:16.493 00:06:16.493 Accel Perf Configuration: 00:06:16.493 Workload Type: copy_crc32c 00:06:16.493 CRC-32C seed: 0 00:06:16.493 Vector size: 4096 bytes 00:06:16.493 Transfer size: 4096 bytes 00:06:16.493 Vector count 1 00:06:16.493 Module: software 00:06:16.493 Queue depth: 32 00:06:16.493 Allocate depth: 32 00:06:16.493 # threads/core: 1 00:06:16.493 Run time: 1 seconds 00:06:16.493 Verify: Yes 00:06:16.493 00:06:16.493 Running for 1 seconds... 
00:06:16.493 00:06:16.493 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:16.493 ------------------------------------------------------------------------------------ 00:06:16.493 0,0 309472/s 1208 MiB/s 0 0 00:06:16.493 ==================================================================================== 00:06:16.493 Total 309472/s 1208 MiB/s 0 0' 00:06:16.493 23:39:46 -- accel/accel.sh@20 -- # IFS=: 00:06:16.493 23:39:46 -- accel/accel.sh@20 -- # read -r var val 00:06:16.493 23:39:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:16.493 23:39:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:16.493 23:39:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.493 23:39:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.493 23:39:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.493 23:39:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.493 23:39:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.493 23:39:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.493 23:39:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.493 23:39:46 -- accel/accel.sh@42 -- # jq -r . 00:06:16.493 [2024-12-13 23:39:46.829443] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:16.493 [2024-12-13 23:39:46.829538] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58809 ] 00:06:16.493 [2024-12-13 23:39:46.964108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.493 [2024-12-13 23:39:47.103587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.493 23:39:47 -- accel/accel.sh@21 -- # val= 00:06:16.493 23:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.493 23:39:47 -- accel/accel.sh@20 -- # IFS=: 00:06:16.493 23:39:47 -- accel/accel.sh@20 -- # read -r var val 00:06:16.493 23:39:47 -- accel/accel.sh@21 -- # val= 00:06:16.493 23:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.493 23:39:47 -- accel/accel.sh@20 -- # IFS=: 00:06:16.493 23:39:47 -- accel/accel.sh@20 -- # read -r var val 00:06:16.493 23:39:47 -- accel/accel.sh@21 -- # val=0x1 00:06:16.493 23:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.493 23:39:47 -- accel/accel.sh@20 -- # IFS=: 00:06:16.493 23:39:47 -- accel/accel.sh@20 -- # read -r var val 00:06:16.493 23:39:47 -- accel/accel.sh@21 -- # val= 00:06:16.493 23:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.493 23:39:47 -- accel/accel.sh@20 -- # IFS=: 00:06:16.493 23:39:47 -- accel/accel.sh@20 -- # read -r var val 00:06:16.493 23:39:47 -- accel/accel.sh@21 -- # val= 00:06:16.493 23:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.493 23:39:47 -- accel/accel.sh@20 -- # IFS=: 00:06:16.493 23:39:47 -- accel/accel.sh@20 -- # read -r var val 00:06:16.493 23:39:47 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:16.493 23:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.493 23:39:47 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:16.493 23:39:47 -- accel/accel.sh@20 -- # IFS=: 00:06:16.493 23:39:47 -- accel/accel.sh@20 -- # read -r var val 00:06:16.493 23:39:47 -- accel/accel.sh@21 -- # val=0 00:06:16.493 23:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.493 23:39:47 -- accel/accel.sh@20 -- # IFS=: 00:06:16.493 23:39:47 -- accel/accel.sh@20 -- # read -r var val 00:06:16.493 
23:39:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:16.493 23:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.493 23:39:47 -- accel/accel.sh@20 -- # IFS=: 00:06:16.493 23:39:47 -- accel/accel.sh@20 -- # read -r var val 00:06:16.493 23:39:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:16.493 23:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.493 23:39:47 -- accel/accel.sh@20 -- # IFS=: 00:06:16.493 23:39:47 -- accel/accel.sh@20 -- # read -r var val 00:06:16.493 23:39:47 -- accel/accel.sh@21 -- # val= 00:06:16.493 23:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.493 23:39:47 -- accel/accel.sh@20 -- # IFS=: 00:06:16.493 23:39:47 -- accel/accel.sh@20 -- # read -r var val 00:06:16.493 23:39:47 -- accel/accel.sh@21 -- # val=software 00:06:16.493 23:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.493 23:39:47 -- accel/accel.sh@23 -- # accel_module=software 00:06:16.493 23:39:47 -- accel/accel.sh@20 -- # IFS=: 00:06:16.493 23:39:47 -- accel/accel.sh@20 -- # read -r var val 00:06:16.752 23:39:47 -- accel/accel.sh@21 -- # val=32 00:06:16.752 23:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.752 23:39:47 -- accel/accel.sh@20 -- # IFS=: 00:06:16.752 23:39:47 -- accel/accel.sh@20 -- # read -r var val 00:06:16.752 23:39:47 -- accel/accel.sh@21 -- # val=32 00:06:16.752 23:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.752 23:39:47 -- accel/accel.sh@20 -- # IFS=: 00:06:16.752 23:39:47 -- accel/accel.sh@20 -- # read -r var val 00:06:16.752 23:39:47 -- accel/accel.sh@21 -- # val=1 00:06:16.752 23:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.752 23:39:47 -- accel/accel.sh@20 -- # IFS=: 00:06:16.752 23:39:47 -- accel/accel.sh@20 -- # read -r var val 00:06:16.752 23:39:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:16.752 23:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.752 23:39:47 -- accel/accel.sh@20 -- # IFS=: 00:06:16.752 23:39:47 -- accel/accel.sh@20 -- # read -r var val 00:06:16.752 23:39:47 -- accel/accel.sh@21 -- # val=Yes 00:06:16.752 23:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.752 23:39:47 -- accel/accel.sh@20 -- # IFS=: 00:06:16.752 23:39:47 -- accel/accel.sh@20 -- # read -r var val 00:06:16.752 23:39:47 -- accel/accel.sh@21 -- # val= 00:06:16.752 23:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.752 23:39:47 -- accel/accel.sh@20 -- # IFS=: 00:06:16.752 23:39:47 -- accel/accel.sh@20 -- # read -r var val 00:06:16.752 23:39:47 -- accel/accel.sh@21 -- # val= 00:06:16.752 23:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.752 23:39:47 -- accel/accel.sh@20 -- # IFS=: 00:06:16.752 23:39:47 -- accel/accel.sh@20 -- # read -r var val 00:06:18.125 23:39:48 -- accel/accel.sh@21 -- # val= 00:06:18.125 23:39:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.125 23:39:48 -- accel/accel.sh@20 -- # IFS=: 00:06:18.125 23:39:48 -- accel/accel.sh@20 -- # read -r var val 00:06:18.125 23:39:48 -- accel/accel.sh@21 -- # val= 00:06:18.125 23:39:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.125 23:39:48 -- accel/accel.sh@20 -- # IFS=: 00:06:18.125 23:39:48 -- accel/accel.sh@20 -- # read -r var val 00:06:18.125 23:39:48 -- accel/accel.sh@21 -- # val= 00:06:18.125 23:39:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.125 23:39:48 -- accel/accel.sh@20 -- # IFS=: 00:06:18.125 23:39:48 -- accel/accel.sh@20 -- # read -r var val 00:06:18.125 23:39:48 -- accel/accel.sh@21 -- # val= 00:06:18.125 23:39:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.125 23:39:48 -- accel/accel.sh@20 -- # IFS=: 
00:06:18.125 23:39:48 -- accel/accel.sh@20 -- # read -r var val 00:06:18.125 23:39:48 -- accel/accel.sh@21 -- # val= 00:06:18.125 23:39:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.125 23:39:48 -- accel/accel.sh@20 -- # IFS=: 00:06:18.125 23:39:48 -- accel/accel.sh@20 -- # read -r var val 00:06:18.125 23:39:48 -- accel/accel.sh@21 -- # val= 00:06:18.125 23:39:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.125 23:39:48 -- accel/accel.sh@20 -- # IFS=: 00:06:18.125 23:39:48 -- accel/accel.sh@20 -- # read -r var val 00:06:18.125 23:39:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:18.125 23:39:48 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:18.125 23:39:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.125 00:06:18.125 real 0m3.803s 00:06:18.125 user 0m3.370s 00:06:18.125 sys 0m0.229s 00:06:18.125 23:39:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.125 23:39:48 -- common/autotest_common.sh@10 -- # set +x 00:06:18.125 ************************************ 00:06:18.125 END TEST accel_copy_crc32c 00:06:18.125 ************************************ 00:06:18.125 23:39:48 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:18.125 23:39:48 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:18.125 23:39:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.125 23:39:48 -- common/autotest_common.sh@10 -- # set +x 00:06:18.125 ************************************ 00:06:18.125 START TEST accel_copy_crc32c_C2 00:06:18.125 ************************************ 00:06:18.125 23:39:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:18.125 23:39:48 -- accel/accel.sh@16 -- # local accel_opc 00:06:18.125 23:39:48 -- accel/accel.sh@17 -- # local accel_module 00:06:18.126 23:39:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:18.126 23:39:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:18.126 23:39:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.126 23:39:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.126 23:39:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.126 23:39:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.126 23:39:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.126 23:39:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.126 23:39:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.126 23:39:48 -- accel/accel.sh@42 -- # jq -r . 00:06:18.126 [2024-12-13 23:39:48.764157] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
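The bandwidth column in these summary tables is consistent with transfers/s multiplied by the configured transfer size. Below is a quick shell sanity check against the copy_crc32c table above (309472 transfers/s at 4096 bytes); the formula itself is an inference from the reported figures, not something the log states.

  # Recompute the MiB/s column from the Transfers column (assumed: xfers/s * bytes / 2^20).
  xfers=309472   # transfers per second, from the copy_crc32c table above
  size=4096      # "Transfer size: 4096 bytes" in the same configuration dump
  echo $(( xfers * size / 1024 / 1024 ))   # prints 1208, matching the reported 1208 MiB/s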
00:06:18.126 [2024-12-13 23:39:48.764259] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58844 ] 00:06:18.384 [2024-12-13 23:39:48.912183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.384 [2024-12-13 23:39:49.087381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.316 23:39:50 -- accel/accel.sh@18 -- # out=' 00:06:20.316 SPDK Configuration: 00:06:20.316 Core mask: 0x1 00:06:20.316 00:06:20.316 Accel Perf Configuration: 00:06:20.316 Workload Type: copy_crc32c 00:06:20.316 CRC-32C seed: 0 00:06:20.316 Vector size: 4096 bytes 00:06:20.316 Transfer size: 8192 bytes 00:06:20.316 Vector count 2 00:06:20.316 Module: software 00:06:20.316 Queue depth: 32 00:06:20.316 Allocate depth: 32 00:06:20.316 # threads/core: 1 00:06:20.316 Run time: 1 seconds 00:06:20.316 Verify: Yes 00:06:20.316 00:06:20.316 Running for 1 seconds... 00:06:20.316 00:06:20.316 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:20.316 ------------------------------------------------------------------------------------ 00:06:20.316 0,0 178560/s 1395 MiB/s 0 0 00:06:20.316 ==================================================================================== 00:06:20.316 Total 178560/s 697 MiB/s 0 0' 00:06:20.316 23:39:50 -- accel/accel.sh@20 -- # IFS=: 00:06:20.316 23:39:50 -- accel/accel.sh@20 -- # read -r var val 00:06:20.316 23:39:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:20.316 23:39:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:20.316 23:39:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.316 23:39:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.316 23:39:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.316 23:39:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.316 23:39:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.316 23:39:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.316 23:39:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.316 23:39:50 -- accel/accel.sh@42 -- # jq -r . 00:06:20.316 [2024-12-13 23:39:50.799079] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
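In the copy_crc32c -C 2 table above, the per-core row and the Total row disagree (1395 MiB/s versus 697 MiB/s) even though both report the same 178560 transfers/s. Applying the same arithmetic as the previous note, the per-core figure matches the full 8192-byte transfer (two 4096-byte source vectors), while the Total figure matches a single 4096-byte vector; which of the two the tool intends is not stated in the log, so this is only an observation about the numbers.

  # Same 178560 transfers/s, two different byte counts.
  xfers=178560
  echo $(( xfers * 8192 / 1024 / 1024 ))   # 1395 MiB/s, the per-core row (full two-vector transfer)
  echo $(( xfers * 4096 / 1024 / 1024 ))   # 697 MiB/s, the Total row (one 4096-byte vector)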
00:06:20.316 [2024-12-13 23:39:50.799215] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58870 ] 00:06:20.316 [2024-12-13 23:39:50.948326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.575 [2024-12-13 23:39:51.137206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.575 23:39:51 -- accel/accel.sh@21 -- # val= 00:06:20.575 23:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # IFS=: 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # read -r var val 00:06:20.575 23:39:51 -- accel/accel.sh@21 -- # val= 00:06:20.575 23:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # IFS=: 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # read -r var val 00:06:20.575 23:39:51 -- accel/accel.sh@21 -- # val=0x1 00:06:20.575 23:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # IFS=: 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # read -r var val 00:06:20.575 23:39:51 -- accel/accel.sh@21 -- # val= 00:06:20.575 23:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # IFS=: 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # read -r var val 00:06:20.575 23:39:51 -- accel/accel.sh@21 -- # val= 00:06:20.575 23:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # IFS=: 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # read -r var val 00:06:20.575 23:39:51 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:20.575 23:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.575 23:39:51 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # IFS=: 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # read -r var val 00:06:20.575 23:39:51 -- accel/accel.sh@21 -- # val=0 00:06:20.575 23:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # IFS=: 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # read -r var val 00:06:20.575 23:39:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:20.575 23:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # IFS=: 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # read -r var val 00:06:20.575 23:39:51 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:20.575 23:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # IFS=: 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # read -r var val 00:06:20.575 23:39:51 -- accel/accel.sh@21 -- # val= 00:06:20.575 23:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # IFS=: 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # read -r var val 00:06:20.575 23:39:51 -- accel/accel.sh@21 -- # val=software 00:06:20.575 23:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.575 23:39:51 -- accel/accel.sh@23 -- # accel_module=software 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # IFS=: 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # read -r var val 00:06:20.575 23:39:51 -- accel/accel.sh@21 -- # val=32 00:06:20.575 23:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # IFS=: 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # read -r var val 00:06:20.575 23:39:51 -- accel/accel.sh@21 -- # val=32 
00:06:20.575 23:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # IFS=: 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # read -r var val 00:06:20.575 23:39:51 -- accel/accel.sh@21 -- # val=1 00:06:20.575 23:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # IFS=: 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # read -r var val 00:06:20.575 23:39:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:20.575 23:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # IFS=: 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # read -r var val 00:06:20.575 23:39:51 -- accel/accel.sh@21 -- # val=Yes 00:06:20.575 23:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # IFS=: 00:06:20.575 23:39:51 -- accel/accel.sh@20 -- # read -r var val 00:06:20.575 23:39:51 -- accel/accel.sh@21 -- # val= 00:06:20.576 23:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.576 23:39:51 -- accel/accel.sh@20 -- # IFS=: 00:06:20.576 23:39:51 -- accel/accel.sh@20 -- # read -r var val 00:06:20.576 23:39:51 -- accel/accel.sh@21 -- # val= 00:06:20.576 23:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.576 23:39:51 -- accel/accel.sh@20 -- # IFS=: 00:06:20.576 23:39:51 -- accel/accel.sh@20 -- # read -r var val 00:06:22.497 23:39:52 -- accel/accel.sh@21 -- # val= 00:06:22.497 23:39:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.497 23:39:52 -- accel/accel.sh@20 -- # IFS=: 00:06:22.497 23:39:52 -- accel/accel.sh@20 -- # read -r var val 00:06:22.497 23:39:52 -- accel/accel.sh@21 -- # val= 00:06:22.497 23:39:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.497 23:39:52 -- accel/accel.sh@20 -- # IFS=: 00:06:22.497 23:39:52 -- accel/accel.sh@20 -- # read -r var val 00:06:22.497 23:39:52 -- accel/accel.sh@21 -- # val= 00:06:22.497 23:39:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.497 23:39:52 -- accel/accel.sh@20 -- # IFS=: 00:06:22.497 23:39:52 -- accel/accel.sh@20 -- # read -r var val 00:06:22.497 23:39:52 -- accel/accel.sh@21 -- # val= 00:06:22.497 23:39:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.497 23:39:52 -- accel/accel.sh@20 -- # IFS=: 00:06:22.497 23:39:52 -- accel/accel.sh@20 -- # read -r var val 00:06:22.497 23:39:52 -- accel/accel.sh@21 -- # val= 00:06:22.497 23:39:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.497 23:39:52 -- accel/accel.sh@20 -- # IFS=: 00:06:22.497 23:39:52 -- accel/accel.sh@20 -- # read -r var val 00:06:22.497 23:39:52 -- accel/accel.sh@21 -- # val= 00:06:22.497 23:39:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.497 23:39:52 -- accel/accel.sh@20 -- # IFS=: 00:06:22.497 23:39:52 -- accel/accel.sh@20 -- # read -r var val 00:06:22.497 23:39:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:22.497 23:39:52 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:22.497 23:39:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.497 00:06:22.497 real 0m4.087s 00:06:22.497 user 0m3.639s 00:06:22.497 sys 0m0.241s 00:06:22.497 23:39:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.497 23:39:52 -- common/autotest_common.sh@10 -- # set +x 00:06:22.497 ************************************ 00:06:22.497 END TEST accel_copy_crc32c_C2 00:06:22.497 ************************************ 00:06:22.497 23:39:52 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:22.497 23:39:52 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:06:22.497 23:39:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.497 23:39:52 -- common/autotest_common.sh@10 -- # set +x 00:06:22.497 ************************************ 00:06:22.497 START TEST accel_dualcast 00:06:22.497 ************************************ 00:06:22.497 23:39:52 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:22.497 23:39:52 -- accel/accel.sh@16 -- # local accel_opc 00:06:22.497 23:39:52 -- accel/accel.sh@17 -- # local accel_module 00:06:22.497 23:39:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:22.497 23:39:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:22.497 23:39:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.497 23:39:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.497 23:39:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.497 23:39:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.497 23:39:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.497 23:39:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.497 23:39:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.497 23:39:52 -- accel/accel.sh@42 -- # jq -r . 00:06:22.497 [2024-12-13 23:39:52.885085] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:22.497 [2024-12-13 23:39:52.885168] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58919 ] 00:06:22.497 [2024-12-13 23:39:53.028553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.497 [2024-12-13 23:39:53.203019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.397 23:39:54 -- accel/accel.sh@18 -- # out=' 00:06:24.397 SPDK Configuration: 00:06:24.397 Core mask: 0x1 00:06:24.397 00:06:24.397 Accel Perf Configuration: 00:06:24.397 Workload Type: dualcast 00:06:24.397 Transfer size: 4096 bytes 00:06:24.397 Vector count 1 00:06:24.397 Module: software 00:06:24.397 Queue depth: 32 00:06:24.397 Allocate depth: 32 00:06:24.397 # threads/core: 1 00:06:24.397 Run time: 1 seconds 00:06:24.397 Verify: Yes 00:06:24.397 00:06:24.397 Running for 1 seconds... 00:06:24.397 00:06:24.397 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:24.397 ------------------------------------------------------------------------------------ 00:06:24.397 0,0 336000/s 1312 MiB/s 0 0 00:06:24.397 ==================================================================================== 00:06:24.397 Total 336000/s 1312 MiB/s 0 0' 00:06:24.397 23:39:54 -- accel/accel.sh@20 -- # IFS=: 00:06:24.397 23:39:54 -- accel/accel.sh@20 -- # read -r var val 00:06:24.397 23:39:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:24.397 23:39:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:24.397 23:39:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.397 23:39:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.397 23:39:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.397 23:39:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.397 23:39:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.397 23:39:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.397 23:39:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.397 23:39:54 -- accel/accel.sh@42 -- # jq -r . 
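Every sub-test in this block drives the same example binary, /home/vagrant/spdk_repo/spdk/build/examples/accel_perf, and the flag meanings can be read off the configuration dumps: -t maps to "Run time", -w to "Workload Type", -y to "Verify: Yes", -q to "Queue depth", -a to "Allocate depth", -f to "Fill pattern". A minimal sketch for rerunning two of the cases by hand, outside the harness, follows; the -c /dev/fd/62 argument seen in the trace only feeds the harness's JSON accel config and is left out here, and the binary path is this CI VM's, so adjust it locally.

  # Standalone reruns of the dualcast and fill cases with the flags taken from the trace above.
  # accel_perf is an SPDK/DPDK app, so hugepages (and typically root) must be set up first.
  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  "$ACCEL_PERF" -t 1 -w dualcast -y                   # 1-second dualcast run with verification
  "$ACCEL_PERF" -t 1 -w fill -f 128 -q 64 -a 64 -y    # fill: pattern 0x80, queue depth 64, allocate depth 64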
00:06:24.397 [2024-12-13 23:39:54.936945] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:24.397 [2024-12-13 23:39:54.937042] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58945 ] 00:06:24.397 [2024-12-13 23:39:55.084985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.655 [2024-12-13 23:39:55.231576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.655 23:39:55 -- accel/accel.sh@21 -- # val= 00:06:24.655 23:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # IFS=: 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # read -r var val 00:06:24.655 23:39:55 -- accel/accel.sh@21 -- # val= 00:06:24.655 23:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # IFS=: 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # read -r var val 00:06:24.655 23:39:55 -- accel/accel.sh@21 -- # val=0x1 00:06:24.655 23:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # IFS=: 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # read -r var val 00:06:24.655 23:39:55 -- accel/accel.sh@21 -- # val= 00:06:24.655 23:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # IFS=: 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # read -r var val 00:06:24.655 23:39:55 -- accel/accel.sh@21 -- # val= 00:06:24.655 23:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # IFS=: 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # read -r var val 00:06:24.655 23:39:55 -- accel/accel.sh@21 -- # val=dualcast 00:06:24.655 23:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.655 23:39:55 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # IFS=: 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # read -r var val 00:06:24.655 23:39:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:24.655 23:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # IFS=: 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # read -r var val 00:06:24.655 23:39:55 -- accel/accel.sh@21 -- # val= 00:06:24.655 23:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # IFS=: 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # read -r var val 00:06:24.655 23:39:55 -- accel/accel.sh@21 -- # val=software 00:06:24.655 23:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.655 23:39:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # IFS=: 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # read -r var val 00:06:24.655 23:39:55 -- accel/accel.sh@21 -- # val=32 00:06:24.655 23:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # IFS=: 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # read -r var val 00:06:24.655 23:39:55 -- accel/accel.sh@21 -- # val=32 00:06:24.655 23:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # IFS=: 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # read -r var val 00:06:24.655 23:39:55 -- accel/accel.sh@21 -- # val=1 00:06:24.655 23:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # IFS=: 00:06:24.655 
23:39:55 -- accel/accel.sh@20 -- # read -r var val 00:06:24.655 23:39:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:24.655 23:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # IFS=: 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # read -r var val 00:06:24.655 23:39:55 -- accel/accel.sh@21 -- # val=Yes 00:06:24.655 23:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # IFS=: 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # read -r var val 00:06:24.655 23:39:55 -- accel/accel.sh@21 -- # val= 00:06:24.655 23:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # IFS=: 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # read -r var val 00:06:24.655 23:39:55 -- accel/accel.sh@21 -- # val= 00:06:24.655 23:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # IFS=: 00:06:24.655 23:39:55 -- accel/accel.sh@20 -- # read -r var val 00:06:26.555 23:39:56 -- accel/accel.sh@21 -- # val= 00:06:26.555 23:39:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.555 23:39:56 -- accel/accel.sh@20 -- # IFS=: 00:06:26.555 23:39:56 -- accel/accel.sh@20 -- # read -r var val 00:06:26.555 23:39:56 -- accel/accel.sh@21 -- # val= 00:06:26.555 23:39:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.555 23:39:56 -- accel/accel.sh@20 -- # IFS=: 00:06:26.555 23:39:56 -- accel/accel.sh@20 -- # read -r var val 00:06:26.555 23:39:56 -- accel/accel.sh@21 -- # val= 00:06:26.555 23:39:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.555 23:39:56 -- accel/accel.sh@20 -- # IFS=: 00:06:26.555 23:39:56 -- accel/accel.sh@20 -- # read -r var val 00:06:26.555 23:39:56 -- accel/accel.sh@21 -- # val= 00:06:26.555 23:39:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.555 23:39:56 -- accel/accel.sh@20 -- # IFS=: 00:06:26.555 23:39:56 -- accel/accel.sh@20 -- # read -r var val 00:06:26.555 23:39:56 -- accel/accel.sh@21 -- # val= 00:06:26.555 23:39:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.555 23:39:56 -- accel/accel.sh@20 -- # IFS=: 00:06:26.555 23:39:56 -- accel/accel.sh@20 -- # read -r var val 00:06:26.555 23:39:56 -- accel/accel.sh@21 -- # val= 00:06:26.555 23:39:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.555 23:39:56 -- accel/accel.sh@20 -- # IFS=: 00:06:26.555 23:39:56 -- accel/accel.sh@20 -- # read -r var val 00:06:26.555 23:39:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:26.555 23:39:56 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:26.555 23:39:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.555 00:06:26.555 real 0m3.975s 00:06:26.555 user 0m3.541s 00:06:26.555 sys 0m0.232s 00:06:26.555 ************************************ 00:06:26.555 END TEST accel_dualcast 00:06:26.555 ************************************ 00:06:26.555 23:39:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.555 23:39:56 -- common/autotest_common.sh@10 -- # set +x 00:06:26.555 23:39:56 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:26.555 23:39:56 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:26.555 23:39:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.555 23:39:56 -- common/autotest_common.sh@10 -- # set +x 00:06:26.555 ************************************ 00:06:26.555 START TEST accel_compare 00:06:26.555 ************************************ 00:06:26.555 23:39:56 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:26.555 
23:39:56 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.555 23:39:56 -- accel/accel.sh@17 -- # local accel_module 00:06:26.555 23:39:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:26.555 23:39:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.555 23:39:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:26.555 23:39:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.555 23:39:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.555 23:39:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.555 23:39:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.555 23:39:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.555 23:39:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.555 23:39:56 -- accel/accel.sh@42 -- # jq -r . 00:06:26.555 [2024-12-13 23:39:56.900156] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:26.555 [2024-12-13 23:39:56.900255] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58976 ] 00:06:26.555 [2024-12-13 23:39:57.046807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.555 [2024-12-13 23:39:57.219931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.456 23:39:58 -- accel/accel.sh@18 -- # out=' 00:06:28.456 SPDK Configuration: 00:06:28.456 Core mask: 0x1 00:06:28.456 00:06:28.456 Accel Perf Configuration: 00:06:28.456 Workload Type: compare 00:06:28.456 Transfer size: 4096 bytes 00:06:28.456 Vector count 1 00:06:28.456 Module: software 00:06:28.456 Queue depth: 32 00:06:28.456 Allocate depth: 32 00:06:28.456 # threads/core: 1 00:06:28.456 Run time: 1 seconds 00:06:28.456 Verify: Yes 00:06:28.456 00:06:28.456 Running for 1 seconds... 00:06:28.456 00:06:28.456 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:28.456 ------------------------------------------------------------------------------------ 00:06:28.456 0,0 429952/s 1679 MiB/s 0 0 00:06:28.456 ==================================================================================== 00:06:28.456 Total 429952/s 1679 MiB/s 0 0' 00:06:28.456 23:39:58 -- accel/accel.sh@20 -- # IFS=: 00:06:28.457 23:39:58 -- accel/accel.sh@20 -- # read -r var val 00:06:28.457 23:39:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:28.457 23:39:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.457 23:39:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.457 23:39:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.457 23:39:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:28.457 23:39:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.457 23:39:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.457 23:39:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.457 23:39:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.457 23:39:58 -- accel/accel.sh@42 -- # jq -r . 00:06:28.457 [2024-12-13 23:39:58.976540] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
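With half a dozen workloads reported back to back, the Total rows are easier to compare when pulled out of the raw output. Below is a small, hypothetical helper run over a saved copy of this console log; the filename is an assumption, not something the harness writes.

  # List each workload's name next to its Total throughput row.
  grep -E 'Workload Type:|Total [0-9]+/s' nvme-vg-autotest-console.log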
00:06:28.457 [2024-12-13 23:39:58.976639] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59010 ] 00:06:28.457 [2024-12-13 23:39:59.125202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.715 [2024-12-13 23:39:59.270216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.715 23:39:59 -- accel/accel.sh@21 -- # val= 00:06:28.715 23:39:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.715 23:39:59 -- accel/accel.sh@20 -- # IFS=: 00:06:28.715 23:39:59 -- accel/accel.sh@20 -- # read -r var val 00:06:28.715 23:39:59 -- accel/accel.sh@21 -- # val= 00:06:28.715 23:39:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.715 23:39:59 -- accel/accel.sh@20 -- # IFS=: 00:06:28.715 23:39:59 -- accel/accel.sh@20 -- # read -r var val 00:06:28.715 23:39:59 -- accel/accel.sh@21 -- # val=0x1 00:06:28.715 23:39:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.715 23:39:59 -- accel/accel.sh@20 -- # IFS=: 00:06:28.715 23:39:59 -- accel/accel.sh@20 -- # read -r var val 00:06:28.715 23:39:59 -- accel/accel.sh@21 -- # val= 00:06:28.715 23:39:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.715 23:39:59 -- accel/accel.sh@20 -- # IFS=: 00:06:28.715 23:39:59 -- accel/accel.sh@20 -- # read -r var val 00:06:28.715 23:39:59 -- accel/accel.sh@21 -- # val= 00:06:28.715 23:39:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.715 23:39:59 -- accel/accel.sh@20 -- # IFS=: 00:06:28.715 23:39:59 -- accel/accel.sh@20 -- # read -r var val 00:06:28.715 23:39:59 -- accel/accel.sh@21 -- # val=compare 00:06:28.715 23:39:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.715 23:39:59 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:28.716 23:39:59 -- accel/accel.sh@20 -- # IFS=: 00:06:28.716 23:39:59 -- accel/accel.sh@20 -- # read -r var val 00:06:28.716 23:39:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:28.716 23:39:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.716 23:39:59 -- accel/accel.sh@20 -- # IFS=: 00:06:28.716 23:39:59 -- accel/accel.sh@20 -- # read -r var val 00:06:28.716 23:39:59 -- accel/accel.sh@21 -- # val= 00:06:28.716 23:39:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.716 23:39:59 -- accel/accel.sh@20 -- # IFS=: 00:06:28.716 23:39:59 -- accel/accel.sh@20 -- # read -r var val 00:06:28.716 23:39:59 -- accel/accel.sh@21 -- # val=software 00:06:28.716 23:39:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.716 23:39:59 -- accel/accel.sh@23 -- # accel_module=software 00:06:28.716 23:39:59 -- accel/accel.sh@20 -- # IFS=: 00:06:28.716 23:39:59 -- accel/accel.sh@20 -- # read -r var val 00:06:28.716 23:39:59 -- accel/accel.sh@21 -- # val=32 00:06:28.716 23:39:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.716 23:39:59 -- accel/accel.sh@20 -- # IFS=: 00:06:28.716 23:39:59 -- accel/accel.sh@20 -- # read -r var val 00:06:28.716 23:39:59 -- accel/accel.sh@21 -- # val=32 00:06:28.716 23:39:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.716 23:39:59 -- accel/accel.sh@20 -- # IFS=: 00:06:28.716 23:39:59 -- accel/accel.sh@20 -- # read -r var val 00:06:28.716 23:39:59 -- accel/accel.sh@21 -- # val=1 00:06:28.716 23:39:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.716 23:39:59 -- accel/accel.sh@20 -- # IFS=: 00:06:28.716 23:39:59 -- accel/accel.sh@20 -- # read -r var val 00:06:28.716 23:39:59 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:28.716 23:39:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.716 23:39:59 -- accel/accel.sh@20 -- # IFS=: 00:06:28.716 23:39:59 -- accel/accel.sh@20 -- # read -r var val 00:06:28.716 23:39:59 -- accel/accel.sh@21 -- # val=Yes 00:06:28.716 23:39:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.716 23:39:59 -- accel/accel.sh@20 -- # IFS=: 00:06:28.716 23:39:59 -- accel/accel.sh@20 -- # read -r var val 00:06:28.716 23:39:59 -- accel/accel.sh@21 -- # val= 00:06:28.716 23:39:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.716 23:39:59 -- accel/accel.sh@20 -- # IFS=: 00:06:28.716 23:39:59 -- accel/accel.sh@20 -- # read -r var val 00:06:28.716 23:39:59 -- accel/accel.sh@21 -- # val= 00:06:28.716 23:39:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.716 23:39:59 -- accel/accel.sh@20 -- # IFS=: 00:06:28.716 23:39:59 -- accel/accel.sh@20 -- # read -r var val 00:06:30.617 23:40:00 -- accel/accel.sh@21 -- # val= 00:06:30.617 23:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.617 23:40:00 -- accel/accel.sh@20 -- # IFS=: 00:06:30.617 23:40:00 -- accel/accel.sh@20 -- # read -r var val 00:06:30.617 23:40:00 -- accel/accel.sh@21 -- # val= 00:06:30.617 23:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.617 23:40:00 -- accel/accel.sh@20 -- # IFS=: 00:06:30.617 23:40:00 -- accel/accel.sh@20 -- # read -r var val 00:06:30.617 23:40:00 -- accel/accel.sh@21 -- # val= 00:06:30.617 23:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.617 23:40:00 -- accel/accel.sh@20 -- # IFS=: 00:06:30.617 23:40:00 -- accel/accel.sh@20 -- # read -r var val 00:06:30.617 23:40:00 -- accel/accel.sh@21 -- # val= 00:06:30.617 23:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.617 23:40:00 -- accel/accel.sh@20 -- # IFS=: 00:06:30.617 23:40:00 -- accel/accel.sh@20 -- # read -r var val 00:06:30.617 23:40:00 -- accel/accel.sh@21 -- # val= 00:06:30.617 23:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.617 23:40:00 -- accel/accel.sh@20 -- # IFS=: 00:06:30.617 23:40:00 -- accel/accel.sh@20 -- # read -r var val 00:06:30.617 23:40:00 -- accel/accel.sh@21 -- # val= 00:06:30.617 23:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.617 23:40:00 -- accel/accel.sh@20 -- # IFS=: 00:06:30.617 23:40:00 -- accel/accel.sh@20 -- # read -r var val 00:06:30.617 23:40:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:30.617 23:40:00 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:30.617 23:40:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.617 00:06:30.617 real 0m4.006s 00:06:30.617 user 0m3.569s 00:06:30.617 sys 0m0.232s 00:06:30.617 23:40:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.617 23:40:00 -- common/autotest_common.sh@10 -- # set +x 00:06:30.617 ************************************ 00:06:30.617 END TEST accel_compare 00:06:30.617 ************************************ 00:06:30.617 23:40:00 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:30.617 23:40:00 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:30.617 23:40:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.617 23:40:00 -- common/autotest_common.sh@10 -- # set +x 00:06:30.617 ************************************ 00:06:30.617 START TEST accel_xor 00:06:30.617 ************************************ 00:06:30.617 23:40:00 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:30.617 23:40:00 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.618 23:40:00 -- accel/accel.sh@17 -- # local accel_module 00:06:30.618 
23:40:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:30.618 23:40:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:30.618 23:40:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.618 23:40:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.618 23:40:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.618 23:40:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.618 23:40:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.618 23:40:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.618 23:40:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.618 23:40:00 -- accel/accel.sh@42 -- # jq -r . 00:06:30.618 [2024-12-13 23:40:00.945102] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:30.618 [2024-12-13 23:40:00.945183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59045 ] 00:06:30.618 [2024-12-13 23:40:01.085298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.618 [2024-12-13 23:40:01.234822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.519 23:40:02 -- accel/accel.sh@18 -- # out=' 00:06:32.519 SPDK Configuration: 00:06:32.519 Core mask: 0x1 00:06:32.519 00:06:32.519 Accel Perf Configuration: 00:06:32.519 Workload Type: xor 00:06:32.519 Source buffers: 2 00:06:32.519 Transfer size: 4096 bytes 00:06:32.519 Vector count 1 00:06:32.519 Module: software 00:06:32.519 Queue depth: 32 00:06:32.519 Allocate depth: 32 00:06:32.519 # threads/core: 1 00:06:32.519 Run time: 1 seconds 00:06:32.519 Verify: Yes 00:06:32.519 00:06:32.519 Running for 1 seconds... 00:06:32.519 00:06:32.519 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:32.519 ------------------------------------------------------------------------------------ 00:06:32.519 0,0 432800/s 1690 MiB/s 0 0 00:06:32.519 ==================================================================================== 00:06:32.519 Total 432800/s 1690 MiB/s 0 0' 00:06:32.519 23:40:02 -- accel/accel.sh@20 -- # IFS=: 00:06:32.519 23:40:02 -- accel/accel.sh@20 -- # read -r var val 00:06:32.519 23:40:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:32.519 23:40:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:32.519 23:40:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.519 23:40:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.519 23:40:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.520 23:40:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.520 23:40:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.520 23:40:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.520 23:40:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.520 23:40:02 -- accel/accel.sh@42 -- # jq -r . 00:06:32.520 [2024-12-13 23:40:02.865724] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
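The xor case above runs with its default of two source buffers ("Source buffers: 2" in the config dump). The accel_xor run that starts below passes -x 3, which by the same config-dump correspondence presumably raises the source-buffer count to three; a sketch of both invocations, with the same path caveat as the earlier note.

  # Default xor (two source buffers) versus the -x 3 variant that the next test runs.
  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  "$ACCEL_PERF" -t 1 -w xor -y          # matches the run above: Source buffers: 2
  "$ACCEL_PERF" -t 1 -w xor -y -x 3     # matches the accel_xor -x 3 test that begins below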
00:06:32.520 [2024-12-13 23:40:02.865822] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59071 ] 00:06:32.520 [2024-12-13 23:40:03.011819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.520 [2024-12-13 23:40:03.153314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.778 23:40:03 -- accel/accel.sh@21 -- # val= 00:06:32.778 23:40:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # IFS=: 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # read -r var val 00:06:32.778 23:40:03 -- accel/accel.sh@21 -- # val= 00:06:32.778 23:40:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # IFS=: 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # read -r var val 00:06:32.778 23:40:03 -- accel/accel.sh@21 -- # val=0x1 00:06:32.778 23:40:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # IFS=: 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # read -r var val 00:06:32.778 23:40:03 -- accel/accel.sh@21 -- # val= 00:06:32.778 23:40:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # IFS=: 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # read -r var val 00:06:32.778 23:40:03 -- accel/accel.sh@21 -- # val= 00:06:32.778 23:40:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # IFS=: 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # read -r var val 00:06:32.778 23:40:03 -- accel/accel.sh@21 -- # val=xor 00:06:32.778 23:40:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.778 23:40:03 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # IFS=: 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # read -r var val 00:06:32.778 23:40:03 -- accel/accel.sh@21 -- # val=2 00:06:32.778 23:40:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # IFS=: 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # read -r var val 00:06:32.778 23:40:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:32.778 23:40:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # IFS=: 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # read -r var val 00:06:32.778 23:40:03 -- accel/accel.sh@21 -- # val= 00:06:32.778 23:40:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # IFS=: 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # read -r var val 00:06:32.778 23:40:03 -- accel/accel.sh@21 -- # val=software 00:06:32.778 23:40:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.778 23:40:03 -- accel/accel.sh@23 -- # accel_module=software 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # IFS=: 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # read -r var val 00:06:32.778 23:40:03 -- accel/accel.sh@21 -- # val=32 00:06:32.778 23:40:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # IFS=: 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # read -r var val 00:06:32.778 23:40:03 -- accel/accel.sh@21 -- # val=32 00:06:32.778 23:40:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # IFS=: 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # read -r var val 00:06:32.778 23:40:03 -- accel/accel.sh@21 -- # val=1 00:06:32.778 23:40:03 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # IFS=: 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # read -r var val 00:06:32.778 23:40:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:32.778 23:40:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # IFS=: 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # read -r var val 00:06:32.778 23:40:03 -- accel/accel.sh@21 -- # val=Yes 00:06:32.778 23:40:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # IFS=: 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # read -r var val 00:06:32.778 23:40:03 -- accel/accel.sh@21 -- # val= 00:06:32.778 23:40:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # IFS=: 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # read -r var val 00:06:32.778 23:40:03 -- accel/accel.sh@21 -- # val= 00:06:32.778 23:40:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # IFS=: 00:06:32.778 23:40:03 -- accel/accel.sh@20 -- # read -r var val 00:06:34.154 23:40:04 -- accel/accel.sh@21 -- # val= 00:06:34.154 23:40:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.154 23:40:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.154 23:40:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.154 23:40:04 -- accel/accel.sh@21 -- # val= 00:06:34.154 23:40:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.154 23:40:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.154 23:40:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.154 23:40:04 -- accel/accel.sh@21 -- # val= 00:06:34.154 23:40:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.154 23:40:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.154 23:40:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.154 23:40:04 -- accel/accel.sh@21 -- # val= 00:06:34.154 23:40:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.154 23:40:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.154 23:40:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.154 23:40:04 -- accel/accel.sh@21 -- # val= 00:06:34.154 23:40:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.154 23:40:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.154 23:40:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.154 23:40:04 -- accel/accel.sh@21 -- # val= 00:06:34.154 23:40:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.154 23:40:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.154 23:40:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.154 23:40:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:34.154 23:40:04 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:34.154 23:40:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.154 00:06:34.154 real 0m3.816s 00:06:34.154 user 0m3.383s 00:06:34.154 sys 0m0.232s 00:06:34.154 23:40:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.154 23:40:04 -- common/autotest_common.sh@10 -- # set +x 00:06:34.154 ************************************ 00:06:34.154 END TEST accel_xor 00:06:34.154 ************************************ 00:06:34.154 23:40:04 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:34.154 23:40:04 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:34.154 23:40:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.154 23:40:04 -- common/autotest_common.sh@10 -- # set +x 00:06:34.154 ************************************ 00:06:34.154 START TEST accel_xor 00:06:34.154 ************************************ 00:06:34.154 
23:40:04 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:34.154 23:40:04 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.154 23:40:04 -- accel/accel.sh@17 -- # local accel_module 00:06:34.154 23:40:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:34.154 23:40:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:34.154 23:40:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.154 23:40:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.154 23:40:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.154 23:40:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.154 23:40:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.154 23:40:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.154 23:40:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.154 23:40:04 -- accel/accel.sh@42 -- # jq -r . 00:06:34.154 [2024-12-13 23:40:04.801971] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:34.154 [2024-12-13 23:40:04.802168] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59112 ] 00:06:34.412 [2024-12-13 23:40:04.949367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.412 [2024-12-13 23:40:05.099421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.313 23:40:06 -- accel/accel.sh@18 -- # out=' 00:06:36.313 SPDK Configuration: 00:06:36.313 Core mask: 0x1 00:06:36.313 00:06:36.313 Accel Perf Configuration: 00:06:36.313 Workload Type: xor 00:06:36.313 Source buffers: 3 00:06:36.313 Transfer size: 4096 bytes 00:06:36.313 Vector count 1 00:06:36.313 Module: software 00:06:36.313 Queue depth: 32 00:06:36.313 Allocate depth: 32 00:06:36.313 # threads/core: 1 00:06:36.313 Run time: 1 seconds 00:06:36.313 Verify: Yes 00:06:36.313 00:06:36.313 Running for 1 seconds... 00:06:36.313 00:06:36.313 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:36.313 ------------------------------------------------------------------------------------ 00:06:36.313 0,0 422400/s 1650 MiB/s 0 0 00:06:36.313 ==================================================================================== 00:06:36.313 Total 422400/s 1650 MiB/s 0 0' 00:06:36.313 23:40:06 -- accel/accel.sh@20 -- # IFS=: 00:06:36.313 23:40:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:36.313 23:40:06 -- accel/accel.sh@20 -- # read -r var val 00:06:36.313 23:40:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:36.313 23:40:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.313 23:40:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.313 23:40:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.313 23:40:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.313 23:40:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.313 23:40:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.313 23:40:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.313 23:40:06 -- accel/accel.sh@42 -- # jq -r . 00:06:36.313 [2024-12-13 23:40:06.715999] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
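As a sanity check on these tables, the bandwidth column is just the transfer rate times the 4096-byte transfer size: for the 3-buffer run above, 422400/s x 4096 B = 1,730,150,400 B/s, and dividing by 1,048,576 gives exactly 1650 MiB/s, matching the Total row; the 2-buffer run works out the same way (432800 x 4096 / 1048576 = 1690.6, printed truncated as 1690 MiB/s). The same arithmetic applies to every workload table in this run.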
00:06:36.313 [2024-12-13 23:40:06.716083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59138 ] 00:06:36.313 [2024-12-13 23:40:06.852185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.313 [2024-12-13 23:40:07.004313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.572 23:40:07 -- accel/accel.sh@21 -- # val= 00:06:36.572 23:40:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.572 23:40:07 -- accel/accel.sh@21 -- # val= 00:06:36.572 23:40:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.572 23:40:07 -- accel/accel.sh@21 -- # val=0x1 00:06:36.572 23:40:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.572 23:40:07 -- accel/accel.sh@21 -- # val= 00:06:36.572 23:40:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.572 23:40:07 -- accel/accel.sh@21 -- # val= 00:06:36.572 23:40:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.572 23:40:07 -- accel/accel.sh@21 -- # val=xor 00:06:36.572 23:40:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.572 23:40:07 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.572 23:40:07 -- accel/accel.sh@21 -- # val=3 00:06:36.572 23:40:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.572 23:40:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.572 23:40:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.572 23:40:07 -- accel/accel.sh@21 -- # val= 00:06:36.572 23:40:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.572 23:40:07 -- accel/accel.sh@21 -- # val=software 00:06:36.572 23:40:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.572 23:40:07 -- accel/accel.sh@23 -- # accel_module=software 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.572 23:40:07 -- accel/accel.sh@21 -- # val=32 00:06:36.572 23:40:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.572 23:40:07 -- accel/accel.sh@21 -- # val=32 00:06:36.572 23:40:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.572 23:40:07 -- accel/accel.sh@21 -- # val=1 00:06:36.572 23:40:07 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.572 23:40:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:36.572 23:40:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.572 23:40:07 -- accel/accel.sh@21 -- # val=Yes 00:06:36.572 23:40:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.572 23:40:07 -- accel/accel.sh@21 -- # val= 00:06:36.572 23:40:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.572 23:40:07 -- accel/accel.sh@21 -- # val= 00:06:36.572 23:40:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.572 23:40:07 -- accel/accel.sh@20 -- # read -r var val 00:06:37.948 23:40:08 -- accel/accel.sh@21 -- # val= 00:06:37.948 23:40:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.948 23:40:08 -- accel/accel.sh@20 -- # IFS=: 00:06:37.948 23:40:08 -- accel/accel.sh@20 -- # read -r var val 00:06:37.948 23:40:08 -- accel/accel.sh@21 -- # val= 00:06:37.948 23:40:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.948 23:40:08 -- accel/accel.sh@20 -- # IFS=: 00:06:37.948 23:40:08 -- accel/accel.sh@20 -- # read -r var val 00:06:37.948 23:40:08 -- accel/accel.sh@21 -- # val= 00:06:37.948 23:40:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.948 23:40:08 -- accel/accel.sh@20 -- # IFS=: 00:06:37.948 23:40:08 -- accel/accel.sh@20 -- # read -r var val 00:06:37.948 23:40:08 -- accel/accel.sh@21 -- # val= 00:06:37.948 23:40:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.948 23:40:08 -- accel/accel.sh@20 -- # IFS=: 00:06:37.948 23:40:08 -- accel/accel.sh@20 -- # read -r var val 00:06:37.948 23:40:08 -- accel/accel.sh@21 -- # val= 00:06:37.948 23:40:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.948 23:40:08 -- accel/accel.sh@20 -- # IFS=: 00:06:37.948 23:40:08 -- accel/accel.sh@20 -- # read -r var val 00:06:37.948 23:40:08 -- accel/accel.sh@21 -- # val= 00:06:37.948 23:40:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.948 23:40:08 -- accel/accel.sh@20 -- # IFS=: 00:06:37.948 23:40:08 -- accel/accel.sh@20 -- # read -r var val 00:06:37.948 23:40:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:37.948 23:40:08 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:37.948 23:40:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.948 00:06:37.948 real 0m3.825s 00:06:37.948 user 0m3.400s 00:06:37.948 sys 0m0.221s 00:06:37.948 23:40:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:37.948 ************************************ 00:06:37.948 END TEST accel_xor 00:06:37.948 23:40:08 -- common/autotest_common.sh@10 -- # set +x 00:06:37.948 ************************************ 00:06:37.948 23:40:08 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:37.948 23:40:08 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:37.948 23:40:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.948 23:40:08 -- common/autotest_common.sh@10 -- # set +x 00:06:37.948 ************************************ 00:06:37.948 START TEST accel_dif_verify 00:06:37.948 ************************************ 
00:06:37.948 23:40:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:37.948 23:40:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.948 23:40:08 -- accel/accel.sh@17 -- # local accel_module 00:06:37.948 23:40:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:37.948 23:40:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:37.948 23:40:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.948 23:40:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.948 23:40:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.948 23:40:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.948 23:40:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.948 23:40:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.948 23:40:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.948 23:40:08 -- accel/accel.sh@42 -- # jq -r . 00:06:37.948 [2024-12-13 23:40:08.669243] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:37.948 [2024-12-13 23:40:08.669425] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59179 ] 00:06:38.206 [2024-12-13 23:40:08.814657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.464 [2024-12-13 23:40:09.004681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.364 23:40:10 -- accel/accel.sh@18 -- # out=' 00:06:40.364 SPDK Configuration: 00:06:40.364 Core mask: 0x1 00:06:40.364 00:06:40.364 Accel Perf Configuration: 00:06:40.364 Workload Type: dif_verify 00:06:40.364 Vector size: 4096 bytes 00:06:40.364 Transfer size: 4096 bytes 00:06:40.364 Block size: 512 bytes 00:06:40.364 Metadata size: 8 bytes 00:06:40.364 Vector count 1 00:06:40.364 Module: software 00:06:40.364 Queue depth: 32 00:06:40.364 Allocate depth: 32 00:06:40.364 # threads/core: 1 00:06:40.364 Run time: 1 seconds 00:06:40.364 Verify: No 00:06:40.364 00:06:40.364 Running for 1 seconds... 00:06:40.364 00:06:40.364 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:40.364 ------------------------------------------------------------------------------------ 00:06:40.364 0,0 97216/s 379 MiB/s 0 0 00:06:40.364 ==================================================================================== 00:06:40.364 Total 97216/s 379 MiB/s 0 0' 00:06:40.364 23:40:10 -- accel/accel.sh@20 -- # IFS=: 00:06:40.364 23:40:10 -- accel/accel.sh@20 -- # read -r var val 00:06:40.364 23:40:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:40.364 23:40:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:40.364 23:40:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.364 23:40:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.364 23:40:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.364 23:40:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.364 23:40:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.364 23:40:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.364 23:40:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.364 23:40:10 -- accel/accel.sh@42 -- # jq -r . 00:06:40.364 [2024-12-13 23:40:10.774934] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
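dif_verify above checks an 8-byte Data Integrity Field for each 512-byte interval of the 4096-byte transfer ("Block size: 512 bytes", "Metadata size: 8 bytes" in the table). A minimal C sketch of that check, assuming the standard T10 protection-information tuple layout (2-byte CRC16 guard, 2-byte application tag, 4-byte reference tag) and a plain bit-at-a-time CRC over the 0x8BB7 polynomial; this illustrates the operation, it is not SPDK's implementation:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Standard T10 DIF tuple: the 8 bytes of metadata per protected block. */
    struct t10_dif_tuple {
        uint16_t guard;   /* CRC16 (polynomial 0x8BB7) over the block data */
        uint16_t app_tag; /* application-defined tag                       */
        uint32_t ref_tag; /* typically the (truncated) logical block index */
    };

    /* Bit-at-a-time CRC16 T10-DIF: init 0, poly 0x8BB7, no reflection. */
    uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
    {
        uint16_t crc = 0;
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)buf[i] << 8;
            for (int b = 0; b < 8; b++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    /* dif_verify, conceptually: recompute the guard for one 512-byte
     * interval and compare it (and the reference tag) to the stored tuple. */
    static bool dif_verify_block(const uint8_t *data, size_t block_size,
                                 const struct t10_dif_tuple *dif, uint32_t lba)
    {
        return crc16_t10dif(data, block_size) == dif->guard &&
               dif->ref_tag == lba;
    }

    int main(void)
    {
        uint8_t block[512];
        memset(block, 0x5A, sizeof(block));

        struct t10_dif_tuple dif = {
            .guard = crc16_t10dif(block, sizeof(block)),
            .app_tag = 0,
            .ref_tag = 7, /* pretend this is logical block 7 */
        };

        printf("dif_verify: %s\n",
               dif_verify_block(block, sizeof(block), &dif, 7) ? "ok" : "mismatch");
        return 0;
    }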
00:06:40.364 [2024-12-13 23:40:10.775063] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59205 ] 00:06:40.364 [2024-12-13 23:40:10.933451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.364 [2024-12-13 23:40:11.091338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.622 23:40:11 -- accel/accel.sh@21 -- # val= 00:06:40.622 23:40:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.622 23:40:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.622 23:40:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.622 23:40:11 -- accel/accel.sh@21 -- # val= 00:06:40.623 23:40:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.623 23:40:11 -- accel/accel.sh@21 -- # val=0x1 00:06:40.623 23:40:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.623 23:40:11 -- accel/accel.sh@21 -- # val= 00:06:40.623 23:40:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.623 23:40:11 -- accel/accel.sh@21 -- # val= 00:06:40.623 23:40:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.623 23:40:11 -- accel/accel.sh@21 -- # val=dif_verify 00:06:40.623 23:40:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.623 23:40:11 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.623 23:40:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.623 23:40:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.623 23:40:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.623 23:40:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.623 23:40:11 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:40.623 23:40:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.623 23:40:11 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:40.623 23:40:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.623 23:40:11 -- accel/accel.sh@21 -- # val= 00:06:40.623 23:40:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.623 23:40:11 -- accel/accel.sh@21 -- # val=software 00:06:40.623 23:40:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.623 23:40:11 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.623 23:40:11 -- accel/accel.sh@21 
-- # val=32 00:06:40.623 23:40:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.623 23:40:11 -- accel/accel.sh@21 -- # val=32 00:06:40.623 23:40:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.623 23:40:11 -- accel/accel.sh@21 -- # val=1 00:06:40.623 23:40:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.623 23:40:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.623 23:40:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.623 23:40:11 -- accel/accel.sh@21 -- # val=No 00:06:40.623 23:40:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.623 23:40:11 -- accel/accel.sh@21 -- # val= 00:06:40.623 23:40:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.623 23:40:11 -- accel/accel.sh@21 -- # val= 00:06:40.623 23:40:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.623 23:40:11 -- accel/accel.sh@20 -- # read -r var val 00:06:41.997 23:40:12 -- accel/accel.sh@21 -- # val= 00:06:41.997 23:40:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.997 23:40:12 -- accel/accel.sh@20 -- # IFS=: 00:06:41.997 23:40:12 -- accel/accel.sh@20 -- # read -r var val 00:06:41.997 23:40:12 -- accel/accel.sh@21 -- # val= 00:06:41.997 23:40:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.997 23:40:12 -- accel/accel.sh@20 -- # IFS=: 00:06:41.997 23:40:12 -- accel/accel.sh@20 -- # read -r var val 00:06:41.997 23:40:12 -- accel/accel.sh@21 -- # val= 00:06:41.997 23:40:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.997 23:40:12 -- accel/accel.sh@20 -- # IFS=: 00:06:41.997 23:40:12 -- accel/accel.sh@20 -- # read -r var val 00:06:41.997 23:40:12 -- accel/accel.sh@21 -- # val= 00:06:41.997 23:40:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.997 23:40:12 -- accel/accel.sh@20 -- # IFS=: 00:06:41.997 23:40:12 -- accel/accel.sh@20 -- # read -r var val 00:06:41.997 23:40:12 -- accel/accel.sh@21 -- # val= 00:06:41.997 23:40:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.997 23:40:12 -- accel/accel.sh@20 -- # IFS=: 00:06:41.997 23:40:12 -- accel/accel.sh@20 -- # read -r var val 00:06:41.997 23:40:12 -- accel/accel.sh@21 -- # val= 00:06:41.997 23:40:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.997 23:40:12 -- accel/accel.sh@20 -- # IFS=: 00:06:41.997 23:40:12 -- accel/accel.sh@20 -- # read -r var val 00:06:41.997 23:40:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:41.997 23:40:12 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:41.997 23:40:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.997 00:06:41.997 real 0m4.052s 00:06:41.997 user 0m3.596s 00:06:41.997 sys 0m0.250s 00:06:41.997 23:40:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.997 23:40:12 -- common/autotest_common.sh@10 -- # set +x 00:06:41.997 ************************************ 00:06:41.997 END TEST 
accel_dif_verify 00:06:41.997 ************************************ 00:06:41.997 23:40:12 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:41.997 23:40:12 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:41.997 23:40:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.997 23:40:12 -- common/autotest_common.sh@10 -- # set +x 00:06:41.997 ************************************ 00:06:41.997 START TEST accel_dif_generate 00:06:41.997 ************************************ 00:06:41.997 23:40:12 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:41.997 23:40:12 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.997 23:40:12 -- accel/accel.sh@17 -- # local accel_module 00:06:41.997 23:40:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:41.997 23:40:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:41.997 23:40:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.997 23:40:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.997 23:40:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.254 23:40:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.254 23:40:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.254 23:40:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.254 23:40:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.254 23:40:12 -- accel/accel.sh@42 -- # jq -r . 00:06:42.254 [2024-12-13 23:40:12.753577] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:42.254 [2024-12-13 23:40:12.753680] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59246 ] 00:06:42.255 [2024-12-13 23:40:12.899084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.513 [2024-12-13 23:40:13.047877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.413 23:40:14 -- accel/accel.sh@18 -- # out=' 00:06:44.413 SPDK Configuration: 00:06:44.413 Core mask: 0x1 00:06:44.413 00:06:44.413 Accel Perf Configuration: 00:06:44.413 Workload Type: dif_generate 00:06:44.413 Vector size: 4096 bytes 00:06:44.413 Transfer size: 4096 bytes 00:06:44.413 Block size: 512 bytes 00:06:44.413 Metadata size: 8 bytes 00:06:44.413 Vector count 1 00:06:44.413 Module: software 00:06:44.413 Queue depth: 32 00:06:44.413 Allocate depth: 32 00:06:44.413 # threads/core: 1 00:06:44.413 Run time: 1 seconds 00:06:44.413 Verify: No 00:06:44.413 00:06:44.413 Running for 1 seconds... 
00:06:44.413 00:06:44.413 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:44.413 ------------------------------------------------------------------------------------ 00:06:44.413 0,0 154592/s 603 MiB/s 0 0 00:06:44.413 ==================================================================================== 00:06:44.413 Total 154592/s 603 MiB/s 0 0' 00:06:44.413 23:40:14 -- accel/accel.sh@20 -- # IFS=: 00:06:44.413 23:40:14 -- accel/accel.sh@20 -- # read -r var val 00:06:44.413 23:40:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:44.413 23:40:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.413 23:40:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:44.413 23:40:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.413 23:40:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.413 23:40:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.413 23:40:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.413 23:40:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.413 23:40:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.413 23:40:14 -- accel/accel.sh@42 -- # jq -r . 00:06:44.413 [2024-12-13 23:40:14.674457] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:44.413 [2024-12-13 23:40:14.674581] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59271 ] 00:06:44.413 [2024-12-13 23:40:14.822287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.413 [2024-12-13 23:40:14.971543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.413 23:40:15 -- accel/accel.sh@21 -- # val= 00:06:44.413 23:40:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.413 23:40:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.413 23:40:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.413 23:40:15 -- accel/accel.sh@21 -- # val= 00:06:44.413 23:40:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.413 23:40:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.414 23:40:15 -- accel/accel.sh@21 -- # val=0x1 00:06:44.414 23:40:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.414 23:40:15 -- accel/accel.sh@21 -- # val= 00:06:44.414 23:40:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.414 23:40:15 -- accel/accel.sh@21 -- # val= 00:06:44.414 23:40:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.414 23:40:15 -- accel/accel.sh@21 -- # val=dif_generate 00:06:44.414 23:40:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.414 23:40:15 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.414 23:40:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:44.414 23:40:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # read -r var val 
00:06:44.414 23:40:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:44.414 23:40:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.414 23:40:15 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:44.414 23:40:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.414 23:40:15 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:44.414 23:40:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.414 23:40:15 -- accel/accel.sh@21 -- # val= 00:06:44.414 23:40:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.414 23:40:15 -- accel/accel.sh@21 -- # val=software 00:06:44.414 23:40:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.414 23:40:15 -- accel/accel.sh@23 -- # accel_module=software 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.414 23:40:15 -- accel/accel.sh@21 -- # val=32 00:06:44.414 23:40:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.414 23:40:15 -- accel/accel.sh@21 -- # val=32 00:06:44.414 23:40:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.414 23:40:15 -- accel/accel.sh@21 -- # val=1 00:06:44.414 23:40:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.414 23:40:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:44.414 23:40:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.414 23:40:15 -- accel/accel.sh@21 -- # val=No 00:06:44.414 23:40:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.414 23:40:15 -- accel/accel.sh@21 -- # val= 00:06:44.414 23:40:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.414 23:40:15 -- accel/accel.sh@21 -- # val= 00:06:44.414 23:40:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.414 23:40:15 -- accel/accel.sh@20 -- # read -r var val 00:06:46.317 23:40:16 -- accel/accel.sh@21 -- # val= 00:06:46.317 23:40:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.317 23:40:16 -- accel/accel.sh@20 -- # IFS=: 00:06:46.317 23:40:16 -- accel/accel.sh@20 -- # read -r var val 00:06:46.317 23:40:16 -- accel/accel.sh@21 -- # val= 00:06:46.317 23:40:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.317 23:40:16 -- accel/accel.sh@20 -- # IFS=: 00:06:46.317 23:40:16 -- accel/accel.sh@20 -- # read -r var val 00:06:46.317 23:40:16 -- accel/accel.sh@21 -- # val= 00:06:46.317 23:40:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.317 23:40:16 -- 
accel/accel.sh@20 -- # IFS=: 00:06:46.317 23:40:16 -- accel/accel.sh@20 -- # read -r var val 00:06:46.317 23:40:16 -- accel/accel.sh@21 -- # val= 00:06:46.317 23:40:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.317 23:40:16 -- accel/accel.sh@20 -- # IFS=: 00:06:46.317 23:40:16 -- accel/accel.sh@20 -- # read -r var val 00:06:46.317 23:40:16 -- accel/accel.sh@21 -- # val= 00:06:46.317 23:40:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.317 23:40:16 -- accel/accel.sh@20 -- # IFS=: 00:06:46.317 23:40:16 -- accel/accel.sh@20 -- # read -r var val 00:06:46.317 23:40:16 -- accel/accel.sh@21 -- # val= 00:06:46.317 23:40:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.317 23:40:16 -- accel/accel.sh@20 -- # IFS=: 00:06:46.317 23:40:16 -- accel/accel.sh@20 -- # read -r var val 00:06:46.317 23:40:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.317 23:40:16 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:46.317 23:40:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.317 00:06:46.317 real 0m3.844s 00:06:46.317 user 0m3.406s 00:06:46.317 sys 0m0.236s 00:06:46.317 23:40:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:46.317 23:40:16 -- common/autotest_common.sh@10 -- # set +x 00:06:46.317 ************************************ 00:06:46.317 END TEST accel_dif_generate 00:06:46.317 ************************************ 00:06:46.317 23:40:16 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:46.317 23:40:16 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:46.317 23:40:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.317 23:40:16 -- common/autotest_common.sh@10 -- # set +x 00:06:46.317 ************************************ 00:06:46.317 START TEST accel_dif_generate_copy 00:06:46.317 ************************************ 00:06:46.317 23:40:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:46.317 23:40:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.317 23:40:16 -- accel/accel.sh@17 -- # local accel_module 00:06:46.317 23:40:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:46.317 23:40:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:46.317 23:40:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.317 23:40:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.317 23:40:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.317 23:40:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.317 23:40:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.317 23:40:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.317 23:40:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.317 23:40:16 -- accel/accel.sh@42 -- # jq -r . 00:06:46.317 [2024-12-13 23:40:16.641944] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:46.317 [2024-12-13 23:40:16.642058] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59308 ] 00:06:46.317 [2024-12-13 23:40:16.789537] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.317 [2024-12-13 23:40:16.966395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.240 23:40:18 -- accel/accel.sh@18 -- # out=' 00:06:48.240 SPDK Configuration: 00:06:48.240 Core mask: 0x1 00:06:48.240 00:06:48.240 Accel Perf Configuration: 00:06:48.240 Workload Type: dif_generate_copy 00:06:48.240 Vector size: 4096 bytes 00:06:48.240 Transfer size: 4096 bytes 00:06:48.240 Vector count 1 00:06:48.240 Module: software 00:06:48.240 Queue depth: 32 00:06:48.240 Allocate depth: 32 00:06:48.240 # threads/core: 1 00:06:48.240 Run time: 1 seconds 00:06:48.240 Verify: No 00:06:48.240 00:06:48.240 Running for 1 seconds... 00:06:48.240 00:06:48.240 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.240 ------------------------------------------------------------------------------------ 00:06:48.240 0,0 90592/s 353 MiB/s 0 0 00:06:48.240 ==================================================================================== 00:06:48.240 Total 90592/s 353 MiB/s 0 0' 00:06:48.240 23:40:18 -- accel/accel.sh@20 -- # IFS=: 00:06:48.240 23:40:18 -- accel/accel.sh@20 -- # read -r var val 00:06:48.240 23:40:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:48.240 23:40:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.240 23:40:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.240 23:40:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:48.240 23:40:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.240 23:40:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.240 23:40:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.241 23:40:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.241 23:40:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.241 23:40:18 -- accel/accel.sh@42 -- # jq -r . 00:06:48.241 [2024-12-13 23:40:18.741597] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
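dif_generate (the previous test) computes fresh 8-byte tuples for a source buffer, while dif_generate_copy above also copies the data to a destination as it generates them, which is why it moves fewer transfers per second than plain generation. A sketch of the copy-and-generate loop, reusing crc16_t10dif from the dif_verify sketch earlier (link the two files together to run it); the interleaved 512+8 destination layout and the host-endian tuple store are illustration-only assumptions (on-the-wire DIF fields are big-endian), not SPDK's buffer format:

    #include <stdint.h>
    #include <string.h>

    #define BLK 512 /* protected data interval, per the dif tables above */
    #define MD  8   /* metadata (DIF tuple) size                         */

    struct t10_dif_tuple { uint16_t guard, app_tag; uint32_t ref_tag; };
    uint16_t crc16_t10dif(const uint8_t *buf, size_t len); /* see sketch above */

    /* dif_generate_copy, conceptually: for each 512-byte interval, copy the
     * data into an interleaved 512+8 destination and append a fresh tuple;
     * dif_generate is the same loop minus the data copy. */
    void dif_generate_copy(uint8_t *dst, const uint8_t *src,
                           size_t nblocks, uint32_t start_lba)
    {
        for (size_t i = 0; i < nblocks; i++) {
            uint8_t *out = dst + i * (BLK + MD);
            memcpy(out, src + i * BLK, BLK);        /* the "copy" part     */
            struct t10_dif_tuple t = {
                .guard = crc16_t10dif(out, BLK),    /* the "generate" part */
                .app_tag = 0,
                .ref_tag = (uint32_t)(start_lba + i),
            };
            memcpy(out + BLK, &t, sizeof t);        /* sizeof == MD here   */
        }
    }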
00:06:48.241 [2024-12-13 23:40:18.741694] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59334 ] 00:06:48.241 [2024-12-13 23:40:18.894049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.499 [2024-12-13 23:40:19.071585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.499 23:40:19 -- accel/accel.sh@21 -- # val= 00:06:48.499 23:40:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # IFS=: 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # read -r var val 00:06:48.499 23:40:19 -- accel/accel.sh@21 -- # val= 00:06:48.499 23:40:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # IFS=: 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # read -r var val 00:06:48.499 23:40:19 -- accel/accel.sh@21 -- # val=0x1 00:06:48.499 23:40:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # IFS=: 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # read -r var val 00:06:48.499 23:40:19 -- accel/accel.sh@21 -- # val= 00:06:48.499 23:40:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # IFS=: 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # read -r var val 00:06:48.499 23:40:19 -- accel/accel.sh@21 -- # val= 00:06:48.499 23:40:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # IFS=: 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # read -r var val 00:06:48.499 23:40:19 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:48.499 23:40:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.499 23:40:19 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # IFS=: 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # read -r var val 00:06:48.499 23:40:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.499 23:40:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # IFS=: 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # read -r var val 00:06:48.499 23:40:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.499 23:40:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # IFS=: 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # read -r var val 00:06:48.499 23:40:19 -- accel/accel.sh@21 -- # val= 00:06:48.499 23:40:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # IFS=: 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # read -r var val 00:06:48.499 23:40:19 -- accel/accel.sh@21 -- # val=software 00:06:48.499 23:40:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.499 23:40:19 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # IFS=: 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # read -r var val 00:06:48.499 23:40:19 -- accel/accel.sh@21 -- # val=32 00:06:48.499 23:40:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # IFS=: 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # read -r var val 00:06:48.499 23:40:19 -- accel/accel.sh@21 -- # val=32 00:06:48.499 23:40:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # IFS=: 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # read -r var val 00:06:48.499 23:40:19 -- accel/accel.sh@21 
-- # val=1 00:06:48.499 23:40:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # IFS=: 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # read -r var val 00:06:48.499 23:40:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:48.499 23:40:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # IFS=: 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # read -r var val 00:06:48.499 23:40:19 -- accel/accel.sh@21 -- # val=No 00:06:48.499 23:40:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # IFS=: 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # read -r var val 00:06:48.499 23:40:19 -- accel/accel.sh@21 -- # val= 00:06:48.499 23:40:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # IFS=: 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # read -r var val 00:06:48.499 23:40:19 -- accel/accel.sh@21 -- # val= 00:06:48.499 23:40:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # IFS=: 00:06:48.499 23:40:19 -- accel/accel.sh@20 -- # read -r var val 00:06:50.400 23:40:20 -- accel/accel.sh@21 -- # val= 00:06:50.400 23:40:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.400 23:40:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.400 23:40:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.400 23:40:20 -- accel/accel.sh@21 -- # val= 00:06:50.400 23:40:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.400 23:40:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.400 23:40:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.400 23:40:20 -- accel/accel.sh@21 -- # val= 00:06:50.400 23:40:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.400 23:40:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.400 23:40:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.400 23:40:20 -- accel/accel.sh@21 -- # val= 00:06:50.400 23:40:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.400 23:40:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.400 23:40:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.400 23:40:20 -- accel/accel.sh@21 -- # val= 00:06:50.400 23:40:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.400 23:40:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.400 23:40:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.400 23:40:20 -- accel/accel.sh@21 -- # val= 00:06:50.400 23:40:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.400 23:40:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.400 23:40:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.400 ************************************ 00:06:50.400 END TEST accel_dif_generate_copy 00:06:50.400 ************************************ 00:06:50.400 23:40:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:50.400 23:40:20 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:50.400 23:40:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.400 00:06:50.400 real 0m4.211s 00:06:50.400 user 0m3.751s 00:06:50.400 sys 0m0.254s 00:06:50.400 23:40:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.400 23:40:20 -- common/autotest_common.sh@10 -- # set +x 00:06:50.400 23:40:20 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:50.400 23:40:20 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:50.400 23:40:20 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:50.400 23:40:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.400 23:40:20 -- 
common/autotest_common.sh@10 -- # set +x 00:06:50.400 ************************************ 00:06:50.400 START TEST accel_comp 00:06:50.400 ************************************ 00:06:50.400 23:40:20 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:50.400 23:40:20 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.400 23:40:20 -- accel/accel.sh@17 -- # local accel_module 00:06:50.400 23:40:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:50.400 23:40:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:50.400 23:40:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.400 23:40:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.400 23:40:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.400 23:40:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.400 23:40:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.400 23:40:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.400 23:40:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.400 23:40:20 -- accel/accel.sh@42 -- # jq -r . 00:06:50.400 [2024-12-13 23:40:20.898973] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:50.400 [2024-12-13 23:40:20.899100] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59375 ] 00:06:50.400 [2024-12-13 23:40:21.046687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.658 [2024-12-13 23:40:21.193470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.557 23:40:22 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:52.557 00:06:52.557 SPDK Configuration: 00:06:52.557 Core mask: 0x1 00:06:52.557 00:06:52.557 Accel Perf Configuration: 00:06:52.557 Workload Type: compress 00:06:52.557 Transfer size: 4096 bytes 00:06:52.557 Vector count 1 00:06:52.557 Module: software 00:06:52.557 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:52.557 Queue depth: 32 00:06:52.557 Allocate depth: 32 00:06:52.557 # threads/core: 1 00:06:52.557 Run time: 1 seconds 00:06:52.557 Verify: No 00:06:52.557 00:06:52.557 Running for 1 seconds... 
00:06:52.557 00:06:52.557 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:52.557 ------------------------------------------------------------------------------------ 00:06:52.557 0,0 64384/s 251 MiB/s 0 0 00:06:52.557 ==================================================================================== 00:06:52.557 Total 64384/s 251 MiB/s 0 0' 00:06:52.557 23:40:22 -- accel/accel.sh@20 -- # IFS=: 00:06:52.557 23:40:22 -- accel/accel.sh@20 -- # read -r var val 00:06:52.557 23:40:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:52.557 23:40:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.557 23:40:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.557 23:40:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.557 23:40:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.557 23:40:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.557 23:40:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.557 23:40:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.557 23:40:22 -- accel/accel.sh@42 -- # jq -r . 00:06:52.557 23:40:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:52.557 [2024-12-13 23:40:22.832018] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:52.557 [2024-12-13 23:40:22.833044] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59401 ] 00:06:52.557 [2024-12-13 23:40:22.987776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.557 [2024-12-13 23:40:23.166718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.814 23:40:23 -- accel/accel.sh@21 -- # val= 00:06:52.815 23:40:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 23:40:23 -- accel/accel.sh@21 -- # val= 00:06:52.815 23:40:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 23:40:23 -- accel/accel.sh@21 -- # val= 00:06:52.815 23:40:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 23:40:23 -- accel/accel.sh@21 -- # val=0x1 00:06:52.815 23:40:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 23:40:23 -- accel/accel.sh@21 -- # val= 00:06:52.815 23:40:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 23:40:23 -- accel/accel.sh@21 -- # val= 00:06:52.815 23:40:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 23:40:23 -- accel/accel.sh@21 -- # val=compress 00:06:52.815 23:40:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 23:40:23 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # IFS=:
00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 23:40:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:52.815 23:40:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 23:40:23 -- accel/accel.sh@21 -- # val= 00:06:52.815 23:40:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 23:40:23 -- accel/accel.sh@21 -- # val=software 00:06:52.815 23:40:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 23:40:23 -- accel/accel.sh@23 -- # accel_module=software 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 23:40:23 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:52.815 23:40:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 23:40:23 -- accel/accel.sh@21 -- # val=32 00:06:52.815 23:40:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 23:40:23 -- accel/accel.sh@21 -- # val=32 00:06:52.815 23:40:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 23:40:23 -- accel/accel.sh@21 -- # val=1 00:06:52.815 23:40:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 23:40:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:52.815 23:40:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 23:40:23 -- accel/accel.sh@21 -- # val=No 00:06:52.815 23:40:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 23:40:23 -- accel/accel.sh@21 -- # val= 00:06:52.815 23:40:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 23:40:23 -- accel/accel.sh@21 -- # val= 00:06:52.815 23:40:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.815 23:40:23 -- accel/accel.sh@20 -- # read -r var val 00:06:54.188 23:40:24 -- accel/accel.sh@21 -- # val= 00:06:54.188 23:40:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.188 23:40:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.188 23:40:24 -- accel/accel.sh@20 -- # read -r var val 00:06:54.188 23:40:24 -- accel/accel.sh@21 -- # val= 00:06:54.188 23:40:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.188 23:40:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.188 23:40:24 -- accel/accel.sh@20 -- # read -r var val 00:06:54.188 23:40:24 -- accel/accel.sh@21 -- # val= 00:06:54.188 23:40:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.188 23:40:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.188 23:40:24 -- accel/accel.sh@20 -- # read -r var val 00:06:54.188 23:40:24 -- accel/accel.sh@21 -- # val= 
00:06:54.188 23:40:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.188 23:40:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.188 23:40:24 -- accel/accel.sh@20 -- # read -r var val 00:06:54.188 23:40:24 -- accel/accel.sh@21 -- # val= 00:06:54.188 23:40:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.188 23:40:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.188 23:40:24 -- accel/accel.sh@20 -- # read -r var val 00:06:54.188 23:40:24 -- accel/accel.sh@21 -- # val= 00:06:54.188 23:40:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.188 23:40:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.188 23:40:24 -- accel/accel.sh@20 -- # read -r var val 00:06:54.188 23:40:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:54.188 ************************************ 00:06:54.188 END TEST accel_comp 00:06:54.188 ************************************ 00:06:54.188 23:40:24 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:54.188 23:40:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.188 00:06:54.188 real 0m3.951s 00:06:54.188 user 0m3.485s 00:06:54.188 sys 0m0.261s 00:06:54.188 23:40:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:54.188 23:40:24 -- common/autotest_common.sh@10 -- # set +x 00:06:54.188 23:40:24 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:54.188 23:40:24 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:54.188 23:40:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.188 23:40:24 -- common/autotest_common.sh@10 -- # set +x 00:06:54.188 ************************************ 00:06:54.188 START TEST accel_decomp 00:06:54.188 ************************************ 00:06:54.188 23:40:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:54.188 23:40:24 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.188 23:40:24 -- accel/accel.sh@17 -- # local accel_module 00:06:54.188 23:40:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:54.188 23:40:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:54.188 23:40:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.188 23:40:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.188 23:40:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.188 23:40:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.188 23:40:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.188 23:40:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.188 23:40:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.188 23:40:24 -- accel/accel.sh@42 -- # jq -r . 00:06:54.188 [2024-12-13 23:40:24.897285] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:54.188 [2024-12-13 23:40:24.897390] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59442 ] 00:06:54.446 [2024-12-13 23:40:25.044972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.706 [2024-12-13 23:40:25.226167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.611 23:40:26 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:56.611 00:06:56.611 SPDK Configuration: 00:06:56.611 Core mask: 0x1 00:06:56.611 00:06:56.611 Accel Perf Configuration: 00:06:56.611 Workload Type: decompress 00:06:56.611 Transfer size: 4096 bytes 00:06:56.611 Vector count 1 00:06:56.611 Module: software 00:06:56.611 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:56.611 Queue depth: 32 00:06:56.611 Allocate depth: 32 00:06:56.611 # threads/core: 1 00:06:56.611 Run time: 1 seconds 00:06:56.611 Verify: Yes 00:06:56.611 00:06:56.611 Running for 1 seconds... 00:06:56.611 00:06:56.611 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:56.611 ------------------------------------------------------------------------------------ 00:06:56.611 0,0 66816/s 123 MiB/s 0 0 00:06:56.611 ==================================================================================== 00:06:56.611 Total 66816/s 261 MiB/s 0 0' 00:06:56.611 23:40:26 -- accel/accel.sh@20 -- # IFS=: 00:06:56.611 23:40:26 -- accel/accel.sh@20 -- # read -r var val 00:06:56.611 23:40:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:56.611 23:40:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:56.611 23:40:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.611 23:40:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.611 23:40:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.611 23:40:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.611 23:40:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.612 23:40:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.612 23:40:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.612 23:40:26 -- accel/accel.sh@42 -- # jq -r . 00:06:56.612 [2024-12-13 23:40:26.882490] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
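Note on the trace pattern above: each accel case runs accel_perf twice. The first pass captures stdout into out= and prints the configuration block and results table; the second pass re-runs the same command under xtrace while accel.sh walks that output with the IFS=:, read -r var val, case "$var" machinery visible at accel.sh@20-24, picking out the module and the opcode. A minimal sketch of that loop, paraphrased from the trace rather than copied from accel.sh, with the matched key names assumed from the configuration block format ("Module: software", "Workload Type: decompress"):

    # sketch of the parser seen in the xtrace above; $out is the captured accel_perf output
    while IFS=: read -r var val; do
        val=${val# }                                # trim the space after the colon
        case "$var" in
            *Module*)          accel_module=$val ;; # -> software
            *'Workload Type'*) accel_opc=$val ;;    # -> decompress
        esac
    done <<< "$out"

The [[ -n software ]] and [[ -n decompress ]] checks at accel.sh@28 then assert that both values were actually found.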
00:06:56.612 [2024-12-13 23:40:26.882574] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59468 ] 00:06:56.612 [2024-12-13 23:40:27.016559] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.612 [2024-12-13 23:40:27.162552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.612 23:40:27 -- accel/accel.sh@21 -- # val= 00:06:56.612 23:40:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # IFS=: 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # read -r var val 00:06:56.612 23:40:27 -- accel/accel.sh@21 -- # val= 00:06:56.612 23:40:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # IFS=: 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # read -r var val 00:06:56.612 23:40:27 -- accel/accel.sh@21 -- # val= 00:06:56.612 23:40:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # IFS=: 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # read -r var val 00:06:56.612 23:40:27 -- accel/accel.sh@21 -- # val=0x1 00:06:56.612 23:40:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # IFS=: 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # read -r var val 00:06:56.612 23:40:27 -- accel/accel.sh@21 -- # val= 00:06:56.612 23:40:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # IFS=: 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # read -r var val 00:06:56.612 23:40:27 -- accel/accel.sh@21 -- # val= 00:06:56.612 23:40:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # IFS=: 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # read -r var val 00:06:56.612 23:40:27 -- accel/accel.sh@21 -- # val=decompress 00:06:56.612 23:40:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.612 23:40:27 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # IFS=: 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # read -r var val 00:06:56.612 23:40:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:56.612 23:40:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # IFS=: 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # read -r var val 00:06:56.612 23:40:27 -- accel/accel.sh@21 -- # val= 00:06:56.612 23:40:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # IFS=: 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # read -r var val 00:06:56.612 23:40:27 -- accel/accel.sh@21 -- # val=software 00:06:56.612 23:40:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.612 23:40:27 -- accel/accel.sh@23 -- # accel_module=software 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # IFS=: 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # read -r var val 00:06:56.612 23:40:27 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:56.612 23:40:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # IFS=: 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # read -r var val 00:06:56.612 23:40:27 -- accel/accel.sh@21 -- # val=32 00:06:56.612 23:40:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # IFS=: 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # read -r var val 00:06:56.612 23:40:27 -- 
accel/accel.sh@21 -- # val=32 00:06:56.612 23:40:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # IFS=: 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # read -r var val 00:06:56.612 23:40:27 -- accel/accel.sh@21 -- # val=1 00:06:56.612 23:40:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # IFS=: 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # read -r var val 00:06:56.612 23:40:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:56.612 23:40:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # IFS=: 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # read -r var val 00:06:56.612 23:40:27 -- accel/accel.sh@21 -- # val=Yes 00:06:56.612 23:40:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # IFS=: 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # read -r var val 00:06:56.612 23:40:27 -- accel/accel.sh@21 -- # val= 00:06:56.612 23:40:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # IFS=: 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # read -r var val 00:06:56.612 23:40:27 -- accel/accel.sh@21 -- # val= 00:06:56.612 23:40:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # IFS=: 00:06:56.612 23:40:27 -- accel/accel.sh@20 -- # read -r var val 00:06:58.507 23:40:28 -- accel/accel.sh@21 -- # val= 00:06:58.507 23:40:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.507 23:40:28 -- accel/accel.sh@20 -- # IFS=: 00:06:58.507 23:40:28 -- accel/accel.sh@20 -- # read -r var val 00:06:58.507 23:40:28 -- accel/accel.sh@21 -- # val= 00:06:58.507 23:40:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.507 23:40:28 -- accel/accel.sh@20 -- # IFS=: 00:06:58.507 23:40:28 -- accel/accel.sh@20 -- # read -r var val 00:06:58.507 23:40:28 -- accel/accel.sh@21 -- # val= 00:06:58.507 23:40:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.507 23:40:28 -- accel/accel.sh@20 -- # IFS=: 00:06:58.507 23:40:28 -- accel/accel.sh@20 -- # read -r var val 00:06:58.507 23:40:28 -- accel/accel.sh@21 -- # val= 00:06:58.507 23:40:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.507 23:40:28 -- accel/accel.sh@20 -- # IFS=: 00:06:58.507 23:40:28 -- accel/accel.sh@20 -- # read -r var val 00:06:58.507 23:40:28 -- accel/accel.sh@21 -- # val= 00:06:58.507 23:40:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.507 23:40:28 -- accel/accel.sh@20 -- # IFS=: 00:06:58.507 23:40:28 -- accel/accel.sh@20 -- # read -r var val 00:06:58.507 23:40:28 -- accel/accel.sh@21 -- # val= 00:06:58.507 23:40:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.507 23:40:28 -- accel/accel.sh@20 -- # IFS=: 00:06:58.507 23:40:28 -- accel/accel.sh@20 -- # read -r var val 00:06:58.507 23:40:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.507 23:40:28 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:58.507 23:40:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.507 00:06:58.508 real 0m3.908s 00:06:58.508 user 0m3.470s 00:06:58.508 sys 0m0.233s 00:06:58.508 23:40:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.508 23:40:28 -- common/autotest_common.sh@10 -- # set +x 00:06:58.508 ************************************ 00:06:58.508 END TEST accel_decomp 00:06:58.508 ************************************ 00:06:58.508 23:40:28 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
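The run_test line just above is the only place the "full" decompress case differs from the plain one: it adds -o 0. In the configuration block that follows, accel_perf reports Transfer size: 111250 bytes instead of 4096, so -o 0 evidently makes the transfer size follow the input file (test/accel/bib) rather than the 4 KiB default. A comparable standalone run, assuming a built SPDK tree and leaving out the JSON config the harness feeds in over -c /dev/fd/62:

    # "full" variant: -o 0 sizes each transfer to the whole input file (111250 bytes here)
    ./build/examples/accel_perf -t 1 -w decompress -y -o 0 \
        -l ./test/accel/bib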
00:06:58.508 23:40:28 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:58.508 23:40:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.508 23:40:28 -- common/autotest_common.sh@10 -- # set +x 00:06:58.508 ************************************ 00:06:58.508 START TEST accel_decmop_full 00:06:58.508 ************************************ 00:06:58.508 23:40:28 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:58.508 23:40:28 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.508 23:40:28 -- accel/accel.sh@17 -- # local accel_module 00:06:58.508 23:40:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:58.508 23:40:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:58.508 23:40:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.508 23:40:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.508 23:40:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.508 23:40:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.508 23:40:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.508 23:40:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.508 23:40:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.508 23:40:28 -- accel/accel.sh@42 -- # jq -r . 00:06:58.508 [2024-12-13 23:40:28.847298] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:58.508 [2024-12-13 23:40:28.847395] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59509 ] 00:06:58.508 [2024-12-13 23:40:28.993243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.508 [2024-12-13 23:40:29.143840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.406 23:40:30 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:00.406 00:07:00.406 SPDK Configuration: 00:07:00.406 Core mask: 0x1 00:07:00.406 00:07:00.406 Accel Perf Configuration: 00:07:00.406 Workload Type: decompress 00:07:00.406 Transfer size: 111250 bytes 00:07:00.406 Vector count 1 00:07:00.406 Module: software 00:07:00.406 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:00.406 Queue depth: 32 00:07:00.406 Allocate depth: 32 00:07:00.406 # threads/core: 1 00:07:00.406 Run time: 1 seconds 00:07:00.406 Verify: Yes 00:07:00.406 00:07:00.406 Running for 1 seconds... 
00:07:00.406 00:07:00.406 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.406 ------------------------------------------------------------------------------------ 00:07:00.406 0,0 5312/s 219 MiB/s 0 0 00:07:00.406 ==================================================================================== 00:07:00.406 Total 5312/s 563 MiB/s 0 0' 00:07:00.406 23:40:30 -- accel/accel.sh@20 -- # IFS=: 00:07:00.406 23:40:30 -- accel/accel.sh@20 -- # read -r var val 00:07:00.406 23:40:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:00.406 23:40:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:00.406 23:40:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.406 23:40:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.406 23:40:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.406 23:40:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.406 23:40:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.407 23:40:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.407 23:40:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.407 23:40:30 -- accel/accel.sh@42 -- # jq -r . 00:07:00.407 [2024-12-13 23:40:30.795090] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:00.407 [2024-12-13 23:40:30.795187] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59538 ] 00:07:00.407 [2024-12-13 23:40:30.937365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.407 [2024-12-13 23:40:31.091293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.664 23:40:31 -- accel/accel.sh@21 -- # val= 00:07:00.664 23:40:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.664 23:40:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.664 23:40:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.664 23:40:31 -- accel/accel.sh@21 -- # val= 00:07:00.664 23:40:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.664 23:40:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.664 23:40:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.664 23:40:31 -- accel/accel.sh@21 -- # val= 00:07:00.664 23:40:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.664 23:40:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.664 23:40:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.665 23:40:31 -- accel/accel.sh@21 -- # val=0x1 00:07:00.665 23:40:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.665 23:40:31 -- accel/accel.sh@21 -- # val= 00:07:00.665 23:40:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.665 23:40:31 -- accel/accel.sh@21 -- # val= 00:07:00.665 23:40:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.665 23:40:31 -- accel/accel.sh@21 -- # val=decompress 00:07:00.665 23:40:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.665 23:40:31 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:00.665 23:40:31 -- accel/accel.sh@20 
-- # IFS=: 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.665 23:40:31 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:00.665 23:40:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.665 23:40:31 -- accel/accel.sh@21 -- # val= 00:07:00.665 23:40:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.665 23:40:31 -- accel/accel.sh@21 -- # val=software 00:07:00.665 23:40:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.665 23:40:31 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.665 23:40:31 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:00.665 23:40:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.665 23:40:31 -- accel/accel.sh@21 -- # val=32 00:07:00.665 23:40:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.665 23:40:31 -- accel/accel.sh@21 -- # val=32 00:07:00.665 23:40:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.665 23:40:31 -- accel/accel.sh@21 -- # val=1 00:07:00.665 23:40:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.665 23:40:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.665 23:40:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.665 23:40:31 -- accel/accel.sh@21 -- # val=Yes 00:07:00.665 23:40:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.665 23:40:31 -- accel/accel.sh@21 -- # val= 00:07:00.665 23:40:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.665 23:40:31 -- accel/accel.sh@21 -- # val= 00:07:00.665 23:40:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.665 23:40:31 -- accel/accel.sh@20 -- # read -r var val 00:07:02.037 23:40:32 -- accel/accel.sh@21 -- # val= 00:07:02.037 23:40:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.037 23:40:32 -- accel/accel.sh@20 -- # IFS=: 00:07:02.037 23:40:32 -- accel/accel.sh@20 -- # read -r var val 00:07:02.037 23:40:32 -- accel/accel.sh@21 -- # val= 00:07:02.037 23:40:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.037 23:40:32 -- accel/accel.sh@20 -- # IFS=: 00:07:02.037 23:40:32 -- accel/accel.sh@20 -- # read -r var val 00:07:02.037 23:40:32 -- accel/accel.sh@21 -- # val= 00:07:02.037 23:40:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.037 23:40:32 -- accel/accel.sh@20 -- # IFS=: 00:07:02.037 23:40:32 -- accel/accel.sh@20 -- # read -r var val 00:07:02.037 23:40:32 -- accel/accel.sh@21 -- # 
val= 00:07:02.037 23:40:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.037 23:40:32 -- accel/accel.sh@20 -- # IFS=: 00:07:02.037 23:40:32 -- accel/accel.sh@20 -- # read -r var val 00:07:02.037 23:40:32 -- accel/accel.sh@21 -- # val= 00:07:02.037 23:40:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.037 23:40:32 -- accel/accel.sh@20 -- # IFS=: 00:07:02.037 23:40:32 -- accel/accel.sh@20 -- # read -r var val 00:07:02.037 23:40:32 -- accel/accel.sh@21 -- # val= 00:07:02.037 23:40:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.037 23:40:32 -- accel/accel.sh@20 -- # IFS=: 00:07:02.037 23:40:32 -- accel/accel.sh@20 -- # read -r var val 00:07:02.037 23:40:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:02.037 23:40:32 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:02.037 23:40:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.037 00:07:02.037 real 0m3.886s 00:07:02.037 user 0m3.451s 00:07:02.037 sys 0m0.227s 00:07:02.037 23:40:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.037 23:40:32 -- common/autotest_common.sh@10 -- # set +x 00:07:02.037 ************************************ 00:07:02.037 END TEST accel_decmop_full 00:07:02.037 ************************************ 00:07:02.037 23:40:32 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:02.037 23:40:32 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:02.037 23:40:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.037 23:40:32 -- common/autotest_common.sh@10 -- # set +x 00:07:02.037 ************************************ 00:07:02.037 START TEST accel_decomp_mcore 00:07:02.037 ************************************ 00:07:02.037 23:40:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:02.037 23:40:32 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.037 23:40:32 -- accel/accel.sh@17 -- # local accel_module 00:07:02.037 23:40:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:02.037 23:40:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:02.037 23:40:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.037 23:40:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.037 23:40:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.037 23:40:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.037 23:40:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.037 23:40:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.037 23:40:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.037 23:40:32 -- accel/accel.sh@42 -- # jq -r . 00:07:02.294 [2024-12-13 23:40:32.777720] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
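The mcore case adds -m 0xf to accel_test, which surfaces as -c 0xf in the EAL parameters below and brings up one reactor per set bit; the log shows them starting on cores 1, 2, 3, 0, so start order is evidently not fixed. A quick way to read such a mask, as a standalone sketch:

    # cores selected by mask 0xf: bits 0-3, i.e. cores 0,1,2,3
    mask=0xf
    for c in {0..31}; do
        (( (mask >> c) & 1 )) && echo "core $c selected"
    done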
00:07:02.294 [2024-12-13 23:40:32.777826] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59579 ] 00:07:02.294 [2024-12-13 23:40:32.936617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:02.551 [2024-12-13 23:40:33.128231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.551 [2024-12-13 23:40:33.128516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.551 [2024-12-13 23:40:33.128604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.551 [2024-12-13 23:40:33.128696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.448 23:40:34 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:04.448 00:07:04.448 SPDK Configuration: 00:07:04.448 Core mask: 0xf 00:07:04.448 00:07:04.448 Accel Perf Configuration: 00:07:04.448 Workload Type: decompress 00:07:04.448 Transfer size: 4096 bytes 00:07:04.448 Vector count 1 00:07:04.448 Module: software 00:07:04.448 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:04.448 Queue depth: 32 00:07:04.448 Allocate depth: 32 00:07:04.448 # threads/core: 1 00:07:04.448 Run time: 1 seconds 00:07:04.448 Verify: Yes 00:07:04.448 00:07:04.448 Running for 1 seconds... 00:07:04.448 00:07:04.448 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.448 ------------------------------------------------------------------------------------ 00:07:04.448 0,0 52480/s 96 MiB/s 0 0 00:07:04.448 3,0 52416/s 96 MiB/s 0 0 00:07:04.448 2,0 56096/s 103 MiB/s 0 0 00:07:04.448 1,0 52576/s 96 MiB/s 0 0 00:07:04.448 ==================================================================================== 00:07:04.448 Total 213568/s 834 MiB/s 0 0' 00:07:04.448 23:40:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.448 23:40:34 -- accel/accel.sh@20 -- # read -r var val 00:07:04.448 23:40:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:04.448 23:40:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:04.448 23:40:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.448 23:40:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.448 23:40:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.448 23:40:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.448 23:40:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.448 23:40:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.449 23:40:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.449 23:40:34 -- accel/accel.sh@42 -- # jq -r . 00:07:04.449 [2024-12-13 23:40:34.944328] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
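One way to sanity-check the multi-core table above: Transfers is operations per second for each core,thread row, and bandwidth follows from transfers times transfer size. For the Total row, 213568 transfers/s * 4096 bytes is about 874.8 MB/s, i.e. 834 MiB/s, which matches the reported total:

    # total bandwidth check for the 0xf decompress run, in MiB/s
    echo $(( 213568 * 4096 / 1024 / 1024 ))    # prints 834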
00:07:04.449 [2024-12-13 23:40:34.944546] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59608 ] 00:07:04.449 [2024-12-13 23:40:35.093190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.706 [2024-12-13 23:40:35.272060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.706 [2024-12-13 23:40:35.272411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.706 [2024-12-13 23:40:35.272572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.706 [2024-12-13 23:40:35.272594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.706 23:40:35 -- accel/accel.sh@21 -- # val= 00:07:04.706 23:40:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # IFS=: 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # read -r var val 00:07:04.706 23:40:35 -- accel/accel.sh@21 -- # val= 00:07:04.706 23:40:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # IFS=: 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # read -r var val 00:07:04.706 23:40:35 -- accel/accel.sh@21 -- # val= 00:07:04.706 23:40:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # IFS=: 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # read -r var val 00:07:04.706 23:40:35 -- accel/accel.sh@21 -- # val=0xf 00:07:04.706 23:40:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # IFS=: 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # read -r var val 00:07:04.706 23:40:35 -- accel/accel.sh@21 -- # val= 00:07:04.706 23:40:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # IFS=: 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # read -r var val 00:07:04.706 23:40:35 -- accel/accel.sh@21 -- # val= 00:07:04.706 23:40:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # IFS=: 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # read -r var val 00:07:04.706 23:40:35 -- accel/accel.sh@21 -- # val=decompress 00:07:04.706 23:40:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.706 23:40:35 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # IFS=: 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # read -r var val 00:07:04.706 23:40:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.706 23:40:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # IFS=: 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # read -r var val 00:07:04.706 23:40:35 -- accel/accel.sh@21 -- # val= 00:07:04.706 23:40:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # IFS=: 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # read -r var val 00:07:04.706 23:40:35 -- accel/accel.sh@21 -- # val=software 00:07:04.706 23:40:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.706 23:40:35 -- accel/accel.sh@23 -- # accel_module=software 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # IFS=: 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # read -r var val 00:07:04.706 23:40:35 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:04.706 23:40:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # IFS=: 
00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # read -r var val 00:07:04.706 23:40:35 -- accel/accel.sh@21 -- # val=32 00:07:04.706 23:40:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # IFS=: 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # read -r var val 00:07:04.706 23:40:35 -- accel/accel.sh@21 -- # val=32 00:07:04.706 23:40:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # IFS=: 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # read -r var val 00:07:04.706 23:40:35 -- accel/accel.sh@21 -- # val=1 00:07:04.706 23:40:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # IFS=: 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # read -r var val 00:07:04.706 23:40:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:04.706 23:40:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # IFS=: 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # read -r var val 00:07:04.706 23:40:35 -- accel/accel.sh@21 -- # val=Yes 00:07:04.706 23:40:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # IFS=: 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # read -r var val 00:07:04.706 23:40:35 -- accel/accel.sh@21 -- # val= 00:07:04.706 23:40:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # IFS=: 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # read -r var val 00:07:04.706 23:40:35 -- accel/accel.sh@21 -- # val= 00:07:04.706 23:40:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # IFS=: 00:07:04.706 23:40:35 -- accel/accel.sh@20 -- # read -r var val 00:07:06.628 23:40:36 -- accel/accel.sh@21 -- # val= 00:07:06.628 23:40:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.628 23:40:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.628 23:40:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.628 23:40:36 -- accel/accel.sh@21 -- # val= 00:07:06.628 23:40:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.628 23:40:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.628 23:40:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.628 23:40:36 -- accel/accel.sh@21 -- # val= 00:07:06.628 23:40:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.628 23:40:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.628 23:40:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.628 23:40:36 -- accel/accel.sh@21 -- # val= 00:07:06.628 23:40:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.628 23:40:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.628 23:40:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.628 23:40:36 -- accel/accel.sh@21 -- # val= 00:07:06.628 23:40:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.628 23:40:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.628 23:40:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.628 23:40:36 -- accel/accel.sh@21 -- # val= 00:07:06.628 23:40:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.628 23:40:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.628 23:40:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.628 23:40:36 -- accel/accel.sh@21 -- # val= 00:07:06.628 23:40:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.628 23:40:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.628 23:40:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.628 23:40:36 -- accel/accel.sh@21 -- # val= 00:07:06.628 23:40:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.628 23:40:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.628 23:40:36 -- 
accel/accel.sh@20 -- # read -r var val 00:07:06.628 23:40:36 -- accel/accel.sh@21 -- # val= 00:07:06.628 23:40:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.628 23:40:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.628 23:40:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.628 23:40:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.628 23:40:36 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:06.628 23:40:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.628 00:07:06.628 real 0m4.170s 00:07:06.628 user 0m12.396s 00:07:06.628 sys 0m0.297s 00:07:06.628 23:40:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.628 23:40:36 -- common/autotest_common.sh@10 -- # set +x 00:07:06.628 ************************************ 00:07:06.628 END TEST accel_decomp_mcore 00:07:06.628 ************************************ 00:07:06.628 23:40:36 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:06.628 23:40:36 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:06.628 23:40:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.628 23:40:36 -- common/autotest_common.sh@10 -- # set +x 00:07:06.628 ************************************ 00:07:06.628 START TEST accel_decomp_full_mcore 00:07:06.628 ************************************ 00:07:06.628 23:40:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:06.628 23:40:36 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.628 23:40:36 -- accel/accel.sh@17 -- # local accel_module 00:07:06.628 23:40:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:06.628 23:40:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:06.628 23:40:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.628 23:40:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.628 23:40:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.628 23:40:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.628 23:40:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.628 23:40:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.628 23:40:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.628 23:40:36 -- accel/accel.sh@42 -- # jq -r . 00:07:06.628 [2024-12-13 23:40:36.992760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:06.628 [2024-12-13 23:40:36.992864] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59652 ] 00:07:06.628 [2024-12-13 23:40:37.138756] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.628 [2024-12-13 23:40:37.286201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.628 [2024-12-13 23:40:37.286642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.628 [2024-12-13 23:40:37.286793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.628 [2024-12-13 23:40:37.286796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.526 23:40:38 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:08.526 00:07:08.526 SPDK Configuration: 00:07:08.526 Core mask: 0xf 00:07:08.526 00:07:08.526 Accel Perf Configuration: 00:07:08.526 Workload Type: decompress 00:07:08.526 Transfer size: 111250 bytes 00:07:08.526 Vector count 1 00:07:08.526 Module: software 00:07:08.526 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:08.526 Queue depth: 32 00:07:08.526 Allocate depth: 32 00:07:08.526 # threads/core: 1 00:07:08.526 Run time: 1 seconds 00:07:08.526 Verify: Yes 00:07:08.526 00:07:08.526 Running for 1 seconds... 00:07:08.526 00:07:08.526 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.526 ------------------------------------------------------------------------------------ 00:07:08.526 0,0 5568/s 230 MiB/s 0 0 00:07:08.526 3,0 4288/s 177 MiB/s 0 0 00:07:08.526 2,0 5568/s 230 MiB/s 0 0 00:07:08.526 1,0 4288/s 177 MiB/s 0 0 00:07:08.526 ==================================================================================== 00:07:08.526 Total 19712/s 2091 MiB/s 0 0' 00:07:08.526 23:40:38 -- accel/accel.sh@20 -- # IFS=: 00:07:08.526 23:40:38 -- accel/accel.sh@20 -- # read -r var val 00:07:08.526 23:40:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:08.526 23:40:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:08.526 23:40:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.526 23:40:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.526 23:40:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.526 23:40:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.526 23:40:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.526 23:40:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.526 23:40:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.526 23:40:38 -- accel/accel.sh@42 -- # jq -r . 00:07:08.526 [2024-12-13 23:40:38.952200] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
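Set against the 4096-byte mcore run above (213568 transfers/s, 834 MiB/s total), this full-transfer run completes far fewer operations, 19712/s, yet moves more data: 19712 * 111250 bytes works out to roughly 2091 MiB/s, matching the Total row. The pattern of a lower op rate but much higher throughput at 111250-byte transfers suggests per-operation overhead dominates the 4 KiB case, though the log itself records only the numbers, not the cause.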
00:07:08.526 [2024-12-13 23:40:38.952309] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59681 ] 00:07:08.526 [2024-12-13 23:40:39.098788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.526 [2024-12-13 23:40:39.239603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.526 [2024-12-13 23:40:39.239816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.526 [2024-12-13 23:40:39.239921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.526 [2024-12-13 23:40:39.239924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.784 23:40:39 -- accel/accel.sh@21 -- # val= 00:07:08.784 23:40:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.784 23:40:39 -- accel/accel.sh@21 -- # val= 00:07:08.784 23:40:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.784 23:40:39 -- accel/accel.sh@21 -- # val= 00:07:08.784 23:40:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.784 23:40:39 -- accel/accel.sh@21 -- # val=0xf 00:07:08.784 23:40:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.784 23:40:39 -- accel/accel.sh@21 -- # val= 00:07:08.784 23:40:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.784 23:40:39 -- accel/accel.sh@21 -- # val= 00:07:08.784 23:40:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.784 23:40:39 -- accel/accel.sh@21 -- # val=decompress 00:07:08.784 23:40:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.784 23:40:39 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.784 23:40:39 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:08.784 23:40:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.784 23:40:39 -- accel/accel.sh@21 -- # val= 00:07:08.784 23:40:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.784 23:40:39 -- accel/accel.sh@21 -- # val=software 00:07:08.784 23:40:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.784 23:40:39 -- accel/accel.sh@23 -- # accel_module=software 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.784 23:40:39 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:08.784 23:40:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # IFS=: 
00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.784 23:40:39 -- accel/accel.sh@21 -- # val=32 00:07:08.784 23:40:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.784 23:40:39 -- accel/accel.sh@21 -- # val=32 00:07:08.784 23:40:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.784 23:40:39 -- accel/accel.sh@21 -- # val=1 00:07:08.784 23:40:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.784 23:40:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:08.784 23:40:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.784 23:40:39 -- accel/accel.sh@21 -- # val=Yes 00:07:08.784 23:40:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.784 23:40:39 -- accel/accel.sh@21 -- # val= 00:07:08.784 23:40:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.784 23:40:39 -- accel/accel.sh@21 -- # val= 00:07:08.784 23:40:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.784 23:40:39 -- accel/accel.sh@20 -- # read -r var val 00:07:10.158 23:40:40 -- accel/accel.sh@21 -- # val= 00:07:10.158 23:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.158 23:40:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.158 23:40:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.158 23:40:40 -- accel/accel.sh@21 -- # val= 00:07:10.158 23:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.158 23:40:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.158 23:40:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.158 23:40:40 -- accel/accel.sh@21 -- # val= 00:07:10.158 23:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.158 23:40:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.158 23:40:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.158 23:40:40 -- accel/accel.sh@21 -- # val= 00:07:10.158 23:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.158 23:40:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.158 23:40:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.158 23:40:40 -- accel/accel.sh@21 -- # val= 00:07:10.158 23:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.158 23:40:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.158 23:40:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.158 23:40:40 -- accel/accel.sh@21 -- # val= 00:07:10.158 23:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.158 23:40:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.158 23:40:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.158 23:40:40 -- accel/accel.sh@21 -- # val= 00:07:10.158 23:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.158 23:40:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.158 23:40:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.158 23:40:40 -- accel/accel.sh@21 -- # val= 00:07:10.158 23:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.158 23:40:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.158 23:40:40 -- 
accel/accel.sh@20 -- # read -r var val 00:07:10.158 23:40:40 -- accel/accel.sh@21 -- # val= 00:07:10.158 23:40:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.158 23:40:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.158 23:40:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.158 23:40:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:10.158 23:40:40 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:10.158 ************************************ 00:07:10.158 END TEST accel_decomp_full_mcore 00:07:10.158 ************************************ 00:07:10.158 23:40:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.158 00:07:10.158 real 0m3.919s 00:07:10.158 user 0m11.894s 00:07:10.158 sys 0m0.272s 00:07:10.158 23:40:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.158 23:40:40 -- common/autotest_common.sh@10 -- # set +x 00:07:10.417 23:40:40 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:10.417 23:40:40 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:10.417 23:40:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.417 23:40:40 -- common/autotest_common.sh@10 -- # set +x 00:07:10.417 ************************************ 00:07:10.417 START TEST accel_decomp_mthread 00:07:10.417 ************************************ 00:07:10.417 23:40:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:10.417 23:40:40 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.417 23:40:40 -- accel/accel.sh@17 -- # local accel_module 00:07:10.417 23:40:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:10.417 23:40:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:10.417 23:40:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.417 23:40:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.417 23:40:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.417 23:40:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.417 23:40:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.417 23:40:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.417 23:40:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.417 23:40:40 -- accel/accel.sh@42 -- # jq -r . 00:07:10.417 [2024-12-13 23:40:40.964509] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:10.417 [2024-12-13 23:40:40.964624] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59725 ] 00:07:10.417 [2024-12-13 23:40:41.113121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.675 [2024-12-13 23:40:41.253085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.573 23:40:42 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:12.573 00:07:12.573 SPDK Configuration: 00:07:12.573 Core mask: 0x1 00:07:12.573 00:07:12.573 Accel Perf Configuration: 00:07:12.573 Workload Type: decompress 00:07:12.573 Transfer size: 4096 bytes 00:07:12.573 Vector count 1 00:07:12.573 Module: software 00:07:12.573 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:12.573 Queue depth: 32 00:07:12.573 Allocate depth: 32 00:07:12.573 # threads/core: 2 00:07:12.573 Run time: 1 seconds 00:07:12.573 Verify: Yes 00:07:12.573 00:07:12.573 Running for 1 seconds... 00:07:12.573 00:07:12.573 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.573 ------------------------------------------------------------------------------------ 00:07:12.573 0,1 41088/s 75 MiB/s 0 0 00:07:12.573 0,0 40960/s 75 MiB/s 0 0 00:07:12.573 ==================================================================================== 00:07:12.573 Total 82048/s 320 MiB/s 0 0' 00:07:12.573 23:40:42 -- accel/accel.sh@20 -- # IFS=: 00:07:12.573 23:40:42 -- accel/accel.sh@20 -- # read -r var val 00:07:12.573 23:40:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:12.573 23:40:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.573 23:40:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:12.573 23:40:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.573 23:40:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.573 23:40:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.573 23:40:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.573 23:40:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.573 23:40:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.573 23:40:42 -- accel/accel.sh@42 -- # jq -r . 00:07:12.573 [2024-12-13 23:40:42.877277] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
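The mthread case keeps a single reactor (core mask 0x1) but passes -T 2, which shows up above as "# threads/core: 2" and as the two rows 0,0 and 0,1 in the results table. For reference, the accel_perf switches exercised across these traces, as read off the command lines and configuration blocks in this log rather than from the tool's help text:

    -t 1      run time in seconds
    -w TYPE   workload type (compress, decompress)
    -l FILE   input data file (test/accel/bib in these runs)
    -y        verify the output (Verify: Yes)
    -o 0      size each transfer to the input file (111250 bytes)
    -m 0xf    core mask, one reactor per set bit
    -T 2      worker threads per core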
00:07:12.574 [2024-12-13 23:40:42.877381] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59746 ] 00:07:12.574 [2024-12-13 23:40:43.024235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.574 [2024-12-13 23:40:43.167536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.574 23:40:43 -- accel/accel.sh@21 -- # val= 00:07:12.574 23:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.574 23:40:43 -- accel/accel.sh@20 -- # IFS=: 00:07:12.574 23:40:43 -- accel/accel.sh@20 -- # read -r var val 00:07:12.574 23:40:43 -- accel/accel.sh@21 -- # val= 00:07:12.574 23:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.574 23:40:43 -- accel/accel.sh@20 -- # IFS=: 00:07:12.574 23:40:43 -- accel/accel.sh@20 -- # read -r var val 00:07:12.574 23:40:43 -- accel/accel.sh@21 -- # val= 00:07:12.574 23:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.574 23:40:43 -- accel/accel.sh@20 -- # IFS=: 00:07:12.574 23:40:43 -- accel/accel.sh@20 -- # read -r var val 00:07:12.574 23:40:43 -- accel/accel.sh@21 -- # val=0x1 00:07:12.574 23:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.574 23:40:43 -- accel/accel.sh@20 -- # IFS=: 00:07:12.574 23:40:43 -- accel/accel.sh@20 -- # read -r var val 00:07:12.574 23:40:43 -- accel/accel.sh@21 -- # val= 00:07:12.574 23:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.574 23:40:43 -- accel/accel.sh@20 -- # IFS=: 00:07:12.574 23:40:43 -- accel/accel.sh@20 -- # read -r var val 00:07:12.574 23:40:43 -- accel/accel.sh@21 -- # val= 00:07:12.574 23:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.574 23:40:43 -- accel/accel.sh@20 -- # IFS=: 00:07:12.574 23:40:43 -- accel/accel.sh@20 -- # read -r var val 00:07:12.574 23:40:43 -- accel/accel.sh@21 -- # val=decompress 00:07:12.574 23:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.574 23:40:43 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:12.574 23:40:43 -- accel/accel.sh@20 -- # IFS=: 00:07:12.574 23:40:43 -- accel/accel.sh@20 -- # read -r var val 00:07:12.574 23:40:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.574 23:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.574 23:40:43 -- accel/accel.sh@20 -- # IFS=: 00:07:12.574 23:40:43 -- accel/accel.sh@20 -- # read -r var val 00:07:12.574 23:40:43 -- accel/accel.sh@21 -- # val= 00:07:12.574 23:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.574 23:40:43 -- accel/accel.sh@20 -- # IFS=: 00:07:12.574 23:40:43 -- accel/accel.sh@20 -- # read -r var val 00:07:12.574 23:40:43 -- accel/accel.sh@21 -- # val=software 00:07:12.574 23:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.574 23:40:43 -- accel/accel.sh@23 -- # accel_module=software 00:07:12.574 23:40:43 -- accel/accel.sh@20 -- # IFS=: 00:07:12.574 23:40:43 -- accel/accel.sh@20 -- # read -r var val 00:07:12.574 23:40:43 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:12.574 23:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.574 23:40:43 -- accel/accel.sh@20 -- # IFS=: 00:07:12.831 23:40:43 -- accel/accel.sh@20 -- # read -r var val 00:07:12.831 23:40:43 -- accel/accel.sh@21 -- # val=32 00:07:12.831 23:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.831 23:40:43 -- accel/accel.sh@20 -- # IFS=: 00:07:12.831 23:40:43 -- accel/accel.sh@20 -- # read -r var val 00:07:12.831 23:40:43 -- 
accel/accel.sh@21 -- # val=32 00:07:12.831 23:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.831 23:40:43 -- accel/accel.sh@20 -- # IFS=: 00:07:12.831 23:40:43 -- accel/accel.sh@20 -- # read -r var val 00:07:12.831 23:40:43 -- accel/accel.sh@21 -- # val=2 00:07:12.831 23:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.831 23:40:43 -- accel/accel.sh@20 -- # IFS=: 00:07:12.831 23:40:43 -- accel/accel.sh@20 -- # read -r var val 00:07:12.831 23:40:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:12.831 23:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.831 23:40:43 -- accel/accel.sh@20 -- # IFS=: 00:07:12.831 23:40:43 -- accel/accel.sh@20 -- # read -r var val 00:07:12.831 23:40:43 -- accel/accel.sh@21 -- # val=Yes 00:07:12.831 23:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.831 23:40:43 -- accel/accel.sh@20 -- # IFS=: 00:07:12.831 23:40:43 -- accel/accel.sh@20 -- # read -r var val 00:07:12.831 23:40:43 -- accel/accel.sh@21 -- # val= 00:07:12.831 23:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.831 23:40:43 -- accel/accel.sh@20 -- # IFS=: 00:07:12.831 23:40:43 -- accel/accel.sh@20 -- # read -r var val 00:07:12.831 23:40:43 -- accel/accel.sh@21 -- # val= 00:07:12.831 23:40:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.831 23:40:43 -- accel/accel.sh@20 -- # IFS=: 00:07:12.831 23:40:43 -- accel/accel.sh@20 -- # read -r var val 00:07:14.205 23:40:44 -- accel/accel.sh@21 -- # val= 00:07:14.205 23:40:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.205 23:40:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.205 23:40:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.205 23:40:44 -- accel/accel.sh@21 -- # val= 00:07:14.205 23:40:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.205 23:40:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.205 23:40:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.205 23:40:44 -- accel/accel.sh@21 -- # val= 00:07:14.205 23:40:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.205 23:40:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.205 23:40:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.205 23:40:44 -- accel/accel.sh@21 -- # val= 00:07:14.205 23:40:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.205 23:40:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.205 23:40:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.205 23:40:44 -- accel/accel.sh@21 -- # val= 00:07:14.205 23:40:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.205 23:40:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.205 23:40:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.205 23:40:44 -- accel/accel.sh@21 -- # val= 00:07:14.205 23:40:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.205 23:40:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.205 23:40:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.205 23:40:44 -- accel/accel.sh@21 -- # val= 00:07:14.205 23:40:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.205 23:40:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.205 23:40:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.205 23:40:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.205 23:40:44 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:14.205 23:40:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.205 00:07:14.205 real 0m3.847s 00:07:14.205 user 0m3.402s 00:07:14.205 sys 0m0.242s 00:07:14.205 23:40:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.205 23:40:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.205 ************************************ 00:07:14.205 END 
TEST accel_decomp_mthread 00:07:14.205 ************************************ 00:07:14.205 23:40:44 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:14.205 23:40:44 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:14.205 23:40:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.205 23:40:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.205 ************************************ 00:07:14.205 START TEST accel_deomp_full_mthread 00:07:14.205 ************************************ 00:07:14.205 23:40:44 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:14.205 23:40:44 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.205 23:40:44 -- accel/accel.sh@17 -- # local accel_module 00:07:14.205 23:40:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:14.205 23:40:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:14.205 23:40:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.205 23:40:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.205 23:40:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.205 23:40:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.205 23:40:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.205 23:40:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.205 23:40:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.205 23:40:44 -- accel/accel.sh@42 -- # jq -r . 00:07:14.205 [2024-12-13 23:40:44.843779] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:14.205 [2024-12-13 23:40:44.844039] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59787 ] 00:07:14.463 [2024-12-13 23:40:44.991961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.463 [2024-12-13 23:40:45.134861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.359 23:40:46 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:16.359 00:07:16.359 SPDK Configuration: 00:07:16.359 Core mask: 0x1 00:07:16.359 00:07:16.359 Accel Perf Configuration: 00:07:16.359 Workload Type: decompress 00:07:16.359 Transfer size: 111250 bytes 00:07:16.359 Vector count 1 00:07:16.359 Module: software 00:07:16.359 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:16.359 Queue depth: 32 00:07:16.359 Allocate depth: 32 00:07:16.359 # threads/core: 2 00:07:16.359 Run time: 1 seconds 00:07:16.359 Verify: Yes 00:07:16.359 00:07:16.359 Running for 1 seconds... 
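The run above drives accel_perf through accel.sh, and the configuration summary it prints just above (decompress workload, 111250-byte transfers, software module, queue depth 32, two threads per core) maps one-to-one onto the command line recorded in the log. A minimal sketch of the same invocation run by hand, assuming a built SPDK tree at /home/vagrant/spdk_repo/spdk and that an empty JSON config is an acceptable stand-in for the one accel.sh pipes in over /dev/fd/62:

  cd /home/vagrant/spdk_repo/spdk
  # Flags copied from the logged command line: -t 1 (run time in seconds),
  # -w decompress (workload), -l (compressed input file), -y (verify output),
  # -o 0 (transfer size; with 0 this run reports 111250-byte transfers),
  # -T 2 (threads per core).
  ./build/examples/accel_perf -c <(echo '{}') -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2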
00:07:16.359 00:07:16.359 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.359 ------------------------------------------------------------------------------------ 00:07:16.359 0,1 2880/s 118 MiB/s 0 0 00:07:16.359 0,0 2816/s 116 MiB/s 0 0 00:07:16.359 ==================================================================================== 00:07:16.359 Total 5696/s 604 MiB/s 0 0' 00:07:16.359 23:40:46 -- accel/accel.sh@20 -- # IFS=: 00:07:16.359 23:40:46 -- accel/accel.sh@20 -- # read -r var val 00:07:16.359 23:40:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:16.359 23:40:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:16.359 23:40:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.359 23:40:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.359 23:40:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.359 23:40:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.359 23:40:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.359 23:40:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.359 23:40:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.359 23:40:46 -- accel/accel.sh@42 -- # jq -r . 00:07:16.359 [2024-12-13 23:40:46.782507] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:16.359 [2024-12-13 23:40:46.782609] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59807 ] 00:07:16.359 [2024-12-13 23:40:46.929314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.359 [2024-12-13 23:40:47.074610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.617 23:40:47 -- accel/accel.sh@21 -- # val= 00:07:16.617 23:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.617 23:40:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.617 23:40:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.617 23:40:47 -- accel/accel.sh@21 -- # val= 00:07:16.617 23:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.617 23:40:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.617 23:40:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.617 23:40:47 -- accel/accel.sh@21 -- # val= 00:07:16.617 23:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.617 23:40:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.617 23:40:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.617 23:40:47 -- accel/accel.sh@21 -- # val=0x1 00:07:16.617 23:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.617 23:40:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.617 23:40:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.617 23:40:47 -- accel/accel.sh@21 -- # val= 00:07:16.617 23:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.617 23:40:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.617 23:40:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.617 23:40:47 -- accel/accel.sh@21 -- # val= 00:07:16.617 23:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.617 23:40:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.617 23:40:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.618 23:40:47 -- accel/accel.sh@21 -- # val=decompress 00:07:16.618 23:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.618 23:40:47 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.618 23:40:47 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:16.618 23:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.618 23:40:47 -- accel/accel.sh@21 -- # val= 00:07:16.618 23:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.618 23:40:47 -- accel/accel.sh@21 -- # val=software 00:07:16.618 23:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.618 23:40:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.618 23:40:47 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:16.618 23:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.618 23:40:47 -- accel/accel.sh@21 -- # val=32 00:07:16.618 23:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.618 23:40:47 -- accel/accel.sh@21 -- # val=32 00:07:16.618 23:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.618 23:40:47 -- accel/accel.sh@21 -- # val=2 00:07:16.618 23:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.618 23:40:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:16.618 23:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.618 23:40:47 -- accel/accel.sh@21 -- # val=Yes 00:07:16.618 23:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.618 23:40:47 -- accel/accel.sh@21 -- # val= 00:07:16.618 23:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.618 23:40:47 -- accel/accel.sh@21 -- # val= 00:07:16.618 23:40:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.618 23:40:47 -- accel/accel.sh@20 -- # read -r var val 00:07:17.990 23:40:48 -- accel/accel.sh@21 -- # val= 00:07:17.990 23:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.990 23:40:48 -- accel/accel.sh@20 -- # IFS=: 00:07:17.990 23:40:48 -- accel/accel.sh@20 -- # read -r var val 00:07:17.990 23:40:48 -- accel/accel.sh@21 -- # val= 00:07:17.990 23:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.990 23:40:48 -- accel/accel.sh@20 -- # IFS=: 00:07:17.990 23:40:48 -- accel/accel.sh@20 -- # read -r var val 00:07:17.990 23:40:48 -- accel/accel.sh@21 -- # val= 00:07:17.990 23:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.990 23:40:48 -- accel/accel.sh@20 -- # IFS=: 00:07:17.990 23:40:48 -- accel/accel.sh@20 -- # 
read -r var val 00:07:17.990 23:40:48 -- accel/accel.sh@21 -- # val= 00:07:17.990 23:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.990 23:40:48 -- accel/accel.sh@20 -- # IFS=: 00:07:17.990 23:40:48 -- accel/accel.sh@20 -- # read -r var val 00:07:17.990 23:40:48 -- accel/accel.sh@21 -- # val= 00:07:17.990 23:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.990 23:40:48 -- accel/accel.sh@20 -- # IFS=: 00:07:17.990 23:40:48 -- accel/accel.sh@20 -- # read -r var val 00:07:17.990 23:40:48 -- accel/accel.sh@21 -- # val= 00:07:17.990 23:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.990 23:40:48 -- accel/accel.sh@20 -- # IFS=: 00:07:17.990 23:40:48 -- accel/accel.sh@20 -- # read -r var val 00:07:17.990 23:40:48 -- accel/accel.sh@21 -- # val= 00:07:17.990 23:40:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.990 23:40:48 -- accel/accel.sh@20 -- # IFS=: 00:07:17.990 23:40:48 -- accel/accel.sh@20 -- # read -r var val 00:07:17.990 ************************************ 00:07:17.990 END TEST accel_deomp_full_mthread 00:07:17.990 ************************************ 00:07:17.990 23:40:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.990 23:40:48 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:17.990 23:40:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.990 00:07:17.990 real 0m3.874s 00:07:17.990 user 0m3.428s 00:07:17.990 sys 0m0.241s 00:07:17.990 23:40:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.990 23:40:48 -- common/autotest_common.sh@10 -- # set +x 00:07:18.266 23:40:48 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:18.266 23:40:48 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:18.266 23:40:48 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:18.266 23:40:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.266 23:40:48 -- common/autotest_common.sh@10 -- # set +x 00:07:18.266 23:40:48 -- accel/accel.sh@129 -- # build_accel_config 00:07:18.266 23:40:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.266 23:40:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.266 23:40:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.266 23:40:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.266 23:40:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.266 23:40:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.266 23:40:48 -- accel/accel.sh@42 -- # jq -r . 00:07:18.266 ************************************ 00:07:18.266 START TEST accel_dif_functional_tests 00:07:18.266 ************************************ 00:07:18.266 23:40:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:18.266 [2024-12-13 23:40:48.784046] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
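Both accel_perf and the dif app above receive their configuration as -c /dev/fd/62: the harness builds a JSON document in memory and hands it over through bash process substitution, so no temporary config file ever touches disk (the fd number simply reflects which descriptors were already open). A sketch of the pattern with a hypothetical placeholder config, assuming the dif binary accepts the usual SPDK -c JSON option:

  # <(...) expands to a /dev/fd/NN path that the application opens and
  # reads like an ordinary config file.
  accel_cfg='{"subsystems": []}'   # placeholder; the suite assembles this in build_accel_config
  /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c <(printf '%s' "$accel_cfg")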
00:07:18.266 [2024-12-13 23:40:48.784149] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59849 ] 00:07:18.266 [2024-12-13 23:40:48.926782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.524 [2024-12-13 23:40:49.076467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.524 [2024-12-13 23:40:49.076594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.524 [2024-12-13 23:40:49.076616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.782 00:07:18.782 00:07:18.782 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.782 http://cunit.sourceforge.net/ 00:07:18.782 00:07:18.782 00:07:18.782 Suite: accel_dif 00:07:18.782 Test: verify: DIF generated, GUARD check ...passed 00:07:18.782 Test: verify: DIF generated, APPTAG check ...passed 00:07:18.782 Test: verify: DIF generated, REFTAG check ...passed 00:07:18.782 Test: verify: DIF not generated, GUARD check ...passed 00:07:18.782 Test: verify: DIF not generated, APPTAG check ...passed 00:07:18.782 Test: verify: DIF not generated, REFTAG check ...passed 00:07:18.782 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:18.782 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:18.782 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:18.782 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:18.782 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:18.782 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:18.782 Test: generate copy: DIF generated, GUARD check ...passed 00:07:18.782 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:18.782 Test: generate copy: DIF generated, REFTAG check ...[2024-12-13 23:40:49.254517] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:18.782 [2024-12-13 23:40:49.254574] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:18.782 [2024-12-13 23:40:49.254615] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:18.782 [2024-12-13 23:40:49.254643] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:18.782 [2024-12-13 23:40:49.254664] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:18.782 [2024-12-13 23:40:49.254682] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:18.782 [2024-12-13 23:40:49.254736] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:18.782 [2024-12-13 23:40:49.254882] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:18.782 passed 00:07:18.782 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:18.782 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:18.782 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:18.782 Test: generate copy: iovecs-len validate ...passed 00:07:18.782 Test: generate copy: buffer alignment validate ...passed 00:07:18.782 00:07:18.782 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.782 suites 1 1 n/a 0 0 00:07:18.782 tests 20 20 20 0 0 00:07:18.782 
asserts 204 204 204 0 n/a 00:07:18.782 00:07:18.782 Elapsed time = 0.003 seconds 00:07:18.782 [2024-12-13 23:40:49.255150] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:19.347 00:07:19.347 real 0m1.152s 00:07:19.347 user 0m2.052s 00:07:19.347 sys 0m0.168s 00:07:19.347 23:40:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.347 23:40:49 -- common/autotest_common.sh@10 -- # set +x 00:07:19.347 ************************************ 00:07:19.347 END TEST accel_dif_functional_tests 00:07:19.347 ************************************ 00:07:19.347 00:07:19.347 real 1m25.837s 00:07:19.347 user 1m33.663s 00:07:19.347 sys 0m6.335s 00:07:19.347 ************************************ 00:07:19.347 END TEST accel 00:07:19.347 ************************************ 00:07:19.347 23:40:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.347 23:40:49 -- common/autotest_common.sh@10 -- # set +x 00:07:19.347 23:40:49 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:19.347 23:40:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:19.347 23:40:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.347 23:40:49 -- common/autotest_common.sh@10 -- # set +x 00:07:19.347 ************************************ 00:07:19.347 START TEST accel_rpc 00:07:19.347 ************************************ 00:07:19.347 23:40:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:19.347 * Looking for test storage... 00:07:19.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:19.347 23:40:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:19.347 23:40:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:19.347 23:40:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:19.605 23:40:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:19.605 23:40:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:19.605 23:40:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:19.605 23:40:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:19.605 23:40:50 -- scripts/common.sh@335 -- # IFS=.-: 00:07:19.605 23:40:50 -- scripts/common.sh@335 -- # read -ra ver1 00:07:19.605 23:40:50 -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.605 23:40:50 -- scripts/common.sh@336 -- # read -ra ver2 00:07:19.605 23:40:50 -- scripts/common.sh@337 -- # local 'op=<' 00:07:19.605 23:40:50 -- scripts/common.sh@339 -- # ver1_l=2 00:07:19.605 23:40:50 -- scripts/common.sh@340 -- # ver2_l=1 00:07:19.605 23:40:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:19.605 23:40:50 -- scripts/common.sh@343 -- # case "$op" in 00:07:19.605 23:40:50 -- scripts/common.sh@344 -- # : 1 00:07:19.605 23:40:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:19.605 23:40:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.605 23:40:50 -- scripts/common.sh@364 -- # decimal 1 00:07:19.605 23:40:50 -- scripts/common.sh@352 -- # local d=1 00:07:19.605 23:40:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.605 23:40:50 -- scripts/common.sh@354 -- # echo 1 00:07:19.605 23:40:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:19.605 23:40:50 -- scripts/common.sh@365 -- # decimal 2 00:07:19.605 23:40:50 -- scripts/common.sh@352 -- # local d=2 00:07:19.605 23:40:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.605 23:40:50 -- scripts/common.sh@354 -- # echo 2 00:07:19.605 23:40:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:19.605 23:40:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:19.605 23:40:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:19.605 23:40:50 -- scripts/common.sh@367 -- # return 0 00:07:19.605 23:40:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.605 23:40:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:19.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.605 --rc genhtml_branch_coverage=1 00:07:19.605 --rc genhtml_function_coverage=1 00:07:19.605 --rc genhtml_legend=1 00:07:19.605 --rc geninfo_all_blocks=1 00:07:19.605 --rc geninfo_unexecuted_blocks=1 00:07:19.605 00:07:19.605 ' 00:07:19.605 23:40:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:19.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.605 --rc genhtml_branch_coverage=1 00:07:19.605 --rc genhtml_function_coverage=1 00:07:19.605 --rc genhtml_legend=1 00:07:19.605 --rc geninfo_all_blocks=1 00:07:19.605 --rc geninfo_unexecuted_blocks=1 00:07:19.605 00:07:19.605 ' 00:07:19.605 23:40:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:19.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.605 --rc genhtml_branch_coverage=1 00:07:19.605 --rc genhtml_function_coverage=1 00:07:19.605 --rc genhtml_legend=1 00:07:19.605 --rc geninfo_all_blocks=1 00:07:19.605 --rc geninfo_unexecuted_blocks=1 00:07:19.605 00:07:19.605 ' 00:07:19.605 23:40:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:19.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.605 --rc genhtml_branch_coverage=1 00:07:19.605 --rc genhtml_function_coverage=1 00:07:19.605 --rc genhtml_legend=1 00:07:19.605 --rc geninfo_all_blocks=1 00:07:19.605 --rc geninfo_unexecuted_blocks=1 00:07:19.605 00:07:19.605 ' 00:07:19.605 23:40:50 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:19.605 23:40:50 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=59933 00:07:19.605 23:40:50 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:19.605 23:40:50 -- accel/accel_rpc.sh@15 -- # waitforlisten 59933 00:07:19.605 23:40:50 -- common/autotest_common.sh@829 -- # '[' -z 59933 ']' 00:07:19.605 23:40:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.605 23:40:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.605 23:40:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
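The accel_rpc test starts the target with --wait-for-rpc, which brings the JSON-RPC server up while deferring subsystem initialization, and waitforlisten then polls /var/tmp/spdk.sock until it answers. The same sequence by hand, as a sketch assuming the default socket path:

  cd /home/vagrant/spdk_repo/spdk
  ./build/bin/spdk_tgt --wait-for-rpc &   # RPC server up, framework init deferred
  # Poll until the socket accepts requests; rpc_get_methods is always available.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  ./scripts/rpc.py framework_start_init   # complete initialization when ready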
00:07:19.605 23:40:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.605 23:40:50 -- common/autotest_common.sh@10 -- # set +x 00:07:19.605 [2024-12-13 23:40:50.161220] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:19.605 [2024-12-13 23:40:50.161465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59933 ] 00:07:19.605 [2024-12-13 23:40:50.307777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.863 [2024-12-13 23:40:50.454836] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:19.863 [2024-12-13 23:40:50.454995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.429 23:40:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.429 23:40:50 -- common/autotest_common.sh@862 -- # return 0 00:07:20.429 23:40:50 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:20.429 23:40:50 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:20.429 23:40:50 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:20.429 23:40:50 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:20.429 23:40:50 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:20.429 23:40:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:20.429 23:40:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.429 23:40:50 -- common/autotest_common.sh@10 -- # set +x 00:07:20.429 ************************************ 00:07:20.429 START TEST accel_assign_opcode 00:07:20.429 ************************************ 00:07:20.429 23:40:50 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:20.429 23:40:50 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:20.429 23:40:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.429 23:40:50 -- common/autotest_common.sh@10 -- # set +x 00:07:20.429 [2024-12-13 23:40:50.987654] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:20.429 23:40:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.429 23:40:50 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:20.429 23:40:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.429 23:40:50 -- common/autotest_common.sh@10 -- # set +x 00:07:20.429 [2024-12-13 23:40:50.995628] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:20.429 23:40:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.429 23:40:50 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:20.429 23:40:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.429 23:40:50 -- common/autotest_common.sh@10 -- # set +x 00:07:20.994 23:40:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.994 23:40:51 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:20.994 23:40:51 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:20.994 23:40:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.994 23:40:51 -- accel/accel_rpc.sh@42 -- # grep software 00:07:20.994 23:40:51 -- common/autotest_common.sh@10 -- # set +x 00:07:20.994 23:40:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.994 software 00:07:20.994 
************************************ 00:07:20.994 END TEST accel_assign_opcode 00:07:20.994 ************************************ 00:07:20.994 00:07:20.994 real 0m0.481s 00:07:20.994 user 0m0.031s 00:07:20.994 sys 0m0.011s 00:07:20.994 23:40:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.994 23:40:51 -- common/autotest_common.sh@10 -- # set +x 00:07:20.994 23:40:51 -- accel/accel_rpc.sh@55 -- # killprocess 59933 00:07:20.994 23:40:51 -- common/autotest_common.sh@936 -- # '[' -z 59933 ']' 00:07:20.994 23:40:51 -- common/autotest_common.sh@940 -- # kill -0 59933 00:07:20.994 23:40:51 -- common/autotest_common.sh@941 -- # uname 00:07:20.994 23:40:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:20.994 23:40:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59933 00:07:20.994 killing process with pid 59933 00:07:20.994 23:40:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:20.994 23:40:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:20.994 23:40:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59933' 00:07:20.994 23:40:51 -- common/autotest_common.sh@955 -- # kill 59933 00:07:20.994 23:40:51 -- common/autotest_common.sh@960 -- # wait 59933 00:07:22.370 00:07:22.370 real 0m2.780s 00:07:22.370 user 0m2.747s 00:07:22.370 sys 0m0.388s 00:07:22.370 23:40:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.370 23:40:52 -- common/autotest_common.sh@10 -- # set +x 00:07:22.370 ************************************ 00:07:22.370 END TEST accel_rpc 00:07:22.370 ************************************ 00:07:22.370 23:40:52 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:22.370 23:40:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:22.370 23:40:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.370 23:40:52 -- common/autotest_common.sh@10 -- # set +x 00:07:22.370 ************************************ 00:07:22.370 START TEST app_cmdline 00:07:22.370 ************************************ 00:07:22.370 23:40:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:22.370 * Looking for test storage... 
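The accel_assign_opcode test that just finished pins the copy opcode to a module while the target is still in the pre-init state, then initializes and checks the mapping: accel_assign_opc records the assignment, framework_start_init applies it, and accel_get_opc_assignments reports the result (the lone "software" above is the jq/grep output confirming it). The same flow against a target started with --wait-for-rpc, as sketched earlier:

  ./scripts/rpc.py accel_assign_opc -o copy -m software
  ./scripts/rpc.py framework_start_init
  # Prints the module now owning the copy opcode; expected output: software
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy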
00:07:22.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:22.370 23:40:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:22.370 23:40:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:22.370 23:40:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:22.370 23:40:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:22.370 23:40:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:22.370 23:40:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:22.370 23:40:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:22.370 23:40:52 -- scripts/common.sh@335 -- # IFS=.-: 00:07:22.370 23:40:52 -- scripts/common.sh@335 -- # read -ra ver1 00:07:22.370 23:40:52 -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.370 23:40:52 -- scripts/common.sh@336 -- # read -ra ver2 00:07:22.370 23:40:52 -- scripts/common.sh@337 -- # local 'op=<' 00:07:22.370 23:40:52 -- scripts/common.sh@339 -- # ver1_l=2 00:07:22.370 23:40:52 -- scripts/common.sh@340 -- # ver2_l=1 00:07:22.370 23:40:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:22.370 23:40:52 -- scripts/common.sh@343 -- # case "$op" in 00:07:22.370 23:40:52 -- scripts/common.sh@344 -- # : 1 00:07:22.370 23:40:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:22.370 23:40:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:22.370 23:40:52 -- scripts/common.sh@364 -- # decimal 1 00:07:22.370 23:40:52 -- scripts/common.sh@352 -- # local d=1 00:07:22.370 23:40:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.370 23:40:52 -- scripts/common.sh@354 -- # echo 1 00:07:22.370 23:40:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:22.370 23:40:52 -- scripts/common.sh@365 -- # decimal 2 00:07:22.370 23:40:52 -- scripts/common.sh@352 -- # local d=2 00:07:22.370 23:40:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.370 23:40:52 -- scripts/common.sh@354 -- # echo 2 00:07:22.370 23:40:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:22.370 23:40:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:22.370 23:40:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:22.370 23:40:52 -- scripts/common.sh@367 -- # return 0 00:07:22.370 23:40:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.370 23:40:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:22.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.370 --rc genhtml_branch_coverage=1 00:07:22.370 --rc genhtml_function_coverage=1 00:07:22.370 --rc genhtml_legend=1 00:07:22.370 --rc geninfo_all_blocks=1 00:07:22.370 --rc geninfo_unexecuted_blocks=1 00:07:22.370 00:07:22.370 ' 00:07:22.370 23:40:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:22.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.370 --rc genhtml_branch_coverage=1 00:07:22.370 --rc genhtml_function_coverage=1 00:07:22.370 --rc genhtml_legend=1 00:07:22.370 --rc geninfo_all_blocks=1 00:07:22.370 --rc geninfo_unexecuted_blocks=1 00:07:22.370 00:07:22.370 ' 00:07:22.370 23:40:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:22.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.370 --rc genhtml_branch_coverage=1 00:07:22.370 --rc genhtml_function_coverage=1 00:07:22.370 --rc genhtml_legend=1 00:07:22.370 --rc geninfo_all_blocks=1 00:07:22.370 --rc geninfo_unexecuted_blocks=1 00:07:22.370 00:07:22.370 ' 00:07:22.370 23:40:52 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:22.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.370 --rc genhtml_branch_coverage=1 00:07:22.370 --rc genhtml_function_coverage=1 00:07:22.370 --rc genhtml_legend=1 00:07:22.370 --rc geninfo_all_blocks=1 00:07:22.370 --rc geninfo_unexecuted_blocks=1 00:07:22.370 00:07:22.370 ' 00:07:22.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.370 23:40:52 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:22.370 23:40:52 -- app/cmdline.sh@17 -- # spdk_tgt_pid=60045 00:07:22.370 23:40:52 -- app/cmdline.sh@18 -- # waitforlisten 60045 00:07:22.370 23:40:52 -- common/autotest_common.sh@829 -- # '[' -z 60045 ']' 00:07:22.370 23:40:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.370 23:40:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.370 23:40:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.370 23:40:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.370 23:40:52 -- common/autotest_common.sh@10 -- # set +x 00:07:22.370 23:40:52 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:22.370 [2024-12-13 23:40:53.002034] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:22.370 [2024-12-13 23:40:53.002146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60045 ] 00:07:22.629 [2024-12-13 23:40:53.152979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.629 [2024-12-13 23:40:53.319215] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:22.629 [2024-12-13 23:40:53.319570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.194 23:40:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.194 23:40:53 -- common/autotest_common.sh@862 -- # return 0 00:07:23.194 23:40:53 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:23.452 { 00:07:23.452 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:23.452 "fields": { 00:07:23.452 "major": 24, 00:07:23.452 "minor": 1, 00:07:23.452 "patch": 1, 00:07:23.452 "suffix": "-pre", 00:07:23.452 "commit": "c13c99a5e" 00:07:23.452 } 00:07:23.452 } 00:07:23.452 23:40:54 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:23.452 23:40:54 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:23.452 23:40:54 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:23.452 23:40:54 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:23.452 23:40:54 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:23.452 23:40:54 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:23.452 23:40:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.452 23:40:54 -- app/cmdline.sh@26 -- # sort 00:07:23.452 23:40:54 -- common/autotest_common.sh@10 -- # set +x 00:07:23.452 23:40:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.452 23:40:54 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:23.452 23:40:54 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:23.452 23:40:54 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:23.452 23:40:54 -- common/autotest_common.sh@650 -- # local es=0 00:07:23.452 23:40:54 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:23.452 23:40:54 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:23.452 23:40:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.452 23:40:54 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:23.452 23:40:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.452 23:40:54 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:23.452 23:40:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.452 23:40:54 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:23.452 23:40:54 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:23.452 23:40:54 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:23.710 request: 00:07:23.710 { 00:07:23.710 "method": "env_dpdk_get_mem_stats", 00:07:23.710 "req_id": 1 00:07:23.710 } 00:07:23.710 Got JSON-RPC error response 00:07:23.710 response: 00:07:23.710 { 00:07:23.710 "code": -32601, 00:07:23.710 "message": "Method not found" 00:07:23.710 } 00:07:23.710 23:40:54 -- common/autotest_common.sh@653 -- # es=1 00:07:23.710 23:40:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:23.710 23:40:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:23.710 23:40:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:23.710 23:40:54 -- app/cmdline.sh@1 -- # killprocess 60045 00:07:23.710 23:40:54 -- common/autotest_common.sh@936 -- # '[' -z 60045 ']' 00:07:23.710 23:40:54 -- common/autotest_common.sh@940 -- # kill -0 60045 00:07:23.710 23:40:54 -- common/autotest_common.sh@941 -- # uname 00:07:23.710 23:40:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:23.710 23:40:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60045 00:07:23.710 killing process with pid 60045 00:07:23.710 23:40:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:23.710 23:40:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:23.710 23:40:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60045' 00:07:23.710 23:40:54 -- common/autotest_common.sh@955 -- # kill 60045 00:07:23.710 23:40:54 -- common/autotest_common.sh@960 -- # wait 60045 00:07:25.083 ************************************ 00:07:25.083 END TEST app_cmdline 00:07:25.083 ************************************ 00:07:25.083 00:07:25.083 real 0m2.681s 00:07:25.083 user 0m2.957s 00:07:25.083 sys 0m0.412s 00:07:25.083 23:40:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.083 23:40:55 -- common/autotest_common.sh@10 -- # set +x 00:07:25.083 23:40:55 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:25.083 23:40:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:25.083 23:40:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.083 23:40:55 -- common/autotest_common.sh@10 -- # set +x 00:07:25.083 
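app_cmdline above launches the target with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served; the env_dpdk_get_mem_stats call is rejected with JSON-RPC error -32601 ("Method not found"), which is precisely the request/response pair captured above. A sketch reproducing the check, assuming the same allow-list:

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  # Allowed: returns the version object shown in the log.
  ./scripts/rpc.py spdk_get_version
  # Filtered: fails with code -32601, Method not found.
  ./scripts/rpc.py env_dpdk_get_mem_stats || echo 'rejected, as the test expects'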
************************************ 00:07:25.083 START TEST version 00:07:25.083 ************************************ 00:07:25.083 23:40:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:25.083 * Looking for test storage... 00:07:25.083 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:25.083 23:40:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:25.083 23:40:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:25.083 23:40:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:25.083 23:40:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:25.083 23:40:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:25.083 23:40:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:25.083 23:40:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:25.083 23:40:55 -- scripts/common.sh@335 -- # IFS=.-: 00:07:25.083 23:40:55 -- scripts/common.sh@335 -- # read -ra ver1 00:07:25.083 23:40:55 -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.083 23:40:55 -- scripts/common.sh@336 -- # read -ra ver2 00:07:25.083 23:40:55 -- scripts/common.sh@337 -- # local 'op=<' 00:07:25.083 23:40:55 -- scripts/common.sh@339 -- # ver1_l=2 00:07:25.083 23:40:55 -- scripts/common.sh@340 -- # ver2_l=1 00:07:25.083 23:40:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:25.083 23:40:55 -- scripts/common.sh@343 -- # case "$op" in 00:07:25.083 23:40:55 -- scripts/common.sh@344 -- # : 1 00:07:25.083 23:40:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:25.083 23:40:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:25.083 23:40:55 -- scripts/common.sh@364 -- # decimal 1 00:07:25.083 23:40:55 -- scripts/common.sh@352 -- # local d=1 00:07:25.083 23:40:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.083 23:40:55 -- scripts/common.sh@354 -- # echo 1 00:07:25.083 23:40:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:25.083 23:40:55 -- scripts/common.sh@365 -- # decimal 2 00:07:25.083 23:40:55 -- scripts/common.sh@352 -- # local d=2 00:07:25.083 23:40:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.083 23:40:55 -- scripts/common.sh@354 -- # echo 2 00:07:25.084 23:40:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:25.084 23:40:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:25.084 23:40:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:25.084 23:40:55 -- scripts/common.sh@367 -- # return 0 00:07:25.084 23:40:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.084 23:40:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:25.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.084 --rc genhtml_branch_coverage=1 00:07:25.084 --rc genhtml_function_coverage=1 00:07:25.084 --rc genhtml_legend=1 00:07:25.084 --rc geninfo_all_blocks=1 00:07:25.084 --rc geninfo_unexecuted_blocks=1 00:07:25.084 00:07:25.084 ' 00:07:25.084 23:40:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:25.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.084 --rc genhtml_branch_coverage=1 00:07:25.084 --rc genhtml_function_coverage=1 00:07:25.084 --rc genhtml_legend=1 00:07:25.084 --rc geninfo_all_blocks=1 00:07:25.084 --rc geninfo_unexecuted_blocks=1 00:07:25.084 00:07:25.084 ' 00:07:25.084 23:40:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:25.084 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:25.084 --rc genhtml_branch_coverage=1 00:07:25.084 --rc genhtml_function_coverage=1 00:07:25.084 --rc genhtml_legend=1 00:07:25.084 --rc geninfo_all_blocks=1 00:07:25.084 --rc geninfo_unexecuted_blocks=1 00:07:25.084 00:07:25.084 ' 00:07:25.084 23:40:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:25.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.084 --rc genhtml_branch_coverage=1 00:07:25.084 --rc genhtml_function_coverage=1 00:07:25.084 --rc genhtml_legend=1 00:07:25.084 --rc geninfo_all_blocks=1 00:07:25.084 --rc geninfo_unexecuted_blocks=1 00:07:25.084 00:07:25.084 ' 00:07:25.084 23:40:55 -- app/version.sh@17 -- # get_header_version major 00:07:25.084 23:40:55 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.084 23:40:55 -- app/version.sh@14 -- # cut -f2 00:07:25.084 23:40:55 -- app/version.sh@14 -- # tr -d '"' 00:07:25.084 23:40:55 -- app/version.sh@17 -- # major=24 00:07:25.084 23:40:55 -- app/version.sh@18 -- # get_header_version minor 00:07:25.084 23:40:55 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.084 23:40:55 -- app/version.sh@14 -- # tr -d '"' 00:07:25.084 23:40:55 -- app/version.sh@14 -- # cut -f2 00:07:25.084 23:40:55 -- app/version.sh@18 -- # minor=1 00:07:25.084 23:40:55 -- app/version.sh@19 -- # get_header_version patch 00:07:25.084 23:40:55 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.084 23:40:55 -- app/version.sh@14 -- # cut -f2 00:07:25.084 23:40:55 -- app/version.sh@14 -- # tr -d '"' 00:07:25.084 23:40:55 -- app/version.sh@19 -- # patch=1 00:07:25.084 23:40:55 -- app/version.sh@20 -- # get_header_version suffix 00:07:25.084 23:40:55 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.084 23:40:55 -- app/version.sh@14 -- # cut -f2 00:07:25.084 23:40:55 -- app/version.sh@14 -- # tr -d '"' 00:07:25.084 23:40:55 -- app/version.sh@20 -- # suffix=-pre 00:07:25.084 23:40:55 -- app/version.sh@22 -- # version=24.1 00:07:25.084 23:40:55 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:25.084 23:40:55 -- app/version.sh@25 -- # version=24.1.1 00:07:25.084 23:40:55 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:25.084 23:40:55 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:25.084 23:40:55 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:25.084 23:40:55 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:25.084 23:40:55 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:25.084 ************************************ 00:07:25.084 END TEST version 00:07:25.084 ************************************ 00:07:25.084 00:07:25.084 real 0m0.190s 00:07:25.084 user 0m0.118s 00:07:25.084 sys 0m0.100s 00:07:25.084 23:40:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.084 23:40:55 -- common/autotest_common.sh@10 -- # set +x 00:07:25.084 23:40:55 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:25.084 23:40:55 -- spdk/autotest.sh@191 -- # uname -s 00:07:25.084 23:40:55 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
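get_header_version above pulls each component out of include/spdk/version.h with a grep/cut/tr pipeline: grep -E selects the #define line, cut -f2 takes the tab-separated value, and tr -d '"' strips the quotes, giving major 24, minor 1, patch 1, suffix -pre for this tree (version.sh then renders that as 24.1.1rc0 to match python's spdk.__version__). The pipeline stands alone as:

  hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  echo "${major}.${minor}.${patch}${suffix}"   # 24.1.1-pre for this checkout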
00:07:25.084 23:40:55 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:25.084 23:40:55 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:25.084 23:40:55 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:07:25.084 23:40:55 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:25.084 23:40:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:25.084 23:40:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.084 23:40:55 -- common/autotest_common.sh@10 -- # set +x 00:07:25.084 ************************************ 00:07:25.084 START TEST blockdev_nvme 00:07:25.084 ************************************ 00:07:25.084 23:40:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:25.084 * Looking for test storage... 00:07:25.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:25.084 23:40:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:25.084 23:40:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:25.084 23:40:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:25.341 23:40:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:25.341 23:40:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:25.341 23:40:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:25.342 23:40:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:25.342 23:40:55 -- scripts/common.sh@335 -- # IFS=.-: 00:07:25.342 23:40:55 -- scripts/common.sh@335 -- # read -ra ver1 00:07:25.342 23:40:55 -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.342 23:40:55 -- scripts/common.sh@336 -- # read -ra ver2 00:07:25.342 23:40:55 -- scripts/common.sh@337 -- # local 'op=<' 00:07:25.342 23:40:55 -- scripts/common.sh@339 -- # ver1_l=2 00:07:25.342 23:40:55 -- scripts/common.sh@340 -- # ver2_l=1 00:07:25.342 23:40:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:25.342 23:40:55 -- scripts/common.sh@343 -- # case "$op" in 00:07:25.342 23:40:55 -- scripts/common.sh@344 -- # : 1 00:07:25.342 23:40:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:25.342 23:40:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:25.342 23:40:55 -- scripts/common.sh@364 -- # decimal 1 00:07:25.342 23:40:55 -- scripts/common.sh@352 -- # local d=1 00:07:25.342 23:40:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.342 23:40:55 -- scripts/common.sh@354 -- # echo 1 00:07:25.342 23:40:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:25.342 23:40:55 -- scripts/common.sh@365 -- # decimal 2 00:07:25.342 23:40:55 -- scripts/common.sh@352 -- # local d=2 00:07:25.342 23:40:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.342 23:40:55 -- scripts/common.sh@354 -- # echo 2 00:07:25.342 23:40:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:25.342 23:40:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:25.342 23:40:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:25.342 23:40:55 -- scripts/common.sh@367 -- # return 0 00:07:25.342 23:40:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.342 23:40:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:25.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.342 --rc genhtml_branch_coverage=1 00:07:25.342 --rc genhtml_function_coverage=1 00:07:25.342 --rc genhtml_legend=1 00:07:25.342 --rc geninfo_all_blocks=1 00:07:25.342 --rc geninfo_unexecuted_blocks=1 00:07:25.342 00:07:25.342 ' 00:07:25.342 23:40:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:25.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.342 --rc genhtml_branch_coverage=1 00:07:25.342 --rc genhtml_function_coverage=1 00:07:25.342 --rc genhtml_legend=1 00:07:25.342 --rc geninfo_all_blocks=1 00:07:25.342 --rc geninfo_unexecuted_blocks=1 00:07:25.342 00:07:25.342 ' 00:07:25.342 23:40:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:25.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.342 --rc genhtml_branch_coverage=1 00:07:25.342 --rc genhtml_function_coverage=1 00:07:25.342 --rc genhtml_legend=1 00:07:25.342 --rc geninfo_all_blocks=1 00:07:25.342 --rc geninfo_unexecuted_blocks=1 00:07:25.342 00:07:25.342 ' 00:07:25.342 23:40:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:25.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.342 --rc genhtml_branch_coverage=1 00:07:25.342 --rc genhtml_function_coverage=1 00:07:25.342 --rc genhtml_legend=1 00:07:25.342 --rc geninfo_all_blocks=1 00:07:25.342 --rc geninfo_unexecuted_blocks=1 00:07:25.342 00:07:25.342 ' 00:07:25.342 23:40:55 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:25.342 23:40:55 -- bdev/nbd_common.sh@6 -- # set -e 00:07:25.342 23:40:55 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:25.342 23:40:55 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:25.342 23:40:55 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:25.342 23:40:55 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:25.342 23:40:55 -- bdev/blockdev.sh@18 -- # : 00:07:25.342 23:40:55 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:07:25.342 23:40:55 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:07:25.342 23:40:55 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:07:25.342 23:40:55 -- bdev/blockdev.sh@672 -- # uname -s 00:07:25.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:25.342 23:40:55 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:07:25.342 23:40:55 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:07:25.342 23:40:55 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:07:25.342 23:40:55 -- bdev/blockdev.sh@681 -- # crypto_device= 00:07:25.342 23:40:55 -- bdev/blockdev.sh@682 -- # dek= 00:07:25.342 23:40:55 -- bdev/blockdev.sh@683 -- # env_ctx= 00:07:25.342 23:40:55 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:07:25.342 23:40:55 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:07:25.342 23:40:55 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:07:25.342 23:40:55 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:07:25.342 23:40:55 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:07:25.342 23:40:55 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=60210 00:07:25.342 23:40:55 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:25.342 23:40:55 -- bdev/blockdev.sh@47 -- # waitforlisten 60210 00:07:25.342 23:40:55 -- common/autotest_common.sh@829 -- # '[' -z 60210 ']' 00:07:25.342 23:40:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.342 23:40:55 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:25.342 23:40:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.342 23:40:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.342 23:40:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.342 23:40:55 -- common/autotest_common.sh@10 -- # set +x 00:07:25.342 [2024-12-13 23:40:55.959260] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:25.342 [2024-12-13 23:40:55.959369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60210 ] 00:07:25.598 [2024-12-13 23:40:56.106622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.598 [2024-12-13 23:40:56.288341] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:25.598 [2024-12-13 23:40:56.288579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.970 23:40:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.970 23:40:57 -- common/autotest_common.sh@862 -- # return 0 00:07:26.970 23:40:57 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:07:26.970 23:40:57 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:07:26.970 23:40:57 -- bdev/blockdev.sh@79 -- # local json 00:07:26.970 23:40:57 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:07:26.970 23:40:57 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:26.970 23:40:57 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:07:26.970 23:40:57 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.970 23:40:57 -- common/autotest_common.sh@10 -- # set +x 00:07:27.227 23:40:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.227 23:40:57 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:07:27.227 23:40:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.227 23:40:57 -- common/autotest_common.sh@10 -- # set +x 00:07:27.228 23:40:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.228 23:40:57 -- bdev/blockdev.sh@738 -- # cat 00:07:27.228 23:40:57 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:07:27.228 23:40:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.228 23:40:57 -- common/autotest_common.sh@10 -- # set +x 00:07:27.228 23:40:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.228 23:40:57 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:07:27.228 23:40:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.228 23:40:57 -- common/autotest_common.sh@10 -- # set +x 00:07:27.228 23:40:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.228 23:40:57 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:27.228 23:40:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.228 23:40:57 -- common/autotest_common.sh@10 -- # set +x 00:07:27.228 23:40:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.228 23:40:57 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:07:27.228 23:40:57 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:07:27.228 23:40:57 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:07:27.228 23:40:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.228 23:40:57 -- common/autotest_common.sh@10 -- # set +x 00:07:27.228 23:40:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.228 23:40:57 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:07:27.228 23:40:57 -- bdev/blockdev.sh@747 -- # jq -r .name 00:07:27.228 23:40:57 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "16f56560-6853-4d40-ae8d-98ad903be5da"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "16f56560-6853-4d40-ae8d-98ad903be5da",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "cf265526-3ddf-4c4a-934d-ef8973745f53"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "cf265526-3ddf-4c4a-934d-ef8973745f53",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "fd7458ff-c3a1-4ce4-be57-2ec202542288"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fd7458ff-c3a1-4ce4-be57-2ec202542288",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "5ededc9f-ddf2-4a28-abb3-430ffcb0616c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5ededc9f-ddf2-4a28-abb3-430ffcb0616c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 
1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "52727634-e12e-4016-aa4f-b42e47f71fc9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "52727634-e12e-4016-aa4f-b42e47f71fc9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "e897be55-b8c6-41b1-a066-827b9bed14aa"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e897be55-b8c6-41b1-a066-827b9bed14aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:27.228 23:40:57 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:07:27.228 23:40:57 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:07:27.228 23:40:57 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:07:27.228 23:40:57 -- bdev/blockdev.sh@752 -- # killprocess 60210 00:07:27.228 23:40:57 -- common/autotest_common.sh@936 -- # '[' -z 60210 ']' 00:07:27.228 23:40:57 -- common/autotest_common.sh@940 -- # kill -0 60210 00:07:27.228 23:40:57 -- common/autotest_common.sh@941 -- # uname 00:07:27.228 23:40:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:27.228 23:40:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60210 
00:07:27.228 23:40:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:27.228 killing process with pid 60210 00:07:27.228 23:40:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:27.228 23:40:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60210' 00:07:27.228 23:40:57 -- common/autotest_common.sh@955 -- # kill 60210 00:07:27.228 23:40:57 -- common/autotest_common.sh@960 -- # wait 60210 00:07:28.600 23:40:59 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:28.600 23:40:59 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:28.600 23:40:59 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:28.600 23:40:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.600 23:40:59 -- common/autotest_common.sh@10 -- # set +x 00:07:28.600 ************************************ 00:07:28.600 START TEST bdev_hello_world 00:07:28.600 ************************************ 00:07:28.600 23:40:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:28.859 [2024-12-13 23:40:59.376764] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:28.859 [2024-12-13 23:40:59.376850] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60302 ] 00:07:28.859 [2024-12-13 23:40:59.508036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.117 [2024-12-13 23:40:59.652406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.681 [2024-12-13 23:41:00.115992] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:29.681 [2024-12-13 23:41:00.116039] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:29.681 [2024-12-13 23:41:00.116056] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:29.681 [2024-12-13 23:41:00.118042] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:29.681 [2024-12-13 23:41:00.118618] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:29.681 [2024-12-13 23:41:00.118643] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:29.681 [2024-12-13 23:41:00.118773] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
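[editor's note] The bdev_hello_world run above drives build/examples/hello_bdev, which opens the named bdev, gets an I/O channel, writes "Hello World!", and reads it back (the "Read string from bdev" NOTICE). A minimal standalone reproduction; bdev.json here is a hypothetical file name standing in for the test/bdev/bdev.json the harness generated earlier:

    # One attached controller is enough for the hello-world round trip.
    cat > bdev.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "PCIe", "traddr": "0000:00:06.0" } }
    ] } ] }
    EOF
    sudo ./build/examples/hello_bdev --json bdev.json -b Nvme0n1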
00:07:29.681 00:07:29.681 [2024-12-13 23:41:00.118794] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:30.271 00:07:30.271 real 0m1.426s 00:07:30.271 user 0m1.170s 00:07:30.271 sys 0m0.151s 00:07:30.271 23:41:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:30.271 23:41:00 -- common/autotest_common.sh@10 -- # set +x 00:07:30.271 ************************************ 00:07:30.271 END TEST bdev_hello_world 00:07:30.271 ************************************ 00:07:30.271 23:41:00 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:07:30.271 23:41:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:30.271 23:41:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.271 23:41:00 -- common/autotest_common.sh@10 -- # set +x 00:07:30.271 ************************************ 00:07:30.271 START TEST bdev_bounds 00:07:30.271 ************************************ 00:07:30.271 23:41:00 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:07:30.271 23:41:00 -- bdev/blockdev.sh@288 -- # bdevio_pid=60338 00:07:30.271 23:41:00 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:30.271 Process bdevio pid: 60338 00:07:30.271 23:41:00 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 60338' 00:07:30.271 23:41:00 -- bdev/blockdev.sh@291 -- # waitforlisten 60338 00:07:30.271 23:41:00 -- common/autotest_common.sh@829 -- # '[' -z 60338 ']' 00:07:30.271 23:41:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.271 23:41:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:30.271 23:41:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.271 23:41:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:30.271 23:41:00 -- common/autotest_common.sh@10 -- # set +x 00:07:30.271 23:41:00 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:30.271 [2024-12-13 23:41:00.849510] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:30.271 [2024-12-13 23:41:00.849621] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60338 ] 00:07:30.271 [2024-12-13 23:41:00.995796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:30.529 [2024-12-13 23:41:01.162745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.529 [2024-12-13 23:41:01.162807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.529 [2024-12-13 23:41:01.162808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.911 23:41:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.911 23:41:02 -- common/autotest_common.sh@862 -- # return 0 00:07:31.911 23:41:02 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:31.911 I/O targets: 00:07:31.911 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:31.911 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:31.911 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:31.911 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:31.911 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:31.911 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:31.911 00:07:31.911 00:07:31.911 CUnit - A unit testing framework for C - Version 2.1-3 00:07:31.911 http://cunit.sourceforge.net/ 00:07:31.911 00:07:31.911 00:07:31.911 Suite: bdevio tests on: Nvme3n1 00:07:31.911 Test: blockdev write read block ...passed 00:07:31.911 Test: blockdev write zeroes read block ...passed 00:07:31.911 Test: blockdev write zeroes read no split ...passed 00:07:31.911 Test: blockdev write zeroes read split ...passed 00:07:31.911 Test: blockdev write zeroes read split partial ...passed 00:07:31.911 Test: blockdev reset ...[2024-12-13 23:41:02.459711] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:07:31.911 passed 00:07:31.911 Test: blockdev write read 8 blocks ...[2024-12-13 23:41:02.463615] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
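[editor's note] The bdevio run underway here is a two-step invocation: blockdev.sh launches the bdevio app waiting on the RPC socket, then tests.py triggers the suites listed under "I/O targets". A sketch of the same steps from the repo root, using the paths visible in this log (-s 0 mirrors the PRE_RESERVED_MEM=0 set earlier; the harness kills the app afterwards via killprocess):

    sudo ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
    ./test/bdev/bdevio/tests.py perform_tests
    # bdevio prints the CUnit run summary once perform_tests finishes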
00:07:31.911 passed 00:07:31.911 Test: blockdev write read size > 128k ...passed 00:07:31.911 Test: blockdev write read invalid size ...passed 00:07:31.911 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.911 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:31.911 Test: blockdev write read max offset ...passed 00:07:31.911 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.911 Test: blockdev writev readv 8 blocks ...passed 00:07:31.911 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.911 Test: blockdev writev readv block ...passed 00:07:31.911 Test: blockdev writev readv size > 128k ...passed 00:07:31.911 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.911 Test: blockdev comparev and writev ...[2024-12-13 23:41:02.481405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27660e000 len:0x1000 00:07:31.911 [2024-12-13 23:41:02.481455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:31.911 passed 00:07:31.911 Test: blockdev nvme passthru rw ...passed 00:07:31.911 Test: blockdev nvme passthru vendor specific ...[2024-12-13 23:41:02.484137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:31.911 [2024-12-13 23:41:02.484167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:31.911 passed 00:07:31.911 Test: blockdev nvme admin passthru ...passed 00:07:31.911 Test: blockdev copy ...passed 00:07:31.911 Suite: bdevio tests on: Nvme2n3 00:07:31.911 Test: blockdev write read block ...passed 00:07:31.911 Test: blockdev write zeroes read block ...passed 00:07:31.911 Test: blockdev write zeroes read no split ...passed 00:07:31.911 Test: blockdev write zeroes read split ...passed 00:07:31.911 Test: blockdev write zeroes read split partial ...passed 00:07:31.911 Test: blockdev reset ...[2024-12-13 23:41:02.543794] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:31.911 [2024-12-13 23:41:02.546541] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:31.911 passed 00:07:31.911 Test: blockdev write read 8 blocks ...passed 00:07:31.911 Test: blockdev write read size > 128k ...passed 00:07:31.911 Test: blockdev write read invalid size ...passed 00:07:31.911 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.911 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:31.911 Test: blockdev write read max offset ...passed 00:07:31.911 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.911 Test: blockdev writev readv 8 blocks ...passed 00:07:31.911 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.911 Test: blockdev writev readv block ...passed 00:07:31.911 Test: blockdev writev readv size > 128k ...passed 00:07:31.911 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.911 Test: blockdev comparev and writev ...[2024-12-13 23:41:02.553414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27660a000 len:0x1000 00:07:31.911 [2024-12-13 23:41:02.553463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:31.911 passed 00:07:31.911 Test: blockdev nvme passthru rw ...passed 00:07:31.911 Test: blockdev nvme passthru vendor specific ...passed 00:07:31.911 Test: blockdev nvme admin passthru ...[2024-12-13 23:41:02.554031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:31.911 [2024-12-13 23:41:02.554056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:31.911 passed 00:07:31.911 Test: blockdev copy ...passed 00:07:31.911 Suite: bdevio tests on: Nvme2n2 00:07:31.911 Test: blockdev write read block ...passed 00:07:31.911 Test: blockdev write zeroes read block ...passed 00:07:31.911 Test: blockdev write zeroes read no split ...passed 00:07:31.911 Test: blockdev write zeroes read split ...passed 00:07:31.911 Test: blockdev write zeroes read split partial ...passed 00:07:31.911 Test: blockdev reset ...[2024-12-13 23:41:02.615441] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:31.911 [2024-12-13 23:41:02.619989] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
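[editor's note] Note that the Nvme2n3 and Nvme2n2 suites (and Nvme2n1 below) all log "[0000:00:08.0] resetting controller": the three Nvme2 namespaces sit behind a single controller, so each per-namespace "blockdev reset" test resets that shared controller. A quick way to see the controller-to-namespace mapping, as a hedged sketch against the same target:

    # One controller, three namespaces: resets through any Nvme2nX hit 0000:00:08.0.
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
    scripts/rpc.py bdev_get_bdevs | jq -r '.[] | "\(.name) \(.driver_specific.nvme[0].pci_address)"'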
00:07:31.911 passed 00:07:31.911 Test: blockdev write read 8 blocks ...passed 00:07:31.911 Test: blockdev write read size > 128k ...passed 00:07:31.911 Test: blockdev write read invalid size ...passed 00:07:31.911 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.911 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:31.911 Test: blockdev write read max offset ...passed 00:07:31.911 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.911 Test: blockdev writev readv 8 blocks ...passed 00:07:31.911 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.911 Test: blockdev writev readv block ...passed 00:07:31.911 Test: blockdev writev readv size > 128k ...passed 00:07:31.911 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.911 Test: blockdev comparev and writev ...[2024-12-13 23:41:02.637563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x270206000 len:0x1000 00:07:31.911 [2024-12-13 23:41:02.637603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:31.911 passed 00:07:31.911 Test: blockdev nvme passthru rw ...passed 00:07:31.911 Test: blockdev nvme passthru vendor specific ...[2024-12-13 23:41:02.638365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:31.911 [2024-12-13 23:41:02.638389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:31.912 passed 00:07:32.172 Test: blockdev nvme admin passthru ...passed 00:07:32.172 Test: blockdev copy ...passed 00:07:32.172 Suite: bdevio tests on: Nvme2n1 00:07:32.172 Test: blockdev write read block ...passed 00:07:32.172 Test: blockdev write zeroes read block ...passed 00:07:32.172 Test: blockdev write zeroes read no split ...passed 00:07:32.172 Test: blockdev write zeroes read split ...passed 00:07:32.172 Test: blockdev write zeroes read split partial ...passed 00:07:32.172 Test: blockdev reset ...[2024-12-13 23:41:02.694753] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:32.172 [2024-12-13 23:41:02.698007] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:32.172 passed 00:07:32.172 Test: blockdev write read 8 blocks ...passed 00:07:32.172 Test: blockdev write read size > 128k ...passed 00:07:32.172 Test: blockdev write read invalid size ...passed 00:07:32.172 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:32.172 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:32.172 Test: blockdev write read max offset ...passed 00:07:32.172 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:32.172 Test: blockdev writev readv 8 blocks ...passed 00:07:32.172 Test: blockdev writev readv 30 x 1block ...passed 00:07:32.172 Test: blockdev writev readv block ...passed 00:07:32.172 Test: blockdev writev readv size > 128k ...passed 00:07:32.172 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:32.172 Test: blockdev comparev and writev ...[2024-12-13 23:41:02.710850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x270201000 len:0x1000 00:07:32.172 [2024-12-13 23:41:02.710887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:32.172 passed 00:07:32.172 Test: blockdev nvme passthru rw ...passed 00:07:32.172 Test: blockdev nvme passthru vendor specific ...[2024-12-13 23:41:02.712518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:32.172 [2024-12-13 23:41:02.712546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:32.172 passed 00:07:32.172 Test: blockdev nvme admin passthru ...passed 00:07:32.172 Test: blockdev copy ...passed 00:07:32.172 Suite: bdevio tests on: Nvme1n1 00:07:32.172 Test: blockdev write read block ...passed 00:07:32.172 Test: blockdev write zeroes read block ...passed 00:07:32.172 Test: blockdev write zeroes read no split ...passed 00:07:32.172 Test: blockdev write zeroes read split ...passed 00:07:32.172 Test: blockdev write zeroes read split partial ...passed 00:07:32.172 Test: blockdev reset ...[2024-12-13 23:41:02.765149] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:07:32.172 [2024-12-13 23:41:02.768522] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:32.172 passed 00:07:32.172 Test: blockdev write read 8 blocks ...passed 00:07:32.172 Test: blockdev write read size > 128k ...passed 00:07:32.172 Test: blockdev write read invalid size ...passed 00:07:32.172 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:32.172 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:32.172 Test: blockdev write read max offset ...passed 00:07:32.172 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:32.172 Test: blockdev writev readv 8 blocks ...passed 00:07:32.172 Test: blockdev writev readv 30 x 1block ...passed 00:07:32.172 Test: blockdev writev readv block ...passed 00:07:32.172 Test: blockdev writev readv size > 128k ...passed 00:07:32.172 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:32.172 Test: blockdev comparev and writev ...[2024-12-13 23:41:02.783471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x271006000 len:0x1000 00:07:32.172 [2024-12-13 23:41:02.783534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:32.172 passed 00:07:32.172 Test: blockdev nvme passthru rw ...passed 00:07:32.172 Test: blockdev nvme passthru vendor specific ...[2024-12-13 23:41:02.784924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:32.172 [2024-12-13 23:41:02.784956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:32.172 passed 00:07:32.172 Test: blockdev nvme admin passthru ...passed 00:07:32.172 Test: blockdev copy ...passed 00:07:32.172 Suite: bdevio tests on: Nvme0n1 00:07:32.172 Test: blockdev write read block ...passed 00:07:32.172 Test: blockdev write zeroes read block ...passed 00:07:32.172 Test: blockdev write zeroes read no split ...passed 00:07:32.172 Test: blockdev write zeroes read split ...passed 00:07:32.172 Test: blockdev write zeroes read split partial ...passed 00:07:32.172 Test: blockdev reset ...[2024-12-13 23:41:02.834579] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:07:32.172 [2024-12-13 23:41:02.838437] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:32.172 passed 00:07:32.172 Test: blockdev write read 8 blocks ...passed 00:07:32.172 Test: blockdev write read size > 128k ...passed 00:07:32.172 Test: blockdev write read invalid size ...passed 00:07:32.173 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:32.173 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:32.173 Test: blockdev write read max offset ...passed 00:07:32.173 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:32.173 Test: blockdev writev readv 8 blocks ...passed 00:07:32.173 Test: blockdev writev readv 30 x 1block ...passed 00:07:32.173 Test: blockdev writev readv block ...passed 00:07:32.173 Test: blockdev writev readv size > 128k ...passed 00:07:32.173 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:32.173 Test: blockdev comparev and writev ...[2024-12-13 23:41:02.854734] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:32.173 separate metadata which is not supported yet. 
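[editor's note] The ERROR line here is expected, not a failure: Nvme0n1 was created with 64 bytes of separate (non-interleaved) metadata per 4096-byte block (md_size 64, md_interleave false in the bdev dump earlier), and bdevio skips comparev_and_writev on such bdevs. A quick check of the relevant fields:

    # Nvme0n1 carries separate per-block metadata, so comparev_and_writev is skipped.
    scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave}'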
00:07:32.173 passed 00:07:32.173 Test: blockdev nvme passthru rw ...passed 00:07:32.173 Test: blockdev nvme passthru vendor specific ...[2024-12-13 23:41:02.855903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:32.173 [2024-12-13 23:41:02.855937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:32.173 passed 00:07:32.173 Test: blockdev nvme admin passthru ...passed 00:07:32.173 Test: blockdev copy ...passed 00:07:32.173 00:07:32.173 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.173 suites 6 6 n/a 0 0 00:07:32.173 tests 138 138 138 0 0 00:07:32.173 asserts 893 893 893 0 n/a 00:07:32.173 00:07:32.173 Elapsed time = 1.186 seconds 00:07:32.173 0 00:07:32.173 23:41:02 -- bdev/blockdev.sh@293 -- # killprocess 60338 00:07:32.173 23:41:02 -- common/autotest_common.sh@936 -- # '[' -z 60338 ']' 00:07:32.173 23:41:02 -- common/autotest_common.sh@940 -- # kill -0 60338 00:07:32.173 23:41:02 -- common/autotest_common.sh@941 -- # uname 00:07:32.173 23:41:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:32.173 23:41:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60338 00:07:32.435 23:41:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:32.435 23:41:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:32.435 killing process with pid 60338 00:07:32.435 23:41:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60338' 00:07:32.435 23:41:02 -- common/autotest_common.sh@955 -- # kill 60338 00:07:32.435 23:41:02 -- common/autotest_common.sh@960 -- # wait 60338 00:07:33.007 23:41:03 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:07:33.007 00:07:33.007 real 0m2.809s 00:07:33.007 user 0m7.322s 00:07:33.007 sys 0m0.287s 00:07:33.007 23:41:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:33.007 23:41:03 -- common/autotest_common.sh@10 -- # set +x 00:07:33.007 ************************************ 00:07:33.007 END TEST bdev_bounds 00:07:33.007 ************************************ 00:07:33.007 23:41:03 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:33.007 23:41:03 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:07:33.007 23:41:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.007 23:41:03 -- common/autotest_common.sh@10 -- # set +x 00:07:33.007 ************************************ 00:07:33.007 START TEST bdev_nbd 00:07:33.007 ************************************ 00:07:33.007 23:41:03 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:33.007 23:41:03 -- bdev/blockdev.sh@298 -- # uname -s 00:07:33.007 23:41:03 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:07:33.007 23:41:03 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.007 23:41:03 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:33.007 23:41:03 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:33.007 23:41:03 -- bdev/blockdev.sh@302 -- # local bdev_all 00:07:33.007 23:41:03 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:07:33.007 23:41:03 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd 
]] 00:07:33.007 23:41:03 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:33.007 23:41:03 -- bdev/blockdev.sh@309 -- # local nbd_all 00:07:33.007 23:41:03 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:07:33.007 23:41:03 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:33.007 23:41:03 -- bdev/blockdev.sh@312 -- # local nbd_list 00:07:33.007 23:41:03 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:33.007 23:41:03 -- bdev/blockdev.sh@313 -- # local bdev_list 00:07:33.007 23:41:03 -- bdev/blockdev.sh@316 -- # nbd_pid=60400 00:07:33.007 23:41:03 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:33.007 23:41:03 -- bdev/blockdev.sh@318 -- # waitforlisten 60400 /var/tmp/spdk-nbd.sock 00:07:33.007 23:41:03 -- common/autotest_common.sh@829 -- # '[' -z 60400 ']' 00:07:33.007 23:41:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:33.007 23:41:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:33.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:33.007 23:41:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:33.007 23:41:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:33.007 23:41:03 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:33.007 23:41:03 -- common/autotest_common.sh@10 -- # set +x 00:07:33.007 [2024-12-13 23:41:03.719547] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
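[editor's note] The bdev_nbd test starting here needs the kernel nbd module (hence the /sys/module/nbd check above) and a bdev_svc app listening on a dedicated RPC socket, which is what is launching now. A minimal sketch of the prerequisites and a first device mapping, using the socket and paths from this run:

    # The nbd kernel module must be loaded before any nbd_start_disk call.
    sudo modprobe nbd
    sudo ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json ./test/bdev/bdev.json &
    # Map a bdev to a kernel block device so regular tools (dd, fs utilities) can use it.
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0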
00:07:33.007 [2024-12-13 23:41:03.719656] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.268 [2024-12-13 23:41:03.869733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.529 [2024-12-13 23:41:04.048648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.468 23:41:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.468 23:41:05 -- common/autotest_common.sh@862 -- # return 0 00:07:34.468 23:41:05 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:34.468 23:41:05 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.468 23:41:05 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:34.468 23:41:05 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:34.468 23:41:05 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:34.468 23:41:05 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.468 23:41:05 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:34.469 23:41:05 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:34.469 23:41:05 -- bdev/nbd_common.sh@24 -- # local i 00:07:34.469 23:41:05 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:34.469 23:41:05 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:34.469 23:41:05 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:34.469 23:41:05 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:34.729 23:41:05 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:34.729 23:41:05 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:34.729 23:41:05 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:34.729 23:41:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:34.729 23:41:05 -- common/autotest_common.sh@867 -- # local i 00:07:34.729 23:41:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:34.729 23:41:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:34.729 23:41:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:34.729 23:41:05 -- common/autotest_common.sh@871 -- # break 00:07:34.729 23:41:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:34.729 23:41:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:34.729 23:41:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:34.729 1+0 records in 00:07:34.729 1+0 records out 00:07:34.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544926 s, 7.5 MB/s 00:07:34.729 23:41:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:34.729 23:41:05 -- common/autotest_common.sh@884 -- # size=4096 00:07:34.729 23:41:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:34.729 23:41:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:34.729 23:41:05 -- common/autotest_common.sh@887 -- # return 0 00:07:34.729 23:41:05 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:34.729 23:41:05 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:34.729 23:41:05 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:34.989 23:41:05 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:34.989 23:41:05 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:34.989 23:41:05 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:34.989 23:41:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:34.989 23:41:05 -- common/autotest_common.sh@867 -- # local i 00:07:34.989 23:41:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:34.989 23:41:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:34.989 23:41:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:34.989 23:41:05 -- common/autotest_common.sh@871 -- # break 00:07:34.989 23:41:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:34.989 23:41:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:34.989 23:41:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:34.989 1+0 records in 00:07:34.989 1+0 records out 00:07:34.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529426 s, 7.7 MB/s 00:07:34.989 23:41:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:34.989 23:41:05 -- common/autotest_common.sh@884 -- # size=4096 00:07:34.989 23:41:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:34.989 23:41:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:34.989 23:41:05 -- common/autotest_common.sh@887 -- # return 0 00:07:34.989 23:41:05 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:34.989 23:41:05 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:34.989 23:41:05 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:35.250 23:41:05 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:35.250 23:41:05 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:35.250 23:41:05 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:35.250 23:41:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:07:35.250 23:41:05 -- common/autotest_common.sh@867 -- # local i 00:07:35.250 23:41:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:35.250 23:41:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:35.250 23:41:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:07:35.250 23:41:05 -- common/autotest_common.sh@871 -- # break 00:07:35.250 23:41:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:35.250 23:41:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:35.250 23:41:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:35.250 1+0 records in 00:07:35.250 1+0 records out 00:07:35.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000614783 s, 6.7 MB/s 00:07:35.250 23:41:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.250 23:41:05 -- common/autotest_common.sh@884 -- # size=4096 00:07:35.250 23:41:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.250 23:41:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:35.250 23:41:05 -- common/autotest_common.sh@887 -- # return 0 
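[editor's note] Each nbd_start_disk above is followed by the harness's waitfornbd probe: poll /proc/partitions until the device registers, then do a single direct 4 KiB read and check that a full block was copied. A condensed sketch of that pattern (nbd0 as the example device; the exact retry timing is the harness's, approximated here):

    for i in $(seq 1 20); do
        grep -qw nbd0 /proc/partitions && break
        sleep 0.1
    done
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ]   # one full block must have come back
    rm -f /tmp/nbdtest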
00:07:35.250 23:41:05 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:35.250 23:41:05 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:35.250 23:41:05 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:35.250 23:41:05 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:35.250 23:41:05 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:35.250 23:41:05 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:35.250 23:41:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:07:35.508 23:41:05 -- common/autotest_common.sh@867 -- # local i 00:07:35.508 23:41:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:35.508 23:41:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:35.508 23:41:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:07:35.508 23:41:05 -- common/autotest_common.sh@871 -- # break 00:07:35.508 23:41:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:35.508 23:41:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:35.508 23:41:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:35.508 1+0 records in 00:07:35.508 1+0 records out 00:07:35.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000681362 s, 6.0 MB/s 00:07:35.508 23:41:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.508 23:41:05 -- common/autotest_common.sh@884 -- # size=4096 00:07:35.508 23:41:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.508 23:41:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:35.508 23:41:05 -- common/autotest_common.sh@887 -- # return 0 00:07:35.508 23:41:05 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:35.508 23:41:05 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:35.508 23:41:05 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:35.508 23:41:06 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:35.508 23:41:06 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:35.508 23:41:06 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:35.508 23:41:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:07:35.508 23:41:06 -- common/autotest_common.sh@867 -- # local i 00:07:35.508 23:41:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:35.508 23:41:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:35.508 23:41:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:07:35.508 23:41:06 -- common/autotest_common.sh@871 -- # break 00:07:35.508 23:41:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:35.508 23:41:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:35.508 23:41:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:35.508 1+0 records in 00:07:35.508 1+0 records out 00:07:35.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117727 s, 3.5 MB/s 00:07:35.508 23:41:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.508 23:41:06 -- common/autotest_common.sh@884 -- # size=4096 00:07:35.508 23:41:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.508 23:41:06 -- common/autotest_common.sh@886 -- # '[' 4096 
'!=' 0 ']' 00:07:35.508 23:41:06 -- common/autotest_common.sh@887 -- # return 0 00:07:35.508 23:41:06 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:35.508 23:41:06 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:35.508 23:41:06 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:35.768 23:41:06 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:35.768 23:41:06 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:35.768 23:41:06 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:35.768 23:41:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:07:35.768 23:41:06 -- common/autotest_common.sh@867 -- # local i 00:07:35.768 23:41:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:35.768 23:41:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:35.768 23:41:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:07:35.768 23:41:06 -- common/autotest_common.sh@871 -- # break 00:07:35.768 23:41:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:35.768 23:41:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:35.768 23:41:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:35.768 1+0 records in 00:07:35.768 1+0 records out 00:07:35.768 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000787209 s, 5.2 MB/s 00:07:35.768 23:41:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.768 23:41:06 -- common/autotest_common.sh@884 -- # size=4096 00:07:35.768 23:41:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.768 23:41:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:35.768 23:41:06 -- common/autotest_common.sh@887 -- # return 0 00:07:35.768 23:41:06 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:35.768 23:41:06 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:35.768 23:41:06 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:36.028 23:41:06 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:36.028 { 00:07:36.028 "nbd_device": "/dev/nbd0", 00:07:36.028 "bdev_name": "Nvme0n1" 00:07:36.028 }, 00:07:36.028 { 00:07:36.028 "nbd_device": "/dev/nbd1", 00:07:36.028 "bdev_name": "Nvme1n1" 00:07:36.028 }, 00:07:36.028 { 00:07:36.028 "nbd_device": "/dev/nbd2", 00:07:36.028 "bdev_name": "Nvme2n1" 00:07:36.028 }, 00:07:36.028 { 00:07:36.028 "nbd_device": "/dev/nbd3", 00:07:36.028 "bdev_name": "Nvme2n2" 00:07:36.028 }, 00:07:36.028 { 00:07:36.028 "nbd_device": "/dev/nbd4", 00:07:36.028 "bdev_name": "Nvme2n3" 00:07:36.028 }, 00:07:36.028 { 00:07:36.028 "nbd_device": "/dev/nbd5", 00:07:36.028 "bdev_name": "Nvme3n1" 00:07:36.028 } 00:07:36.028 ]' 00:07:36.028 23:41:06 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:36.028 23:41:06 -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:36.028 { 00:07:36.028 "nbd_device": "/dev/nbd0", 00:07:36.028 "bdev_name": "Nvme0n1" 00:07:36.028 }, 00:07:36.028 { 00:07:36.028 "nbd_device": "/dev/nbd1", 00:07:36.028 "bdev_name": "Nvme1n1" 00:07:36.028 }, 00:07:36.028 { 00:07:36.028 "nbd_device": "/dev/nbd2", 00:07:36.028 "bdev_name": "Nvme2n1" 00:07:36.028 }, 00:07:36.028 { 00:07:36.028 "nbd_device": "/dev/nbd3", 00:07:36.028 "bdev_name": "Nvme2n2" 00:07:36.028 }, 00:07:36.028 { 00:07:36.028 "nbd_device": 
"/dev/nbd4", 00:07:36.028 "bdev_name": "Nvme2n3" 00:07:36.028 }, 00:07:36.028 { 00:07:36.028 "nbd_device": "/dev/nbd5", 00:07:36.028 "bdev_name": "Nvme3n1" 00:07:36.028 } 00:07:36.028 ]' 00:07:36.028 23:41:06 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:36.028 23:41:06 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:36.028 23:41:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.028 23:41:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:36.028 23:41:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:36.028 23:41:06 -- bdev/nbd_common.sh@51 -- # local i 00:07:36.028 23:41:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:36.028 23:41:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:36.288 23:41:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:36.288 23:41:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:36.288 23:41:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:36.288 23:41:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:36.288 23:41:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:36.288 23:41:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:36.288 23:41:06 -- bdev/nbd_common.sh@41 -- # break 00:07:36.288 23:41:06 -- bdev/nbd_common.sh@45 -- # return 0 00:07:36.288 23:41:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:36.288 23:41:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:36.549 23:41:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:36.549 23:41:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:36.549 23:41:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:36.549 23:41:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:36.549 23:41:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:36.549 23:41:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:36.549 23:41:07 -- bdev/nbd_common.sh@41 -- # break 00:07:36.549 23:41:07 -- bdev/nbd_common.sh@45 -- # return 0 00:07:36.549 23:41:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:36.549 23:41:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:36.549 23:41:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:36.549 23:41:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:36.549 23:41:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:36.549 23:41:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:36.549 23:41:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:36.549 23:41:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:36.549 23:41:07 -- bdev/nbd_common.sh@41 -- # break 00:07:36.549 23:41:07 -- bdev/nbd_common.sh@45 -- # return 0 00:07:36.549 23:41:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:36.549 23:41:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:36.809 23:41:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:36.809 23:41:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:36.809 23:41:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:36.809 
23:41:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:36.809 23:41:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:36.809 23:41:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:36.809 23:41:07 -- bdev/nbd_common.sh@41 -- # break 00:07:36.809 23:41:07 -- bdev/nbd_common.sh@45 -- # return 0 00:07:36.809 23:41:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:36.809 23:41:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:37.069 23:41:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:37.069 23:41:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:37.069 23:41:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:37.069 23:41:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.069 23:41:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.069 23:41:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:37.069 23:41:07 -- bdev/nbd_common.sh@41 -- # break 00:07:37.069 23:41:07 -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.069 23:41:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.070 23:41:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:37.070 23:41:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:37.070 23:41:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:37.070 23:41:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:37.070 23:41:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.070 23:41:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.070 23:41:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:37.070 23:41:07 -- bdev/nbd_common.sh@41 -- # break 00:07:37.070 23:41:07 -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.070 23:41:07 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:37.070 23:41:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.070 23:41:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@65 -- # true 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@65 -- # count=0 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@122 -- # count=0 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@127 -- # return 0 00:07:37.330 23:41:07 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@12 -- # local i 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:37.330 23:41:07 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:37.330 23:41:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:37.591 /dev/nbd0 00:07:37.591 23:41:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:37.591 23:41:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:37.591 23:41:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:37.591 23:41:08 -- common/autotest_common.sh@867 -- # local i 00:07:37.591 23:41:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:37.591 23:41:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:37.591 23:41:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:37.591 23:41:08 -- common/autotest_common.sh@871 -- # break 00:07:37.591 23:41:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:37.591 23:41:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:37.591 23:41:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:37.591 1+0 records in 00:07:37.591 1+0 records out 00:07:37.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433248 s, 9.5 MB/s 00:07:37.591 23:41:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.591 23:41:08 -- common/autotest_common.sh@884 -- # size=4096 00:07:37.591 23:41:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.591 23:41:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:37.591 23:41:08 -- common/autotest_common.sh@887 -- # return 0 00:07:37.591 23:41:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:37.591 23:41:08 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:37.591 23:41:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:37.852 /dev/nbd1 00:07:37.852 23:41:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:37.852 23:41:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:37.852 23:41:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:37.852 23:41:08 -- common/autotest_common.sh@867 -- # local i 00:07:37.852 23:41:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:37.852 23:41:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:37.852 23:41:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:37.852 23:41:08 -- common/autotest_common.sh@871 -- # break 
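[editor's note] For this data-verification pass the harness remaps all six namespaces onto the fixed device list nbd0, nbd1, nbd10..nbd13, as the nbd_list above shows. A sketch of the mapping, a listing, and the matching teardown over the same socket:

    bdevs=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
    devs=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for i in "${!bdevs[@]}"; do
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk "${bdevs[$i]}" "${devs[$i]}"
    done
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[].nbd_device'
    for d in "${devs[@]}"; do
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$d"
    done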
00:07:37.852 23:41:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:37.852 23:41:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:37.852 23:41:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:37.852 1+0 records in 00:07:37.852 1+0 records out 00:07:37.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000766308 s, 5.3 MB/s 00:07:37.852 23:41:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.852 23:41:08 -- common/autotest_common.sh@884 -- # size=4096 00:07:37.852 23:41:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.852 23:41:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:37.852 23:41:08 -- common/autotest_common.sh@887 -- # return 0 00:07:37.852 23:41:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:37.852 23:41:08 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:37.852 23:41:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:38.113 /dev/nbd10 00:07:38.113 23:41:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:38.113 23:41:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:38.113 23:41:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:07:38.113 23:41:08 -- common/autotest_common.sh@867 -- # local i 00:07:38.113 23:41:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:38.114 23:41:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:38.114 23:41:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:07:38.114 23:41:08 -- common/autotest_common.sh@871 -- # break 00:07:38.114 23:41:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:38.114 23:41:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:38.114 23:41:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:38.114 1+0 records in 00:07:38.114 1+0 records out 00:07:38.114 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010291 s, 4.0 MB/s 00:07:38.114 23:41:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:38.114 23:41:08 -- common/autotest_common.sh@884 -- # size=4096 00:07:38.114 23:41:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:38.114 23:41:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:38.114 23:41:08 -- common/autotest_common.sh@887 -- # return 0 00:07:38.114 23:41:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:38.114 23:41:08 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:38.114 23:41:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:38.114 /dev/nbd11 00:07:38.114 23:41:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:38.114 23:41:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:38.114 23:41:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:07:38.114 23:41:08 -- common/autotest_common.sh@867 -- # local i 00:07:38.114 23:41:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:38.114 23:41:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:38.114 23:41:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:07:38.114 23:41:08 -- 
common/autotest_common.sh@871 -- # break 00:07:38.114 23:41:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:38.114 23:41:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:38.114 23:41:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:38.374 1+0 records in 00:07:38.374 1+0 records out 00:07:38.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00125971 s, 3.3 MB/s 00:07:38.374 23:41:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:38.374 23:41:08 -- common/autotest_common.sh@884 -- # size=4096 00:07:38.374 23:41:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:38.374 23:41:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:38.374 23:41:08 -- common/autotest_common.sh@887 -- # return 0 00:07:38.374 23:41:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:38.374 23:41:08 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:38.374 23:41:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:38.374 /dev/nbd12 00:07:38.374 23:41:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:38.374 23:41:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:38.374 23:41:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:07:38.374 23:41:09 -- common/autotest_common.sh@867 -- # local i 00:07:38.374 23:41:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:38.374 23:41:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:38.374 23:41:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:07:38.374 23:41:09 -- common/autotest_common.sh@871 -- # break 00:07:38.374 23:41:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:38.374 23:41:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:38.374 23:41:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:38.374 1+0 records in 00:07:38.374 1+0 records out 00:07:38.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104006 s, 3.9 MB/s 00:07:38.374 23:41:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:38.374 23:41:09 -- common/autotest_common.sh@884 -- # size=4096 00:07:38.374 23:41:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:38.374 23:41:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:38.374 23:41:09 -- common/autotest_common.sh@887 -- # return 0 00:07:38.374 23:41:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:38.374 23:41:09 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:38.374 23:41:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:38.633 /dev/nbd13 00:07:38.633 23:41:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:38.633 23:41:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:38.633 23:41:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:07:38.633 23:41:09 -- common/autotest_common.sh@867 -- # local i 00:07:38.633 23:41:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:38.633 23:41:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:38.633 23:41:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 
00:07:38.633 23:41:09 -- common/autotest_common.sh@871 -- # break 00:07:38.633 23:41:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:38.633 23:41:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:38.633 23:41:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:38.633 1+0 records in 00:07:38.633 1+0 records out 00:07:38.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101573 s, 4.0 MB/s 00:07:38.633 23:41:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:38.633 23:41:09 -- common/autotest_common.sh@884 -- # size=4096 00:07:38.633 23:41:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:38.633 23:41:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:38.633 23:41:09 -- common/autotest_common.sh@887 -- # return 0 00:07:38.633 23:41:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:38.633 23:41:09 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:38.633 23:41:09 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:38.633 23:41:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.633 23:41:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:38.893 23:41:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:38.893 { 00:07:38.893 "nbd_device": "/dev/nbd0", 00:07:38.893 "bdev_name": "Nvme0n1" 00:07:38.893 }, 00:07:38.893 { 00:07:38.893 "nbd_device": "/dev/nbd1", 00:07:38.893 "bdev_name": "Nvme1n1" 00:07:38.893 }, 00:07:38.893 { 00:07:38.893 "nbd_device": "/dev/nbd10", 00:07:38.893 "bdev_name": "Nvme2n1" 00:07:38.893 }, 00:07:38.893 { 00:07:38.893 "nbd_device": "/dev/nbd11", 00:07:38.893 "bdev_name": "Nvme2n2" 00:07:38.893 }, 00:07:38.893 { 00:07:38.893 "nbd_device": "/dev/nbd12", 00:07:38.893 "bdev_name": "Nvme2n3" 00:07:38.893 }, 00:07:38.893 { 00:07:38.893 "nbd_device": "/dev/nbd13", 00:07:38.893 "bdev_name": "Nvme3n1" 00:07:38.893 } 00:07:38.893 ]' 00:07:38.893 23:41:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:38.893 23:41:09 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:38.893 { 00:07:38.893 "nbd_device": "/dev/nbd0", 00:07:38.893 "bdev_name": "Nvme0n1" 00:07:38.893 }, 00:07:38.893 { 00:07:38.893 "nbd_device": "/dev/nbd1", 00:07:38.893 "bdev_name": "Nvme1n1" 00:07:38.893 }, 00:07:38.893 { 00:07:38.893 "nbd_device": "/dev/nbd10", 00:07:38.893 "bdev_name": "Nvme2n1" 00:07:38.893 }, 00:07:38.893 { 00:07:38.893 "nbd_device": "/dev/nbd11", 00:07:38.893 "bdev_name": "Nvme2n2" 00:07:38.893 }, 00:07:38.893 { 00:07:38.893 "nbd_device": "/dev/nbd12", 00:07:38.893 "bdev_name": "Nvme2n3" 00:07:38.893 }, 00:07:38.893 { 00:07:38.893 "nbd_device": "/dev/nbd13", 00:07:38.893 "bdev_name": "Nvme3n1" 00:07:38.893 } 00:07:38.893 ]' 00:07:38.893 23:41:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:38.893 /dev/nbd1 00:07:38.893 /dev/nbd10 00:07:38.893 /dev/nbd11 00:07:38.893 /dev/nbd12 00:07:38.893 /dev/nbd13' 00:07:38.893 23:41:09 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:38.893 /dev/nbd1 00:07:38.893 /dev/nbd10 00:07:38.893 /dev/nbd11 00:07:38.893 /dev/nbd12 00:07:38.893 /dev/nbd13' 00:07:38.893 23:41:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:38.893 23:41:09 -- bdev/nbd_common.sh@65 -- # count=6 00:07:38.893 23:41:09 -- bdev/nbd_common.sh@66 -- # echo 6 00:07:38.893 23:41:09 -- bdev/nbd_common.sh@95 -- # count=6 
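[editor's note] The trace above queries the SPDK nbd daemon for its exported devices and counts them before and after the test. For readers following along, a minimal sketch of that counting pattern, assuming the same scripts/rpc.py path and /var/tmp/spdk-nbd.sock socket used in this run (adjust both for your checkout):

#!/usr/bin/env bash
# Sketch only: count the NBD devices currently exported by the SPDK nbd daemon,
# mirroring the nbd_get_count helper traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

# nbd_get_disks returns a JSON array of {nbd_device, bdev_name} objects.
disks_json=$("$rpc" -s "$sock" nbd_get_disks)

# Extract just the /dev/nbdX names, one per line (empty when nothing is exported).
names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')

# grep -c counts matching lines; '|| true' keeps a 'set -e' script alive when the count is 0.
count=$(echo "$names" | grep -c /dev/nbd || true)
echo "exported nbd devices: $count"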
00:07:38.893 23:41:09 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:38.893 23:41:09 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:38.893 23:41:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:38.893 23:41:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:38.893 23:41:09 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:38.893 23:41:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:38.893 23:41:09 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:38.894 23:41:09 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:38.894 256+0 records in 00:07:38.894 256+0 records out 00:07:38.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00751726 s, 139 MB/s 00:07:38.894 23:41:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:38.894 23:41:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:39.153 256+0 records in 00:07:39.153 256+0 records out 00:07:39.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.199783 s, 5.2 MB/s 00:07:39.153 23:41:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:39.153 23:41:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:39.410 256+0 records in 00:07:39.410 256+0 records out 00:07:39.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132902 s, 7.9 MB/s 00:07:39.410 23:41:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:39.410 23:41:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:39.410 256+0 records in 00:07:39.410 256+0 records out 00:07:39.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.114338 s, 9.2 MB/s 00:07:39.410 23:41:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:39.410 23:41:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:39.667 256+0 records in 00:07:39.667 256+0 records out 00:07:39.667 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145925 s, 7.2 MB/s 00:07:39.667 23:41:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:39.667 23:41:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:39.667 256+0 records in 00:07:39.667 256+0 records out 00:07:39.667 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189799 s, 5.5 MB/s 00:07:39.667 23:41:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:39.667 23:41:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:39.930 256+0 records in 00:07:39.930 256+0 records out 00:07:39.930 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.17638 s, 5.9 MB/s 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:39.930 23:41:10 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@51 -- # local i 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:39.930 23:41:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:40.189 23:41:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:40.189 23:41:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:40.189 23:41:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:40.189 23:41:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.189 23:41:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.189 23:41:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:40.189 23:41:10 -- bdev/nbd_common.sh@41 -- # break 00:07:40.189 23:41:10 -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.189 23:41:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.189 23:41:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:40.450 23:41:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:40.450 23:41:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:40.450 23:41:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:40.450 23:41:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.450 23:41:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.450 23:41:10 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:40.450 23:41:10 -- bdev/nbd_common.sh@41 -- # break 00:07:40.450 23:41:10 -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.450 23:41:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.450 23:41:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:40.450 23:41:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:40.450 23:41:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:40.450 23:41:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:40.450 23:41:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.450 23:41:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.450 23:41:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:40.450 23:41:11 -- bdev/nbd_common.sh@41 -- # break 00:07:40.450 23:41:11 -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.450 23:41:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.450 23:41:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:40.709 23:41:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:40.709 23:41:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:40.709 23:41:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:40.709 23:41:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.709 23:41:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.709 23:41:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:40.710 23:41:11 -- bdev/nbd_common.sh@41 -- # break 00:07:40.710 23:41:11 -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.710 23:41:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.710 23:41:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:40.993 23:41:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:40.993 23:41:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:40.993 23:41:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:40.993 23:41:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.993 23:41:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.993 23:41:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:40.993 23:41:11 -- bdev/nbd_common.sh@41 -- # break 00:07:40.993 23:41:11 -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.993 23:41:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.993 23:41:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:41.253 23:41:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:41.253 23:41:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:41.253 23:41:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:41.253 23:41:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.253 23:41:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.253 23:41:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:41.253 23:41:11 -- bdev/nbd_common.sh@41 -- # break 00:07:41.253 23:41:11 -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.253 23:41:11 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:41.253 23:41:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.253 23:41:11 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:41.253 23:41:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:41.253 23:41:11 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:41.253 23:41:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:41.253 23:41:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:41.253 23:41:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:41.253 23:41:11 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:41.514 23:41:11 -- bdev/nbd_common.sh@65 -- # true 00:07:41.514 23:41:11 -- bdev/nbd_common.sh@65 -- # count=0 00:07:41.514 23:41:11 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:41.514 23:41:11 -- bdev/nbd_common.sh@104 -- # count=0 00:07:41.514 23:41:11 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:41.514 23:41:11 -- bdev/nbd_common.sh@109 -- # return 0 00:07:41.514 23:41:11 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:41.514 23:41:11 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.514 23:41:11 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:41.514 23:41:11 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:07:41.514 23:41:11 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:07:41.514 23:41:11 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:41.514 malloc_lvol_verify 00:07:41.514 23:41:12 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:41.773 fe991e25-b102-4420-bb5c-002a92adafaf 00:07:41.773 23:41:12 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:42.031 9b86b77a-89a6-471b-8f54-2ad6cb430557 00:07:42.031 23:41:12 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:42.291 /dev/nbd0 00:07:42.291 23:41:12 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:07:42.291 Discarding device blocks: 0/4096mke2fs 1.47.0 (5-Feb-2023) 00:07:42.291  done 00:07:42.291 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:42.291 00:07:42.291 Allocating group tables: 0/1 done 00:07:42.291 Writing inode tables: 0/1 done 00:07:42.291 Creating journal (1024 blocks): done 00:07:42.291 Writing superblocks and filesystem accounting information: 0/1 done 00:07:42.291 00:07:42.291 23:41:12 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:07:42.291 23:41:12 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:42.291 23:41:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.291 23:41:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:42.291 23:41:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:42.291 23:41:12 -- bdev/nbd_common.sh@51 -- # local i 00:07:42.291 23:41:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.291 23:41:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:42.549 23:41:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:42.549 23:41:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:42.549 23:41:13 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:07:42.549 23:41:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.549 23:41:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.549 23:41:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:42.549 23:41:13 -- bdev/nbd_common.sh@41 -- # break 00:07:42.549 23:41:13 -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.549 23:41:13 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:07:42.549 23:41:13 -- bdev/nbd_common.sh@147 -- # return 0 00:07:42.549 23:41:13 -- bdev/blockdev.sh@324 -- # killprocess 60400 00:07:42.549 23:41:13 -- common/autotest_common.sh@936 -- # '[' -z 60400 ']' 00:07:42.549 23:41:13 -- common/autotest_common.sh@940 -- # kill -0 60400 00:07:42.549 23:41:13 -- common/autotest_common.sh@941 -- # uname 00:07:42.549 23:41:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:42.549 23:41:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60400 00:07:42.549 killing process with pid 60400 00:07:42.549 23:41:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:42.549 23:41:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:42.549 23:41:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60400' 00:07:42.549 23:41:13 -- common/autotest_common.sh@955 -- # kill 60400 00:07:42.549 23:41:13 -- common/autotest_common.sh@960 -- # wait 60400 00:07:43.489 23:41:13 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:07:43.489 00:07:43.489 real 0m10.261s 00:07:43.489 user 0m14.029s 00:07:43.489 sys 0m3.005s 00:07:43.489 23:41:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:43.489 ************************************ 00:07:43.489 END TEST bdev_nbd 00:07:43.489 23:41:13 -- common/autotest_common.sh@10 -- # set +x 00:07:43.489 ************************************ 00:07:43.489 23:41:13 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:07:43.489 23:41:13 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:07:43.489 skipping fio tests on NVMe due to multi-ns failures. 00:07:43.489 23:41:13 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:43.489 23:41:13 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:43.489 23:41:13 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:43.489 23:41:13 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:07:43.489 23:41:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.489 23:41:13 -- common/autotest_common.sh@10 -- # set +x 00:07:43.489 ************************************ 00:07:43.489 START TEST bdev_verify 00:07:43.489 ************************************ 00:07:43.489 23:41:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:43.489 [2024-12-13 23:41:14.049504] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:43.489 [2024-12-13 23:41:14.049618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60781 ] 00:07:43.489 [2024-12-13 23:41:14.199657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:43.750 [2024-12-13 23:41:14.380023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.750 [2024-12-13 23:41:14.380105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.321 Running I/O for 5 seconds... 00:07:49.610 00:07:49.611 Latency(us) 00:07:49.611 [2024-12-13T23:41:20.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.611 [2024-12-13T23:41:20.343Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:49.611 Verification LBA range: start 0x0 length 0xbd0bd 00:07:49.611 Nvme0n1 : 5.05 2663.08 10.40 0.00 0.00 47896.59 13409.67 72997.02 00:07:49.611 [2024-12-13T23:41:20.343Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:49.611 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:49.611 Nvme0n1 : 5.07 2711.16 10.59 0.00 0.00 46858.65 1310.72 52025.50 00:07:49.611 [2024-12-13T23:41:20.343Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:49.611 Verification LBA range: start 0x0 length 0xa0000 00:07:49.611 Nvme1n1 : 5.05 2661.98 10.40 0.00 0.00 47890.60 12855.14 70980.53 00:07:49.611 [2024-12-13T23:41:20.343Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:49.611 Verification LBA range: start 0xa0000 length 0xa0000 00:07:49.611 Nvme1n1 : 5.07 2709.78 10.59 0.00 0.00 46814.88 3276.80 47992.52 00:07:49.611 [2024-12-13T23:41:20.343Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:49.611 Verification LBA range: start 0x0 length 0x80000 00:07:49.611 Nvme2n1 : 5.06 2666.91 10.42 0.00 0.00 47711.21 4310.25 62914.56 00:07:49.611 [2024-12-13T23:41:20.343Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:49.611 Verification LBA range: start 0x80000 length 0x80000 00:07:49.611 Nvme2n1 : 5.06 2710.23 10.59 0.00 0.00 47098.44 9175.04 65334.35 00:07:49.611 [2024-12-13T23:41:20.343Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:49.611 Verification LBA range: start 0x0 length 0x80000 00:07:49.611 Nvme2n2 : 5.06 2664.87 10.41 0.00 0.00 47665.59 7410.61 60494.77 00:07:49.611 [2024-12-13T23:41:20.343Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:49.611 Verification LBA range: start 0x80000 length 0x80000 00:07:49.611 Nvme2n2 : 5.06 2709.39 10.58 0.00 0.00 47018.50 8065.97 57671.68 00:07:49.611 [2024-12-13T23:41:20.343Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:49.611 Verification LBA range: start 0x0 length 0x80000 00:07:49.611 Nvme2n3 : 5.07 2668.46 10.42 0.00 0.00 47551.83 2180.33 56865.08 00:07:49.611 [2024-12-13T23:41:20.343Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:49.611 Verification LBA range: start 0x80000 length 0x80000 00:07:49.611 Nvme2n3 : 5.06 2707.33 10.58 0.00 0.00 46997.61 9830.40 58478.28 00:07:49.611 [2024-12-13T23:41:20.343Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:49.611 Verification LBA range: start 0x0 length 0x20000 00:07:49.611 Nvme3n1 
: 5.07 2667.83 10.42 0.00 0.00 47473.64 2772.68 57268.38 00:07:49.611 [2024-12-13T23:41:20.343Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:49.611 Verification LBA range: start 0x20000 length 0x20000 00:07:49.611 Nvme3n1 : 5.06 2705.30 10.57 0.00 0.00 46951.10 12855.14 56058.49 00:07:49.611 [2024-12-13T23:41:20.343Z] =================================================================================================================== 00:07:49.611 [2024-12-13T23:41:20.343Z] Total : 32246.33 125.96 0.00 0.00 47324.01 1310.72 72997.02 00:08:16.207 00:08:16.207 real 0m28.864s 00:08:16.207 user 0m36.638s 00:08:16.207 sys 0m0.520s 00:08:16.207 23:41:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:16.207 ************************************ 00:08:16.207 END TEST bdev_verify 00:08:16.207 ************************************ 00:08:16.207 23:41:42 -- common/autotest_common.sh@10 -- # set +x 00:08:16.207 23:41:42 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:16.207 23:41:42 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:16.207 23:41:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:16.208 23:41:42 -- common/autotest_common.sh@10 -- # set +x 00:08:16.208 ************************************ 00:08:16.208 START TEST bdev_verify_big_io 00:08:16.208 ************************************ 00:08:16.208 23:41:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:16.208 [2024-12-13 23:41:43.018915] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:16.208 [2024-12-13 23:41:43.019094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60997 ] 00:08:16.208 [2024-12-13 23:41:43.176672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:16.208 [2024-12-13 23:41:43.409811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.208 [2024-12-13 23:41:43.409935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.208 Running I/O for 5 seconds... 
00:08:19.488 00:08:19.488 Latency(us) 00:08:19.488 [2024-12-13T23:41:50.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.488 [2024-12-13T23:41:50.220Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:19.488 Verification LBA range: start 0x0 length 0xbd0b 00:08:19.488 Nvme0n1 : 5.28 331.20 20.70 0.00 0.00 381736.64 24197.91 571070.62 00:08:19.488 [2024-12-13T23:41:50.220Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:19.488 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:19.488 Nvme0n1 : 5.39 347.59 21.72 0.00 0.00 341075.94 1127.98 435562.34 00:08:19.488 [2024-12-13T23:41:50.220Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:19.488 Verification LBA range: start 0x0 length 0xa000 00:08:19.488 Nvme1n1 : 5.28 331.10 20.69 0.00 0.00 378689.93 24097.08 512995.64 00:08:19.488 [2024-12-13T23:41:50.220Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:19.488 Verification LBA range: start 0xa000 length 0xa000 00:08:19.488 Nvme1n1 : 5.27 297.09 18.57 0.00 0.00 422883.66 55655.19 609787.27 00:08:19.488 [2024-12-13T23:41:50.220Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:19.488 Verification LBA range: start 0x0 length 0x8000 00:08:19.488 Nvme2n1 : 5.28 331.01 20.69 0.00 0.00 375679.67 24399.56 506542.87 00:08:19.488 [2024-12-13T23:41:50.220Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:19.488 Verification LBA range: start 0x8000 length 0x8000 00:08:19.488 Nvme2n1 : 5.28 296.95 18.56 0.00 0.00 415793.64 55655.19 535580.36 00:08:19.488 [2024-12-13T23:41:50.220Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:19.488 Verification LBA range: start 0x0 length 0x8000 00:08:19.488 Nvme2n2 : 5.28 330.91 20.68 0.00 0.00 370902.90 24399.56 451694.28 00:08:19.488 [2024-12-13T23:41:50.220Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:19.488 Verification LBA range: start 0x8000 length 0x8000 00:08:19.488 Nvme2n2 : 5.32 302.56 18.91 0.00 0.00 403272.03 42346.34 480731.77 00:08:19.488 [2024-12-13T23:41:50.220Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:19.488 Verification LBA range: start 0x0 length 0x8000 00:08:19.488 Nvme2n3 : 5.31 336.34 21.02 0.00 0.00 360382.66 28835.84 464599.83 00:08:19.488 [2024-12-13T23:41:50.220Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:19.488 Verification LBA range: start 0x8000 length 0x8000 00:08:19.488 Nvme2n3 : 5.35 319.08 19.94 0.00 0.00 380631.45 11191.53 442015.11 00:08:19.488 [2024-12-13T23:41:50.220Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:19.488 Verification LBA range: start 0x0 length 0x2000 00:08:19.488 Nvme3n1 : 5.33 351.34 21.96 0.00 0.00 341941.32 2470.20 467826.22 00:08:19.488 [2024-12-13T23:41:50.220Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:19.488 Verification LBA range: start 0x2000 length 0x2000 00:08:19.488 Nvme3n1 : 5.36 325.90 20.37 0.00 0.00 367933.94 11746.07 445241.50 00:08:19.488 [2024-12-13T23:41:50.220Z] =================================================================================================================== 00:08:19.488 [2024-12-13T23:41:50.220Z] Total : 3901.05 243.82 0.00 0.00 377049.79 1127.98 609787.27 00:08:20.867 ************************************ 
00:08:20.867 END TEST bdev_verify_big_io 00:08:20.867 ************************************ 00:08:20.867 00:08:20.867 real 0m8.570s 00:08:20.867 user 0m15.182s 00:08:20.867 sys 0m0.338s 00:08:20.867 23:41:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:20.867 23:41:51 -- common/autotest_common.sh@10 -- # set +x 00:08:20.867 23:41:51 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:20.867 23:41:51 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:20.867 23:41:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.867 23:41:51 -- common/autotest_common.sh@10 -- # set +x 00:08:20.867 ************************************ 00:08:20.867 START TEST bdev_write_zeroes 00:08:20.867 ************************************ 00:08:20.867 23:41:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:21.124 [2024-12-13 23:41:51.617944] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:21.124 [2024-12-13 23:41:51.618037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61117 ] 00:08:21.124 [2024-12-13 23:41:51.755201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.381 [2024-12-13 23:41:51.937708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.949 Running I/O for 1 seconds... 
00:08:22.882 00:08:22.882 Latency(us) 00:08:22.882 [2024-12-13T23:41:53.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.882 [2024-12-13T23:41:53.614Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:22.883 Nvme0n1 : 1.02 8406.90 32.84 0.00 0.00 15190.54 4587.52 139541.27 00:08:22.883 [2024-12-13T23:41:53.615Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:22.883 Nvme1n1 : 1.02 8486.75 33.15 0.00 0.00 15032.65 9074.22 127442.31 00:08:22.883 [2024-12-13T23:41:53.615Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:22.883 Nvme2n1 : 1.02 8477.20 33.11 0.00 0.00 14984.94 8469.27 127442.31 00:08:22.883 [2024-12-13T23:41:53.615Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:22.883 Nvme2n2 : 1.02 8530.27 33.32 0.00 0.00 14820.67 7965.14 127442.31 00:08:22.883 [2024-12-13T23:41:53.615Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:22.883 Nvme2n3 : 1.02 8457.87 33.04 0.00 0.00 14902.91 8922.98 126635.72 00:08:22.883 [2024-12-13T23:41:53.615Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:22.883 Nvme3n1 : 1.02 8507.74 33.23 0.00 0.00 14803.22 6553.60 126635.72 00:08:22.883 [2024-12-13T23:41:53.615Z] =================================================================================================================== 00:08:22.883 [2024-12-13T23:41:53.615Z] Total : 50866.72 198.70 0.00 0.00 14955.05 4587.52 139541.27 00:08:23.823 00:08:23.823 real 0m2.802s 00:08:23.823 user 0m2.506s 00:08:23.823 sys 0m0.180s 00:08:23.823 ************************************ 00:08:23.823 END TEST bdev_write_zeroes 00:08:23.823 ************************************ 00:08:23.823 23:41:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:23.823 23:41:54 -- common/autotest_common.sh@10 -- # set +x 00:08:23.823 23:41:54 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:23.823 23:41:54 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:23.823 23:41:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.823 23:41:54 -- common/autotest_common.sh@10 -- # set +x 00:08:23.823 ************************************ 00:08:23.823 START TEST bdev_json_nonenclosed 00:08:23.823 ************************************ 00:08:23.823 23:41:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:23.823 [2024-12-13 23:41:54.492079] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:23.823 [2024-12-13 23:41:54.492205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61165 ] 00:08:24.083 [2024-12-13 23:41:54.641756] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.344 [2024-12-13 23:41:54.886305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.344 [2024-12-13 23:41:54.886522] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:24.344 [2024-12-13 23:41:54.886542] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:24.608 00:08:24.608 real 0m0.766s 00:08:24.608 user 0m0.541s 00:08:24.608 sys 0m0.118s 00:08:24.608 23:41:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:24.608 ************************************ 00:08:24.608 END TEST bdev_json_nonenclosed 00:08:24.608 ************************************ 00:08:24.608 23:41:55 -- common/autotest_common.sh@10 -- # set +x 00:08:24.608 23:41:55 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:24.609 23:41:55 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:24.609 23:41:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:24.609 23:41:55 -- common/autotest_common.sh@10 -- # set +x 00:08:24.609 ************************************ 00:08:24.609 START TEST bdev_json_nonarray 00:08:24.609 ************************************ 00:08:24.609 23:41:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:24.609 [2024-12-13 23:41:55.330819] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:24.609 [2024-12-13 23:41:55.330961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61196 ] 00:08:24.873 [2024-12-13 23:41:55.483600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.134 [2024-12-13 23:41:55.714195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.134 [2024-12-13 23:41:55.714404] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:25.134 [2024-12-13 23:41:55.714426] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:25.395 00:08:25.395 real 0m0.761s 00:08:25.395 user 0m0.535s 00:08:25.395 sys 0m0.119s 00:08:25.395 ************************************ 00:08:25.395 END TEST bdev_json_nonarray 00:08:25.395 ************************************ 00:08:25.395 23:41:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:25.395 23:41:56 -- common/autotest_common.sh@10 -- # set +x 00:08:25.395 23:41:56 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:08:25.395 23:41:56 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:08:25.395 23:41:56 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:08:25.395 23:41:56 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:08:25.395 23:41:56 -- bdev/blockdev.sh@809 -- # cleanup 00:08:25.395 23:41:56 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:25.395 23:41:56 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:25.657 23:41:56 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:08:25.657 23:41:56 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:08:25.657 23:41:56 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:08:25.657 23:41:56 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:08:25.657 00:08:25.657 real 1m0.385s 00:08:25.657 user 1m21.755s 00:08:25.657 sys 0m5.444s 00:08:25.657 ************************************ 00:08:25.657 END TEST blockdev_nvme 00:08:25.657 ************************************ 00:08:25.657 23:41:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:25.657 23:41:56 -- common/autotest_common.sh@10 -- # set +x 00:08:25.657 23:41:56 -- spdk/autotest.sh@206 -- # uname -s 00:08:25.657 23:41:56 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:08:25.657 23:41:56 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:25.657 23:41:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:25.657 23:41:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.657 23:41:56 -- common/autotest_common.sh@10 -- # set +x 00:08:25.657 ************************************ 00:08:25.657 START TEST blockdev_nvme_gpt 00:08:25.657 ************************************ 00:08:25.657 23:41:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:25.657 * Looking for test storage... 
00:08:25.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:25.657 23:41:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:25.657 23:41:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:25.657 23:41:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:25.657 23:41:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:25.657 23:41:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:25.657 23:41:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:25.657 23:41:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:25.657 23:41:56 -- scripts/common.sh@335 -- # IFS=.-: 00:08:25.657 23:41:56 -- scripts/common.sh@335 -- # read -ra ver1 00:08:25.657 23:41:56 -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.657 23:41:56 -- scripts/common.sh@336 -- # read -ra ver2 00:08:25.657 23:41:56 -- scripts/common.sh@337 -- # local 'op=<' 00:08:25.657 23:41:56 -- scripts/common.sh@339 -- # ver1_l=2 00:08:25.657 23:41:56 -- scripts/common.sh@340 -- # ver2_l=1 00:08:25.657 23:41:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:25.657 23:41:56 -- scripts/common.sh@343 -- # case "$op" in 00:08:25.657 23:41:56 -- scripts/common.sh@344 -- # : 1 00:08:25.657 23:41:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:25.657 23:41:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:25.657 23:41:56 -- scripts/common.sh@364 -- # decimal 1 00:08:25.657 23:41:56 -- scripts/common.sh@352 -- # local d=1 00:08:25.657 23:41:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.657 23:41:56 -- scripts/common.sh@354 -- # echo 1 00:08:25.657 23:41:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:25.657 23:41:56 -- scripts/common.sh@365 -- # decimal 2 00:08:25.657 23:41:56 -- scripts/common.sh@352 -- # local d=2 00:08:25.657 23:41:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.657 23:41:56 -- scripts/common.sh@354 -- # echo 2 00:08:25.657 23:41:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:25.657 23:41:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:25.657 23:41:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:25.657 23:41:56 -- scripts/common.sh@367 -- # return 0 00:08:25.657 23:41:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.657 23:41:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:25.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.657 --rc genhtml_branch_coverage=1 00:08:25.657 --rc genhtml_function_coverage=1 00:08:25.657 --rc genhtml_legend=1 00:08:25.657 --rc geninfo_all_blocks=1 00:08:25.657 --rc geninfo_unexecuted_blocks=1 00:08:25.657 00:08:25.657 ' 00:08:25.657 23:41:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:25.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.657 --rc genhtml_branch_coverage=1 00:08:25.657 --rc genhtml_function_coverage=1 00:08:25.657 --rc genhtml_legend=1 00:08:25.657 --rc geninfo_all_blocks=1 00:08:25.657 --rc geninfo_unexecuted_blocks=1 00:08:25.657 00:08:25.657 ' 00:08:25.657 23:41:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:25.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.657 --rc genhtml_branch_coverage=1 00:08:25.657 --rc genhtml_function_coverage=1 00:08:25.657 --rc genhtml_legend=1 00:08:25.657 --rc geninfo_all_blocks=1 00:08:25.657 --rc geninfo_unexecuted_blocks=1 00:08:25.657 00:08:25.657 ' 00:08:25.657 23:41:56 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:25.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.657 --rc genhtml_branch_coverage=1 00:08:25.657 --rc genhtml_function_coverage=1 00:08:25.657 --rc genhtml_legend=1 00:08:25.657 --rc geninfo_all_blocks=1 00:08:25.657 --rc geninfo_unexecuted_blocks=1 00:08:25.657 00:08:25.657 ' 00:08:25.657 23:41:56 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:25.657 23:41:56 -- bdev/nbd_common.sh@6 -- # set -e 00:08:25.657 23:41:56 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:25.657 23:41:56 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:25.657 23:41:56 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:25.657 23:41:56 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:25.657 23:41:56 -- bdev/blockdev.sh@18 -- # : 00:08:25.657 23:41:56 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:08:25.657 23:41:56 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:08:25.657 23:41:56 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:08:25.657 23:41:56 -- bdev/blockdev.sh@672 -- # uname -s 00:08:25.657 23:41:56 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:08:25.657 23:41:56 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:08:25.657 23:41:56 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:08:25.657 23:41:56 -- bdev/blockdev.sh@681 -- # crypto_device= 00:08:25.657 23:41:56 -- bdev/blockdev.sh@682 -- # dek= 00:08:25.657 23:41:56 -- bdev/blockdev.sh@683 -- # env_ctx= 00:08:25.657 23:41:56 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:08:25.657 23:41:56 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:08:25.657 23:41:56 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:08:25.657 23:41:56 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:08:25.657 23:41:56 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:08:25.657 23:41:56 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=61279 00:08:25.657 23:41:56 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:25.657 23:41:56 -- bdev/blockdev.sh@47 -- # waitforlisten 61279 00:08:25.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.657 23:41:56 -- common/autotest_common.sh@829 -- # '[' -z 61279 ']' 00:08:25.657 23:41:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.657 23:41:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.657 23:41:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.657 23:41:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.658 23:41:56 -- common/autotest_common.sh@10 -- # set +x 00:08:25.658 23:41:56 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:25.918 [2024-12-13 23:41:56.437532] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:25.919 [2024-12-13 23:41:56.437674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61279 ] 00:08:25.919 [2024-12-13 23:41:56.591160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.180 [2024-12-13 23:41:56.821254] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:26.180 [2024-12-13 23:41:56.821507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.567 23:41:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:27.567 23:41:57 -- common/autotest_common.sh@862 -- # return 0 00:08:27.567 23:41:57 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:08:27.567 23:41:57 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:08:27.567 23:41:57 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:27.829 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:27.829 Waiting for block devices as requested 00:08:27.829 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:08:28.090 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:08:28.090 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:08:28.351 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:08:33.633 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:08:33.633 23:42:03 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:08:33.633 23:42:03 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:08:33.633 23:42:03 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:08:33.633 23:42:03 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:08:33.633 23:42:03 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:33.633 23:42:03 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:08:33.633 23:42:03 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:08:33.633 23:42:03 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:08:33.633 23:42:03 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:33.633 23:42:03 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:33.633 23:42:03 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:08:33.633 23:42:03 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:08:33.633 23:42:03 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:33.633 23:42:03 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:33.634 23:42:03 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:33.634 23:42:03 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:08:33.634 23:42:03 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:08:33.634 23:42:03 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:33.634 23:42:03 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:33.634 23:42:03 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:33.634 23:42:03 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:08:33.634 23:42:03 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:08:33.634 23:42:03 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:33.634 23:42:03 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:33.634 23:42:03 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:33.634 23:42:03 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:08:33.634 23:42:03 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:08:33.634 23:42:03 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:33.634 23:42:03 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:33.634 23:42:03 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:33.634 23:42:03 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:08:33.634 23:42:03 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:08:33.634 23:42:03 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:33.634 23:42:03 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:33.634 23:42:03 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:33.634 23:42:03 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:08:33.634 23:42:03 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:08:33.634 23:42:03 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:33.634 23:42:03 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:33.634 23:42:03 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:08:33.634 23:42:03 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:08:33.634 23:42:03 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:08:33.634 23:42:03 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:33.634 23:42:03 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:08:33.634 23:42:03 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:08:33.634 23:42:03 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:08:33.634 23:42:03 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:08:33.634 BYT; 00:08:33.634 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:33.634 23:42:03 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:08:33.634 BYT; 00:08:33.634 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:33.634 23:42:03 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:08:33.634 23:42:03 -- bdev/blockdev.sh@114 -- # break 00:08:33.634 23:42:03 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:08:33.634 23:42:03 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:33.634 23:42:03 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:33.634 23:42:03 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:33.634 23:42:03 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:08:33.634 23:42:03 -- scripts/common.sh@410 -- # local spdk_guid 00:08:33.634 23:42:03 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:33.634 23:42:03 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:33.634 23:42:03 -- scripts/common.sh@415 -- # IFS='()' 00:08:33.634 23:42:03 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:08:33.634 23:42:03 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:33.634 23:42:03 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:33.634 23:42:03 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:33.634 23:42:03 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:33.634 23:42:03 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:33.634 23:42:03 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:08:33.634 23:42:03 -- scripts/common.sh@422 -- # local spdk_guid 00:08:33.634 23:42:03 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:33.634 23:42:03 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:33.634 23:42:03 -- scripts/common.sh@427 -- # IFS='()' 00:08:33.634 23:42:03 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:08:33.634 23:42:03 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:33.634 23:42:03 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:33.634 23:42:03 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:33.634 23:42:03 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:33.634 23:42:03 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:33.634 23:42:03 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:08:34.573 The operation has completed successfully. 00:08:34.573 23:42:05 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:08:35.546 The operation has completed successfully. 
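Note: the trace above is setup_gpt_conf() picking an unlabeled namespace, giving it a GPT label with two half-size partitions, and retagging them with SPDK's partition-type GUIDs (read out of module/bdev/gpt/gpt.h) so the gpt bdev module will expose them as Nvme0n1p1/Nvme0n1p2. A standalone sketch of the same steps, with the device path and GUID values copied from this run:

# Hedged sketch: reproduce the partitioning seen above outside the harness.
DEV=/dev/nvme2n1                                        # unlabeled namespace chosen by the test
SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b      # SPDK_GPT_PART_TYPE_GUID from gpt.h
SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c  # SPDK_GPT_PART_TYPE_GUID_OLD from gpt.h
# Label the disk GPT and create two partitions covering the namespace.
parted -s "$DEV" mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
# Set the SPDK partition-type GUID and a fixed unique GUID on each partition.
sgdisk -t 1:"$SPDK_GPT_GUID"     -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$DEV"
sgdisk -t 2:"$SPDK_GPT_OLD_GUID" -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$DEV"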
00:08:35.546 23:42:06 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:36.110 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:36.367 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:36.367 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:08:36.367 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:08:36.367 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:08:36.367 23:42:07 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:08:36.367 23:42:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.367 23:42:07 -- common/autotest_common.sh@10 -- # set +x 00:08:36.367 [] 00:08:36.367 23:42:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.367 23:42:07 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:08:36.367 23:42:07 -- bdev/blockdev.sh@79 -- # local json 00:08:36.367 23:42:07 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:08:36.367 23:42:07 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:36.624 23:42:07 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:08:36.624 23:42:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.624 23:42:07 -- common/autotest_common.sh@10 -- # set +x 00:08:36.882 23:42:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.882 23:42:07 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:08:36.882 23:42:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.882 23:42:07 -- common/autotest_common.sh@10 -- # set +x 00:08:36.882 23:42:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.882 23:42:07 -- bdev/blockdev.sh@738 -- # cat 00:08:36.882 23:42:07 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:08:36.882 23:42:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.882 23:42:07 -- common/autotest_common.sh@10 -- # set +x 00:08:36.882 23:42:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.882 23:42:07 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:08:36.882 23:42:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.882 23:42:07 -- common/autotest_common.sh@10 -- # set +x 00:08:36.882 23:42:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.882 23:42:07 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:36.882 23:42:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.882 23:42:07 -- common/autotest_common.sh@10 -- # set +x 00:08:36.882 23:42:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.882 23:42:07 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:08:36.882 23:42:07 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:08:36.882 23:42:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.882 23:42:07 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:08:36.882 23:42:07 -- common/autotest_common.sh@10 -- # set +x 00:08:36.882 23:42:07 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.882 23:42:07 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:08:36.882 23:42:07 -- bdev/blockdev.sh@747 -- # jq -r .name 00:08:36.883 23:42:07 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "ebe97247-3513-438e-b0f9-e4366192dc34"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ebe97247-3513-438e-b0f9-e4366192dc34",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' 
"name": "Nvme2n1",' ' "aliases": [' ' "320cd12f-3e56-4835-99c0-3089098cc917"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "320cd12f-3e56-4835-99c0-3089098cc917",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "a44c7fa5-a4d3-4686-a4f1-4dd2d6f0648c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a44c7fa5-a4d3-4686-a4f1-4dd2d6f0648c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "6174f902-b193-4ca6-9d73-37c1de31116a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6174f902-b193-4ca6-9d73-37c1de31116a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' 
' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "c10c9eef-f1ea-4e12-b61e-624fa5c17da4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "c10c9eef-f1ea-4e12-b61e-624fa5c17da4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:36.883 23:42:07 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:08:36.883 23:42:07 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:08:36.883 23:42:07 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:08:36.883 23:42:07 -- bdev/blockdev.sh@752 -- # killprocess 61279 00:08:36.883 23:42:07 -- common/autotest_common.sh@936 -- # '[' -z 61279 ']' 00:08:36.883 23:42:07 -- common/autotest_common.sh@940 -- # kill -0 61279 00:08:36.883 23:42:07 -- common/autotest_common.sh@941 -- # uname 00:08:36.883 23:42:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:36.883 23:42:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61279 00:08:36.883 23:42:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:36.883 killing process with pid 61279 00:08:36.883 23:42:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:36.883 23:42:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61279' 00:08:36.883 23:42:07 -- common/autotest_common.sh@955 -- # kill 61279 00:08:36.883 23:42:07 -- common/autotest_common.sh@960 -- # wait 61279 00:08:38.254 23:42:08 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:38.254 23:42:08 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:38.254 23:42:08 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:38.254 23:42:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:38.254 23:42:08 -- common/autotest_common.sh@10 -- # set +x 00:08:38.254 ************************************ 00:08:38.254 START TEST bdev_hello_world 00:08:38.254 ************************************ 00:08:38.254 23:42:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev 
--json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:38.254 [2024-12-13 23:42:08.786075] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:38.254 [2024-12-13 23:42:08.786188] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61933 ] 00:08:38.254 [2024-12-13 23:42:08.931815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.512 [2024-12-13 23:42:09.073563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.078 [2024-12-13 23:42:09.546448] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:39.078 [2024-12-13 23:42:09.546503] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:08:39.078 [2024-12-13 23:42:09.546521] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:39.078 [2024-12-13 23:42:09.548881] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:39.078 [2024-12-13 23:42:09.549369] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:39.078 [2024-12-13 23:42:09.549399] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:39.078 [2024-12-13 23:42:09.549553] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:39.078 00:08:39.078 [2024-12-13 23:42:09.549573] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:39.645 00:08:39.645 real 0m1.614s 00:08:39.645 user 0m1.343s 00:08:39.645 sys 0m0.165s 00:08:39.645 23:42:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:39.645 23:42:10 -- common/autotest_common.sh@10 -- # set +x 00:08:39.645 ************************************ 00:08:39.645 END TEST bdev_hello_world 00:08:39.645 ************************************ 00:08:39.905 23:42:10 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:08:39.905 23:42:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:39.905 23:42:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:39.905 23:42:10 -- common/autotest_common.sh@10 -- # set +x 00:08:39.905 ************************************ 00:08:39.905 START TEST bdev_bounds 00:08:39.905 ************************************ 00:08:39.905 23:42:10 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:08:39.905 23:42:10 -- bdev/blockdev.sh@288 -- # bdevio_pid=61964 00:08:39.905 23:42:10 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:39.905 Process bdevio pid: 61964 00:08:39.905 23:42:10 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 61964' 00:08:39.905 23:42:10 -- bdev/blockdev.sh@291 -- # waitforlisten 61964 00:08:39.905 23:42:10 -- common/autotest_common.sh@829 -- # '[' -z 61964 ']' 00:08:39.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.905 23:42:10 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:39.905 23:42:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.905 23:42:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:39.905 23:42:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
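Note: bdev_hello_world above runs the hello_bdev example against the generated bdev.json, and bdev_bounds then launches bdevio with -w so it waits for an external trigger before tests.py drives the suites (seen just below). A sketch of the equivalent manual invocations, with paths and flags copied from this run:

# Hedged sketch: the two test binaries run by hand.
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1
# Start bdevio waiting for tests, then trigger the suites over its RPC socket.
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests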
00:08:39.905 23:42:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:39.905 23:42:10 -- common/autotest_common.sh@10 -- # set +x 00:08:39.905 [2024-12-13 23:42:10.448220] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:39.905 [2024-12-13 23:42:10.448342] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61964 ] 00:08:39.905 [2024-12-13 23:42:10.599625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:40.166 [2024-12-13 23:42:10.780718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.166 [2024-12-13 23:42:10.781009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.166 [2024-12-13 23:42:10.781100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.551 23:42:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:41.551 23:42:11 -- common/autotest_common.sh@862 -- # return 0 00:08:41.551 23:42:11 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:41.551 I/O targets: 00:08:41.551 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:08:41.551 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:08:41.551 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:41.551 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:41.551 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:41.551 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:41.551 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:41.551 00:08:41.551 00:08:41.551 CUnit - A unit testing framework for C - Version 2.1-3 00:08:41.551 http://cunit.sourceforge.net/ 00:08:41.551 00:08:41.551 00:08:41.551 Suite: bdevio tests on: Nvme3n1 00:08:41.551 Test: blockdev write read block ...passed 00:08:41.551 Test: blockdev write zeroes read block ...passed 00:08:41.551 Test: blockdev write zeroes read no split ...passed 00:08:41.551 Test: blockdev write zeroes read split ...passed 00:08:41.551 Test: blockdev write zeroes read split partial ...passed 00:08:41.551 Test: blockdev reset ...[2024-12-13 23:42:12.099839] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:08:41.551 [2024-12-13 23:42:12.102417] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:41.551 passed 00:08:41.551 Test: blockdev write read 8 blocks ...passed 00:08:41.551 Test: blockdev write read size > 128k ...passed 00:08:41.551 Test: blockdev write read invalid size ...passed 00:08:41.551 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:41.551 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:41.551 Test: blockdev write read max offset ...passed 00:08:41.551 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:41.551 Test: blockdev writev readv 8 blocks ...passed 00:08:41.551 Test: blockdev writev readv 30 x 1block ...passed 00:08:41.551 Test: blockdev writev readv block ...passed 00:08:41.551 Test: blockdev writev readv size > 128k ...passed 00:08:41.551 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:41.551 Test: blockdev comparev and writev ...[2024-12-13 23:42:12.110058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27820a000 len:0x1000 00:08:41.551 [2024-12-13 23:42:12.110105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:41.551 passed 00:08:41.551 Test: blockdev nvme passthru rw ...passed 00:08:41.551 Test: blockdev nvme passthru vendor specific ...passed 00:08:41.551 Test: blockdev nvme admin passthru ...[2024-12-13 23:42:12.111088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:41.551 [2024-12-13 23:42:12.111118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:41.551 passed 00:08:41.551 Test: blockdev copy ...passed 00:08:41.551 Suite: bdevio tests on: Nvme2n3 00:08:41.551 Test: blockdev write read block ...passed 00:08:41.551 Test: blockdev write zeroes read block ...passed 00:08:41.551 Test: blockdev write zeroes read no split ...passed 00:08:41.551 Test: blockdev write zeroes read split ...passed 00:08:41.551 Test: blockdev write zeroes read split partial ...passed 00:08:41.551 Test: blockdev reset ...[2024-12-13 23:42:12.167578] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:41.551 passed 00:08:41.551 Test: blockdev write read 8 blocks ...[2024-12-13 23:42:12.170959] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:41.551 passed 00:08:41.551 Test: blockdev write read size > 128k ...passed 00:08:41.551 Test: blockdev write read invalid size ...passed 00:08:41.551 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:41.551 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:41.551 Test: blockdev write read max offset ...passed 00:08:41.551 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:41.551 Test: blockdev writev readv 8 blocks ...passed 00:08:41.551 Test: blockdev writev readv 30 x 1block ...passed 00:08:41.551 Test: blockdev writev readv block ...passed 00:08:41.551 Test: blockdev writev readv size > 128k ...passed 00:08:41.551 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:41.551 Test: blockdev comparev and writev ...[2024-12-13 23:42:12.188844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x273104000 len:0x1000 00:08:41.551 [2024-12-13 23:42:12.188885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:41.551 passed 00:08:41.551 Test: blockdev nvme passthru rw ...passed 00:08:41.551 Test: blockdev nvme passthru vendor specific ...[2024-12-13 23:42:12.191516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:41.551 [2024-12-13 23:42:12.191542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:41.551 passed 00:08:41.551 Test: blockdev nvme admin passthru ...passed 00:08:41.551 Test: blockdev copy ...passed 00:08:41.551 Suite: bdevio tests on: Nvme2n2 00:08:41.551 Test: blockdev write read block ...passed 00:08:41.551 Test: blockdev write zeroes read block ...passed 00:08:41.551 Test: blockdev write zeroes read no split ...passed 00:08:41.551 Test: blockdev write zeroes read split ...passed 00:08:41.551 Test: blockdev write zeroes read split partial ...passed 00:08:41.551 Test: blockdev reset ...[2024-12-13 23:42:12.250711] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:41.551 [2024-12-13 23:42:12.253938] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:41.551 passed 00:08:41.551 Test: blockdev write read 8 blocks ...passed 00:08:41.551 Test: blockdev write read size > 128k ...passed 00:08:41.551 Test: blockdev write read invalid size ...passed 00:08:41.551 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:41.551 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:41.551 Test: blockdev write read max offset ...passed 00:08:41.551 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:41.551 Test: blockdev writev readv 8 blocks ...passed 00:08:41.551 Test: blockdev writev readv 30 x 1block ...passed 00:08:41.551 Test: blockdev writev readv block ...passed 00:08:41.551 Test: blockdev writev readv size > 128k ...passed 00:08:41.551 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:41.551 Test: blockdev comparev and writev ...[2024-12-13 23:42:12.271669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x273104000 len:0x1000 00:08:41.551 [2024-12-13 23:42:12.271710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:41.551 passed 00:08:41.551 Test: blockdev nvme passthru rw ...passed 00:08:41.552 Test: blockdev nvme passthru vendor specific ...[2024-12-13 23:42:12.274221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:41.552 [2024-12-13 23:42:12.274253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:41.552 passed 00:08:41.813 Test: blockdev nvme admin passthru ...passed 00:08:41.813 Test: blockdev copy ...passed 00:08:41.813 Suite: bdevio tests on: Nvme2n1 00:08:41.813 Test: blockdev write read block ...passed 00:08:41.813 Test: blockdev write zeroes read block ...passed 00:08:41.813 Test: blockdev write zeroes read no split ...passed 00:08:41.813 Test: blockdev write zeroes read split ...passed 00:08:41.813 Test: blockdev write zeroes read split partial ...passed 00:08:41.813 Test: blockdev reset ...[2024-12-13 23:42:12.335613] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:41.813 passed 00:08:41.813 Test: blockdev write read 8 blocks ...[2024-12-13 23:42:12.338880] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:41.813 passed 00:08:41.813 Test: blockdev write read size > 128k ...passed 00:08:41.813 Test: blockdev write read invalid size ...passed 00:08:41.813 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:41.813 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:41.813 Test: blockdev write read max offset ...passed 00:08:41.813 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:41.813 Test: blockdev writev readv 8 blocks ...passed 00:08:41.813 Test: blockdev writev readv 30 x 1block ...passed 00:08:41.813 Test: blockdev writev readv block ...passed 00:08:41.813 Test: blockdev writev readv size > 128k ...passed 00:08:41.813 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:41.813 Test: blockdev comparev and writev ...[2024-12-13 23:42:12.356744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x280c3c000 len:0x1000 00:08:41.813 [2024-12-13 23:42:12.356792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:41.813 passed 00:08:41.813 Test: blockdev nvme passthru rw ...passed 00:08:41.813 Test: blockdev nvme passthru vendor specific ...[2024-12-13 23:42:12.359864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:41.813 [2024-12-13 23:42:12.359901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:41.813 passed 00:08:41.813 Test: blockdev nvme admin passthru ...passed 00:08:41.813 Test: blockdev copy ...passed 00:08:41.813 Suite: bdevio tests on: Nvme1n1 00:08:41.813 Test: blockdev write read block ...passed 00:08:41.813 Test: blockdev write zeroes read block ...passed 00:08:41.813 Test: blockdev write zeroes read no split ...passed 00:08:41.813 Test: blockdev write zeroes read split ...passed 00:08:41.813 Test: blockdev write zeroes read split partial ...passed 00:08:41.813 Test: blockdev reset ...[2024-12-13 23:42:12.425209] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:08:41.813 [2024-12-13 23:42:12.429137] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:41.813 passed 00:08:41.813 Test: blockdev write read 8 blocks ...passed 00:08:41.813 Test: blockdev write read size > 128k ...passed 00:08:41.813 Test: blockdev write read invalid size ...passed 00:08:41.813 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:41.813 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:41.813 Test: blockdev write read max offset ...passed 00:08:41.813 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:41.813 Test: blockdev writev readv 8 blocks ...passed 00:08:41.813 Test: blockdev writev readv 30 x 1block ...passed 00:08:41.813 Test: blockdev writev readv block ...passed 00:08:41.813 Test: blockdev writev readv size > 128k ...passed 00:08:41.813 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:41.813 Test: blockdev comparev and writev ...[2024-12-13 23:42:12.448575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x280c38000 len:0x1000 00:08:41.813 [2024-12-13 23:42:12.448627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:41.813 passed 00:08:41.813 Test: blockdev nvme passthru rw ...passed 00:08:41.813 Test: blockdev nvme passthru vendor specific ...passed 00:08:41.813 Test: blockdev nvme admin passthru ...[2024-12-13 23:42:12.451007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:41.813 [2024-12-13 23:42:12.451051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:41.813 passed 00:08:41.813 Test: blockdev copy ...passed 00:08:41.813 Suite: bdevio tests on: Nvme0n1p2 00:08:41.813 Test: blockdev write read block ...passed 00:08:41.813 Test: blockdev write zeroes read block ...passed 00:08:41.813 Test: blockdev write zeroes read no split ...passed 00:08:41.813 Test: blockdev write zeroes read split ...passed 00:08:41.813 Test: blockdev write zeroes read split partial ...passed 00:08:41.813 Test: blockdev reset ...[2024-12-13 23:42:12.517189] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:41.813 [2024-12-13 23:42:12.520332] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:41.813 passed 00:08:41.813 Test: blockdev write read 8 blocks ...passed 00:08:41.813 Test: blockdev write read size > 128k ...passed 00:08:41.813 Test: blockdev write read invalid size ...passed 00:08:41.813 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:41.813 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:41.813 Test: blockdev write read max offset ...passed 00:08:41.813 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:41.813 Test: blockdev writev readv 8 blocks ...passed 00:08:41.813 Test: blockdev writev readv 30 x 1block ...passed 00:08:41.813 Test: blockdev writev readv block ...passed 00:08:41.813 Test: blockdev writev readv size > 128k ...passed 00:08:41.813 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:41.813 Test: blockdev comparev and writev ...passed 00:08:41.813 Test: blockdev nvme passthru rw ...passed 00:08:41.813 Test: blockdev nvme passthru vendor specific ...passed 00:08:41.813 Test: blockdev nvme admin passthru ...passed 00:08:41.813 Test: blockdev copy ...[2024-12-13 23:42:12.539406] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:08:41.813 separate metadata which is not supported yet. 00:08:42.075 passed 00:08:42.075 Suite: bdevio tests on: Nvme0n1p1 00:08:42.075 Test: blockdev write read block ...passed 00:08:42.075 Test: blockdev write zeroes read block ...passed 00:08:42.075 Test: blockdev write zeroes read no split ...passed 00:08:42.075 Test: blockdev write zeroes read split ...passed 00:08:42.075 Test: blockdev write zeroes read split partial ...passed 00:08:42.075 Test: blockdev reset ...[2024-12-13 23:42:12.600830] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:42.075 [2024-12-13 23:42:12.605199] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:42.075 passed 00:08:42.075 Test: blockdev write read 8 blocks ...passed 00:08:42.075 Test: blockdev write read size > 128k ...passed 00:08:42.075 Test: blockdev write read invalid size ...passed 00:08:42.075 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:42.075 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:42.075 Test: blockdev write read max offset ...passed 00:08:42.075 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:42.075 Test: blockdev writev readv 8 blocks ...passed 00:08:42.075 Test: blockdev writev readv 30 x 1block ...passed 00:08:42.075 Test: blockdev writev readv block ...passed 00:08:42.075 Test: blockdev writev readv size > 128k ...passed 00:08:42.075 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:42.075 Test: blockdev comparev and writev ...passed 00:08:42.075 Test: blockdev nvme passthru rw ...passed 00:08:42.075 Test: blockdev nvme passthru vendor specific ...passed 00:08:42.075 Test: blockdev nvme admin passthru ...passed 00:08:42.076 Test: blockdev copy ...[2024-12-13 23:42:12.622657] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:08:42.076 separate metadata which is not supported yet. 
00:08:42.076 passed 00:08:42.076 00:08:42.076 Run Summary: Type Total Ran Passed Failed Inactive 00:08:42.076 suites 7 7 n/a 0 0 00:08:42.076 tests 161 161 161 0 0 00:08:42.076 asserts 1006 1006 1006 0 n/a 00:08:42.076 00:08:42.076 Elapsed time = 1.472 seconds 00:08:42.076 0 00:08:42.076 23:42:12 -- bdev/blockdev.sh@293 -- # killprocess 61964 00:08:42.076 23:42:12 -- common/autotest_common.sh@936 -- # '[' -z 61964 ']' 00:08:42.076 23:42:12 -- common/autotest_common.sh@940 -- # kill -0 61964 00:08:42.076 23:42:12 -- common/autotest_common.sh@941 -- # uname 00:08:42.076 23:42:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:42.076 23:42:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61964 00:08:42.076 23:42:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:42.076 23:42:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:42.076 killing process with pid 61964 00:08:42.076 23:42:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61964' 00:08:42.076 23:42:12 -- common/autotest_common.sh@955 -- # kill 61964 00:08:42.076 23:42:12 -- common/autotest_common.sh@960 -- # wait 61964 00:08:43.019 23:42:13 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:08:43.019 00:08:43.019 real 0m3.026s 00:08:43.019 user 0m7.853s 00:08:43.019 sys 0m0.320s 00:08:43.019 23:42:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:43.019 23:42:13 -- common/autotest_common.sh@10 -- # set +x 00:08:43.019 ************************************ 00:08:43.019 END TEST bdev_bounds 00:08:43.019 ************************************ 00:08:43.019 23:42:13 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:43.019 23:42:13 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:08:43.019 23:42:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.019 23:42:13 -- common/autotest_common.sh@10 -- # set +x 00:08:43.019 ************************************ 00:08:43.019 START TEST bdev_nbd 00:08:43.019 ************************************ 00:08:43.019 23:42:13 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:43.019 23:42:13 -- bdev/blockdev.sh@298 -- # uname -s 00:08:43.019 23:42:13 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:08:43.019 23:42:13 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:43.019 23:42:13 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:43.019 23:42:13 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:43.019 23:42:13 -- bdev/blockdev.sh@302 -- # local bdev_all 00:08:43.019 23:42:13 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:08:43.019 23:42:13 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:08:43.019 23:42:13 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:43.019 23:42:13 -- bdev/blockdev.sh@309 -- # local nbd_all 00:08:43.019 23:42:13 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:08:43.019 23:42:13 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:43.019 23:42:13 -- bdev/blockdev.sh@312 -- # local nbd_list 00:08:43.019 23:42:13 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:43.019 23:42:13 -- bdev/blockdev.sh@313 -- # local bdev_list 00:08:43.019 23:42:13 -- bdev/blockdev.sh@316 -- # nbd_pid=62031 00:08:43.019 23:42:13 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:43.019 23:42:13 -- bdev/blockdev.sh@318 -- # waitforlisten 62031 /var/tmp/spdk-nbd.sock 00:08:43.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:43.019 23:42:13 -- common/autotest_common.sh@829 -- # '[' -z 62031 ']' 00:08:43.019 23:42:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:43.019 23:42:13 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:43.019 23:42:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:43.019 23:42:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:43.019 23:42:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:43.019 23:42:13 -- common/autotest_common.sh@10 -- # set +x 00:08:43.019 [2024-12-13 23:42:13.524420] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:43.019 [2024-12-13 23:42:13.524640] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.019 [2024-12-13 23:42:13.671041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.280 [2024-12-13 23:42:13.859648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.662 23:42:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:44.662 23:42:15 -- common/autotest_common.sh@862 -- # return 0 00:08:44.662 23:42:15 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:44.662 23:42:15 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.662 23:42:15 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:44.662 23:42:15 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:44.662 23:42:15 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:44.662 23:42:15 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.662 23:42:15 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:44.662 23:42:15 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:44.662 23:42:15 -- bdev/nbd_common.sh@24 -- # local i 00:08:44.662 23:42:15 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:44.662 23:42:15 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:44.662 23:42:15 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:44.662 23:42:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:08:44.662 23:42:15 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:44.662 23:42:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:44.662 23:42:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:44.662 23:42:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:44.662 23:42:15 -- common/autotest_common.sh@867 -- # local i 00:08:44.662 23:42:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:44.662 23:42:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:44.662 23:42:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:44.662 23:42:15 -- common/autotest_common.sh@871 -- # break 00:08:44.662 23:42:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:44.662 23:42:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:44.662 23:42:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:44.662 1+0 records in 00:08:44.662 1+0 records out 00:08:44.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000927795 s, 4.4 MB/s 00:08:44.662 23:42:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.662 23:42:15 -- common/autotest_common.sh@884 -- # size=4096 00:08:44.662 23:42:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.662 23:42:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:44.662 23:42:15 -- common/autotest_common.sh@887 -- # return 0 00:08:44.662 23:42:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:44.662 23:42:15 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:44.662 23:42:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:08:44.923 23:42:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:44.923 23:42:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:44.923 23:42:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:44.923 23:42:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:44.923 23:42:15 -- common/autotest_common.sh@867 -- # local i 00:08:44.923 23:42:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:44.923 23:42:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:44.923 23:42:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:44.923 23:42:15 -- common/autotest_common.sh@871 -- # break 00:08:44.923 23:42:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:44.923 23:42:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:44.923 23:42:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:44.923 1+0 records in 00:08:44.923 1+0 records out 00:08:44.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000916647 s, 4.5 MB/s 00:08:44.923 23:42:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.923 23:42:15 -- common/autotest_common.sh@884 -- # size=4096 00:08:44.923 23:42:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.923 23:42:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:44.923 23:42:15 -- common/autotest_common.sh@887 -- # return 0 00:08:44.923 23:42:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:44.923 23:42:15 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:44.923 23:42:15 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:45.185 23:42:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:45.185 23:42:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:45.185 23:42:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:45.185 23:42:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:45.185 23:42:15 -- common/autotest_common.sh@867 -- # local i 00:08:45.185 23:42:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:45.185 23:42:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:45.185 23:42:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:45.185 23:42:15 -- common/autotest_common.sh@871 -- # break 00:08:45.185 23:42:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:45.185 23:42:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:45.185 23:42:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:45.185 1+0 records in 00:08:45.185 1+0 records out 00:08:45.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000995658 s, 4.1 MB/s 00:08:45.185 23:42:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.185 23:42:15 -- common/autotest_common.sh@884 -- # size=4096 00:08:45.185 23:42:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.185 23:42:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:45.185 23:42:15 -- common/autotest_common.sh@887 -- # return 0 00:08:45.185 23:42:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:45.185 23:42:15 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:45.185 23:42:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:45.185 23:42:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:45.185 23:42:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:45.444 23:42:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:45.444 23:42:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:45.444 23:42:15 -- common/autotest_common.sh@867 -- # local i 00:08:45.444 23:42:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:45.444 23:42:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:45.444 23:42:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:45.444 23:42:15 -- common/autotest_common.sh@871 -- # break 00:08:45.444 23:42:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:45.444 23:42:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:45.444 23:42:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:45.444 1+0 records in 00:08:45.444 1+0 records out 00:08:45.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00076909 s, 5.3 MB/s 00:08:45.444 23:42:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.444 23:42:15 -- common/autotest_common.sh@884 -- # size=4096 00:08:45.444 23:42:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.444 23:42:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:45.444 23:42:15 -- common/autotest_common.sh@887 -- # return 0 00:08:45.444 23:42:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:45.444 23:42:15 -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:45.444 23:42:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:45.444 23:42:16 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:45.444 23:42:16 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:45.444 23:42:16 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:45.444 23:42:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:45.444 23:42:16 -- common/autotest_common.sh@867 -- # local i 00:08:45.444 23:42:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:45.444 23:42:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:45.444 23:42:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:45.444 23:42:16 -- common/autotest_common.sh@871 -- # break 00:08:45.444 23:42:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:45.444 23:42:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:45.444 23:42:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:45.444 1+0 records in 00:08:45.444 1+0 records out 00:08:45.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000971853 s, 4.2 MB/s 00:08:45.444 23:42:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.444 23:42:16 -- common/autotest_common.sh@884 -- # size=4096 00:08:45.444 23:42:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.444 23:42:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:45.444 23:42:16 -- common/autotest_common.sh@887 -- # return 0 00:08:45.444 23:42:16 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:45.444 23:42:16 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:45.444 23:42:16 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:45.703 23:42:16 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:45.703 23:42:16 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:45.703 23:42:16 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:45.703 23:42:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:45.703 23:42:16 -- common/autotest_common.sh@867 -- # local i 00:08:45.703 23:42:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:45.703 23:42:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:45.703 23:42:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:45.703 23:42:16 -- common/autotest_common.sh@871 -- # break 00:08:45.703 23:42:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:45.703 23:42:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:45.703 23:42:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:45.703 1+0 records in 00:08:45.703 1+0 records out 00:08:45.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000829211 s, 4.9 MB/s 00:08:45.703 23:42:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.703 23:42:16 -- common/autotest_common.sh@884 -- # size=4096 00:08:45.703 23:42:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.703 23:42:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:45.703 23:42:16 -- common/autotest_common.sh@887 -- # return 0 
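Note: the loop above exports each bdev over NBD via the bdev_svc RPC socket, then waits for the /dev/nbdX node to appear and verifies it with a direct-I/O dd. The per-device RPC calls, reduced to a sketch (socket path and bdev name copied from this run):

# Hedged sketch: export one bdev over NBD, list the mappings, tear it down.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
nbd_dev=$($RPC nbd_start_disk Nvme0n1p1)   # prints the /dev/nbdX node it bound
$RPC nbd_get_disks                         # JSON list of bdev <-> nbd mappings
$RPC nbd_stop_disk "$nbd_dev"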
00:08:45.703 23:42:16 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:45.703 23:42:16 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:45.703 23:42:16 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:45.962 23:42:16 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:45.962 23:42:16 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:45.962 23:42:16 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:45.962 23:42:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:08:45.962 23:42:16 -- common/autotest_common.sh@867 -- # local i 00:08:45.962 23:42:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:45.962 23:42:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:45.962 23:42:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:08:45.962 23:42:16 -- common/autotest_common.sh@871 -- # break 00:08:45.962 23:42:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:45.962 23:42:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:45.962 23:42:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:45.962 1+0 records in 00:08:45.962 1+0 records out 00:08:45.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000798787 s, 5.1 MB/s 00:08:45.962 23:42:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.962 23:42:16 -- common/autotest_common.sh@884 -- # size=4096 00:08:45.962 23:42:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.962 23:42:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:45.963 23:42:16 -- common/autotest_common.sh@887 -- # return 0 00:08:45.963 23:42:16 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:45.963 23:42:16 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:45.963 23:42:16 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:45.963 23:42:16 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:45.963 { 00:08:45.963 "nbd_device": "/dev/nbd0", 00:08:45.963 "bdev_name": "Nvme0n1p1" 00:08:45.963 }, 00:08:45.963 { 00:08:45.963 "nbd_device": "/dev/nbd1", 00:08:45.963 "bdev_name": "Nvme0n1p2" 00:08:45.963 }, 00:08:45.963 { 00:08:45.963 "nbd_device": "/dev/nbd2", 00:08:45.963 "bdev_name": "Nvme1n1" 00:08:45.963 }, 00:08:45.963 { 00:08:45.963 "nbd_device": "/dev/nbd3", 00:08:45.963 "bdev_name": "Nvme2n1" 00:08:45.963 }, 00:08:45.963 { 00:08:45.963 "nbd_device": "/dev/nbd4", 00:08:45.963 "bdev_name": "Nvme2n2" 00:08:45.963 }, 00:08:45.963 { 00:08:45.963 "nbd_device": "/dev/nbd5", 00:08:45.963 "bdev_name": "Nvme2n3" 00:08:45.963 }, 00:08:45.963 { 00:08:45.963 "nbd_device": "/dev/nbd6", 00:08:45.963 "bdev_name": "Nvme3n1" 00:08:45.963 } 00:08:45.963 ]' 00:08:45.963 23:42:16 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:45.963 23:42:16 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:45.963 23:42:16 -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:45.963 { 00:08:45.963 "nbd_device": "/dev/nbd0", 00:08:45.963 "bdev_name": "Nvme0n1p1" 00:08:45.963 }, 00:08:45.963 { 00:08:45.963 "nbd_device": "/dev/nbd1", 00:08:45.963 "bdev_name": "Nvme0n1p2" 00:08:45.963 }, 00:08:45.963 { 00:08:45.963 "nbd_device": "/dev/nbd2", 00:08:45.963 "bdev_name": "Nvme1n1" 00:08:45.963 }, 00:08:45.963 { 00:08:45.963 "nbd_device": 
"/dev/nbd3", 00:08:45.963 "bdev_name": "Nvme2n1" 00:08:45.963 }, 00:08:45.963 { 00:08:45.963 "nbd_device": "/dev/nbd4", 00:08:45.963 "bdev_name": "Nvme2n2" 00:08:45.963 }, 00:08:45.963 { 00:08:45.963 "nbd_device": "/dev/nbd5", 00:08:45.963 "bdev_name": "Nvme2n3" 00:08:45.963 }, 00:08:45.963 { 00:08:45.963 "nbd_device": "/dev/nbd6", 00:08:45.963 "bdev_name": "Nvme3n1" 00:08:45.963 } 00:08:45.963 ]' 00:08:46.223 23:42:16 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:46.223 23:42:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.223 23:42:16 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:46.223 23:42:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:46.223 23:42:16 -- bdev/nbd_common.sh@51 -- # local i 00:08:46.223 23:42:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:46.223 23:42:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:46.223 23:42:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:46.223 23:42:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:46.223 23:42:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:46.223 23:42:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.223 23:42:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.223 23:42:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:46.223 23:42:16 -- bdev/nbd_common.sh@41 -- # break 00:08:46.223 23:42:16 -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.223 23:42:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:46.223 23:42:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:46.484 23:42:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:46.484 23:42:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:46.484 23:42:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:46.484 23:42:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.484 23:42:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.484 23:42:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:46.484 23:42:17 -- bdev/nbd_common.sh@41 -- # break 00:08:46.484 23:42:17 -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.484 23:42:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:46.484 23:42:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:46.745 23:42:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:46.745 23:42:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:46.745 23:42:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:46.745 23:42:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.745 23:42:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.745 23:42:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:46.745 23:42:17 -- bdev/nbd_common.sh@41 -- # break 00:08:46.745 23:42:17 -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.745 23:42:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:46.745 23:42:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:47.004 23:42:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 
00:08:47.004 23:42:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:47.004 23:42:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:47.004 23:42:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:47.004 23:42:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:47.004 23:42:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:47.004 23:42:17 -- bdev/nbd_common.sh@41 -- # break 00:08:47.004 23:42:17 -- bdev/nbd_common.sh@45 -- # return 0 00:08:47.004 23:42:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:47.004 23:42:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:47.004 23:42:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:47.004 23:42:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:47.004 23:42:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:47.004 23:42:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:47.004 23:42:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:47.004 23:42:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:47.004 23:42:17 -- bdev/nbd_common.sh@41 -- # break 00:08:47.004 23:42:17 -- bdev/nbd_common.sh@45 -- # return 0 00:08:47.004 23:42:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:47.004 23:42:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:47.263 23:42:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:47.263 23:42:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:47.263 23:42:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:47.263 23:42:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:47.263 23:42:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:47.263 23:42:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:47.263 23:42:17 -- bdev/nbd_common.sh@41 -- # break 00:08:47.263 23:42:17 -- bdev/nbd_common.sh@45 -- # return 0 00:08:47.263 23:42:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:47.263 23:42:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:47.521 23:42:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:47.521 23:42:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:47.521 23:42:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:47.521 23:42:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:47.521 23:42:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:47.521 23:42:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:47.521 23:42:18 -- bdev/nbd_common.sh@41 -- # break 00:08:47.521 23:42:18 -- bdev/nbd_common.sh@45 -- # return 0 00:08:47.521 23:42:18 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:47.521 23:42:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.521 23:42:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:47.521 23:42:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:47.521 23:42:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:47.521 23:42:18 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:47.521 23:42:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:47.521 23:42:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:47.521 23:42:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:47.522 23:42:18 -- 
bdev/nbd_common.sh@65 -- # true 00:08:47.522 23:42:18 -- bdev/nbd_common.sh@65 -- # count=0 00:08:47.522 23:42:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:47.522 23:42:18 -- bdev/nbd_common.sh@122 -- # count=0 00:08:47.522 23:42:18 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:47.522 23:42:18 -- bdev/nbd_common.sh@127 -- # return 0 00:08:47.522 23:42:18 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:47.522 23:42:18 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.522 23:42:18 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:47.522 23:42:18 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:47.522 23:42:18 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:47.522 23:42:18 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:47.522 23:42:18 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:47.522 23:42:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.522 23:42:18 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:47.522 23:42:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:47.522 23:42:18 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:47.522 23:42:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:47.522 23:42:18 -- bdev/nbd_common.sh@12 -- # local i 00:08:47.522 23:42:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:47.522 23:42:18 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:47.522 23:42:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:08:47.781 /dev/nbd0 00:08:47.781 23:42:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:47.781 23:42:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:47.781 23:42:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:47.781 23:42:18 -- common/autotest_common.sh@867 -- # local i 00:08:47.781 23:42:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:47.781 23:42:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:47.781 23:42:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:47.781 23:42:18 -- common/autotest_common.sh@871 -- # break 00:08:47.781 23:42:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:47.781 23:42:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:47.781 23:42:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:47.781 1+0 records in 00:08:47.781 1+0 records out 00:08:47.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000668759 s, 6.1 MB/s 00:08:47.781 23:42:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.781 23:42:18 -- common/autotest_common.sh@884 -- # size=4096 00:08:47.781 23:42:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.781 23:42:18 
-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:47.781 23:42:18 -- common/autotest_common.sh@887 -- # return 0 00:08:47.781 23:42:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.781 23:42:18 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:47.781 23:42:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:08:48.042 /dev/nbd1 00:08:48.042 23:42:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:48.042 23:42:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:48.042 23:42:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:48.042 23:42:18 -- common/autotest_common.sh@867 -- # local i 00:08:48.042 23:42:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:48.042 23:42:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:48.042 23:42:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:48.042 23:42:18 -- common/autotest_common.sh@871 -- # break 00:08:48.042 23:42:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:48.042 23:42:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:48.042 23:42:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.042 1+0 records in 00:08:48.042 1+0 records out 00:08:48.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512024 s, 8.0 MB/s 00:08:48.042 23:42:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.042 23:42:18 -- common/autotest_common.sh@884 -- # size=4096 00:08:48.042 23:42:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.042 23:42:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:48.042 23:42:18 -- common/autotest_common.sh@887 -- # return 0 00:08:48.042 23:42:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:48.042 23:42:18 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:48.042 23:42:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:08:48.302 /dev/nbd10 00:08:48.302 23:42:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:48.302 23:42:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:48.302 23:42:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:08:48.302 23:42:18 -- common/autotest_common.sh@867 -- # local i 00:08:48.302 23:42:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:48.302 23:42:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:48.302 23:42:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:08:48.302 23:42:18 -- common/autotest_common.sh@871 -- # break 00:08:48.302 23:42:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:48.302 23:42:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:48.302 23:42:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.302 1+0 records in 00:08:48.302 1+0 records out 00:08:48.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000867333 s, 4.7 MB/s 00:08:48.302 23:42:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.302 23:42:18 -- common/autotest_common.sh@884 -- # size=4096 00:08:48.302 23:42:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
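For this second pass the harness names each NBD node explicitly instead of letting the RPC pick one. The attach/inspect/detach cycle the trace keeps repeating reduces to three rpc.py calls; the socket, paths, bdev names, and jq filter below are exactly the ones in the log:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC nbd_start_disk Nvme0n1p2 /dev/nbd1          # attach a bdev to an explicit node
    $RPC nbd_get_disks | jq -r '.[] | .nbd_device'   # list active attachments
    $RPC nbd_stop_disk /dev/nbd1                     # detach when finished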
00:08:48.302 23:42:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:48.302 23:42:18 -- common/autotest_common.sh@887 -- # return 0 00:08:48.302 23:42:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:48.302 23:42:18 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:48.302 23:42:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:48.562 /dev/nbd11 00:08:48.562 23:42:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:48.562 23:42:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:48.562 23:42:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:08:48.562 23:42:19 -- common/autotest_common.sh@867 -- # local i 00:08:48.562 23:42:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:48.562 23:42:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:48.562 23:42:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:08:48.562 23:42:19 -- common/autotest_common.sh@871 -- # break 00:08:48.562 23:42:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:48.562 23:42:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:48.562 23:42:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.562 1+0 records in 00:08:48.562 1+0 records out 00:08:48.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000815459 s, 5.0 MB/s 00:08:48.562 23:42:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.562 23:42:19 -- common/autotest_common.sh@884 -- # size=4096 00:08:48.562 23:42:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.562 23:42:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:48.562 23:42:19 -- common/autotest_common.sh@887 -- # return 0 00:08:48.562 23:42:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:48.562 23:42:19 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:48.562 23:42:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:48.822 /dev/nbd12 00:08:48.822 23:42:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:48.822 23:42:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:48.822 23:42:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:08:48.822 23:42:19 -- common/autotest_common.sh@867 -- # local i 00:08:48.822 23:42:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:48.822 23:42:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:48.822 23:42:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:08:48.822 23:42:19 -- common/autotest_common.sh@871 -- # break 00:08:48.822 23:42:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:48.822 23:42:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:48.822 23:42:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.822 1+0 records in 00:08:48.822 1+0 records out 00:08:48.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688825 s, 5.9 MB/s 00:08:48.822 23:42:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.822 23:42:19 -- common/autotest_common.sh@884 -- # size=4096 00:08:48.822 23:42:19 -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.822 23:42:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:48.822 23:42:19 -- common/autotest_common.sh@887 -- # return 0 00:08:48.822 23:42:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:48.822 23:42:19 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:48.822 23:42:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:48.822 /dev/nbd13 00:08:48.822 23:42:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:48.822 23:42:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:48.822 23:42:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:08:48.822 23:42:19 -- common/autotest_common.sh@867 -- # local i 00:08:48.822 23:42:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:48.822 23:42:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:48.822 23:42:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:08:48.822 23:42:19 -- common/autotest_common.sh@871 -- # break 00:08:48.822 23:42:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:48.822 23:42:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:48.822 23:42:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.822 1+0 records in 00:08:48.822 1+0 records out 00:08:48.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00132395 s, 3.1 MB/s 00:08:49.083 23:42:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.083 23:42:19 -- common/autotest_common.sh@884 -- # size=4096 00:08:49.083 23:42:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.083 23:42:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:49.083 23:42:19 -- common/autotest_common.sh@887 -- # return 0 00:08:49.083 23:42:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:49.083 23:42:19 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:49.083 23:42:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:49.083 /dev/nbd14 00:08:49.083 23:42:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:49.083 23:42:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:49.083 23:42:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:08:49.083 23:42:19 -- common/autotest_common.sh@867 -- # local i 00:08:49.083 23:42:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:49.083 23:42:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:49.083 23:42:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:08:49.083 23:42:19 -- common/autotest_common.sh@871 -- # break 00:08:49.083 23:42:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:49.083 23:42:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:49.083 23:42:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:49.083 1+0 records in 00:08:49.083 1+0 records out 00:08:49.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0008092 s, 5.1 MB/s 00:08:49.083 23:42:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.083 23:42:19 -- common/autotest_common.sh@884 -- # size=4096 00:08:49.083 23:42:19 -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.083 23:42:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:49.083 23:42:19 -- common/autotest_common.sh@887 -- # return 0 00:08:49.083 23:42:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:49.083 23:42:19 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:49.083 23:42:19 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:49.083 23:42:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.083 23:42:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:49.342 23:42:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:49.342 { 00:08:49.342 "nbd_device": "/dev/nbd0", 00:08:49.342 "bdev_name": "Nvme0n1p1" 00:08:49.342 }, 00:08:49.342 { 00:08:49.342 "nbd_device": "/dev/nbd1", 00:08:49.342 "bdev_name": "Nvme0n1p2" 00:08:49.342 }, 00:08:49.342 { 00:08:49.342 "nbd_device": "/dev/nbd10", 00:08:49.342 "bdev_name": "Nvme1n1" 00:08:49.342 }, 00:08:49.342 { 00:08:49.342 "nbd_device": "/dev/nbd11", 00:08:49.342 "bdev_name": "Nvme2n1" 00:08:49.342 }, 00:08:49.342 { 00:08:49.342 "nbd_device": "/dev/nbd12", 00:08:49.342 "bdev_name": "Nvme2n2" 00:08:49.342 }, 00:08:49.342 { 00:08:49.342 "nbd_device": "/dev/nbd13", 00:08:49.342 "bdev_name": "Nvme2n3" 00:08:49.342 }, 00:08:49.342 { 00:08:49.342 "nbd_device": "/dev/nbd14", 00:08:49.342 "bdev_name": "Nvme3n1" 00:08:49.342 } 00:08:49.342 ]' 00:08:49.342 23:42:19 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:49.342 { 00:08:49.342 "nbd_device": "/dev/nbd0", 00:08:49.342 "bdev_name": "Nvme0n1p1" 00:08:49.342 }, 00:08:49.342 { 00:08:49.342 "nbd_device": "/dev/nbd1", 00:08:49.342 "bdev_name": "Nvme0n1p2" 00:08:49.342 }, 00:08:49.342 { 00:08:49.342 "nbd_device": "/dev/nbd10", 00:08:49.342 "bdev_name": "Nvme1n1" 00:08:49.342 }, 00:08:49.342 { 00:08:49.342 "nbd_device": "/dev/nbd11", 00:08:49.342 "bdev_name": "Nvme2n1" 00:08:49.342 }, 00:08:49.342 { 00:08:49.342 "nbd_device": "/dev/nbd12", 00:08:49.342 "bdev_name": "Nvme2n2" 00:08:49.342 }, 00:08:49.342 { 00:08:49.342 "nbd_device": "/dev/nbd13", 00:08:49.342 "bdev_name": "Nvme2n3" 00:08:49.342 }, 00:08:49.342 { 00:08:49.342 "nbd_device": "/dev/nbd14", 00:08:49.342 "bdev_name": "Nvme3n1" 00:08:49.342 } 00:08:49.342 ]' 00:08:49.342 23:42:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:49.342 23:42:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:49.342 /dev/nbd1 00:08:49.342 /dev/nbd10 00:08:49.342 /dev/nbd11 00:08:49.342 /dev/nbd12 00:08:49.342 /dev/nbd13 00:08:49.342 /dev/nbd14' 00:08:49.342 23:42:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:49.342 23:42:20 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:49.342 /dev/nbd1 00:08:49.342 /dev/nbd10 00:08:49.342 /dev/nbd11 00:08:49.342 /dev/nbd12 00:08:49.342 /dev/nbd13 00:08:49.342 /dev/nbd14' 00:08:49.342 23:42:20 -- bdev/nbd_common.sh@65 -- # count=7 00:08:49.342 23:42:20 -- bdev/nbd_common.sh@66 -- # echo 7 00:08:49.342 23:42:20 -- bdev/nbd_common.sh@95 -- # count=7 00:08:49.342 23:42:20 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:49.342 23:42:20 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:49.342 23:42:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:49.342 23:42:20 -- bdev/nbd_common.sh@70 -- # local 
nbd_list 00:08:49.342 23:42:20 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:49.342 23:42:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:49.342 23:42:20 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:49.342 23:42:20 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:49.342 256+0 records in 00:08:49.342 256+0 records out 00:08:49.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00739903 s, 142 MB/s 00:08:49.342 23:42:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:49.342 23:42:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:49.603 256+0 records in 00:08:49.603 256+0 records out 00:08:49.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137595 s, 7.6 MB/s 00:08:49.603 23:42:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:49.603 23:42:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:49.603 256+0 records in 00:08:49.603 256+0 records out 00:08:49.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12768 s, 8.2 MB/s 00:08:49.603 23:42:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:49.603 23:42:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:49.864 256+0 records in 00:08:49.864 256+0 records out 00:08:49.864 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126263 s, 8.3 MB/s 00:08:49.864 23:42:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:49.864 23:42:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:49.864 256+0 records in 00:08:49.864 256+0 records out 00:08:49.864 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0975421 s, 10.7 MB/s 00:08:49.864 23:42:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:49.864 23:42:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:50.126 256+0 records in 00:08:50.126 256+0 records out 00:08:50.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127877 s, 8.2 MB/s 00:08:50.126 23:42:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:50.126 23:42:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:50.126 256+0 records in 00:08:50.126 256+0 records out 00:08:50.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0982218 s, 10.7 MB/s 00:08:50.126 23:42:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:50.126 23:42:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:50.387 256+0 records in 00:08:50.387 256+0 records out 00:08:50.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0994695 s, 10.5 MB/s 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@51 -- # local i 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.387 23:42:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:50.648 23:42:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:50.648 23:42:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:50.648 23:42:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:50.648 23:42:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.648 23:42:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.648 23:42:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:50.648 23:42:21 -- bdev/nbd_common.sh@41 -- # break 00:08:50.648 23:42:21 -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.648 23:42:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.648 23:42:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:50.648 23:42:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:50.648 23:42:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:50.648 23:42:21 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:50.648 23:42:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.648 23:42:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.648 23:42:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:50.648 23:42:21 -- bdev/nbd_common.sh@41 -- # break 00:08:50.648 23:42:21 -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.648 23:42:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.648 23:42:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:50.909 23:42:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:50.909 23:42:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:50.909 23:42:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:50.909 23:42:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.909 23:42:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.909 23:42:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:50.909 23:42:21 -- bdev/nbd_common.sh@41 -- # break 00:08:50.909 23:42:21 -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.909 23:42:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.909 23:42:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:51.192 23:42:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:51.192 23:42:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:51.192 23:42:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:51.192 23:42:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.192 23:42:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.192 23:42:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:51.192 23:42:21 -- bdev/nbd_common.sh@41 -- # break 00:08:51.192 23:42:21 -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.192 23:42:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.192 23:42:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:51.192 23:42:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:51.192 23:42:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:51.192 23:42:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:51.192 23:42:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.192 23:42:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.192 23:42:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:51.452 23:42:21 -- bdev/nbd_common.sh@41 -- # break 00:08:51.452 23:42:21 -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.452 23:42:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.452 23:42:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:51.453 23:42:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:51.453 23:42:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:51.453 23:42:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:51.453 23:42:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.453 23:42:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.453 23:42:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:51.453 23:42:22 -- bdev/nbd_common.sh@41 -- # break 00:08:51.453 23:42:22 -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.453 23:42:22 -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:08:51.453 23:42:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:51.712 23:42:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:51.712 23:42:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:51.712 23:42:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:51.712 23:42:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.712 23:42:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.712 23:42:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:51.712 23:42:22 -- bdev/nbd_common.sh@41 -- # break 00:08:51.712 23:42:22 -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.712 23:42:22 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:51.712 23:42:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.712 23:42:22 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:51.973 23:42:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:51.973 23:42:22 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:51.973 23:42:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:51.973 23:42:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:51.973 23:42:22 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:51.973 23:42:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:51.973 23:42:22 -- bdev/nbd_common.sh@65 -- # true 00:08:51.973 23:42:22 -- bdev/nbd_common.sh@65 -- # count=0 00:08:51.973 23:42:22 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:51.973 23:42:22 -- bdev/nbd_common.sh@104 -- # count=0 00:08:51.973 23:42:22 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:51.973 23:42:22 -- bdev/nbd_common.sh@109 -- # return 0 00:08:51.973 23:42:22 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:51.973 23:42:22 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.973 23:42:22 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:51.973 23:42:22 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:51.973 23:42:22 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:51.973 23:42:22 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:52.232 malloc_lvol_verify 00:08:52.232 23:42:22 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:52.232 50172a5c-9065-4366-839a-e4c560862ca7 00:08:52.232 23:42:22 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:52.491 c1668b79-5056-4725-a011-5f6135e8d1a1 00:08:52.491 23:42:23 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:52.749 /dev/nbd0 00:08:52.749 23:42:23 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:52.749 mke2fs 1.47.0 (5-Feb-2023) 00:08:52.749 Discarding device blocks: 0/4096 done 00:08:52.749 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:52.749 00:08:52.749 Allocating group tables: 0/1 done 00:08:52.749 Writing inode tables: 0/1 done 00:08:52.749 Creating journal (1024 
blocks): done 00:08:52.749 Writing superblocks and filesystem accounting information: 0/1 done 00:08:52.749 00:08:52.749 23:42:23 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:52.749 23:42:23 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:52.749 23:42:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.749 23:42:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:52.749 23:42:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:52.749 23:42:23 -- bdev/nbd_common.sh@51 -- # local i 00:08:52.749 23:42:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.749 23:42:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:53.009 23:42:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:53.009 23:42:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:53.009 23:42:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:53.009 23:42:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.009 23:42:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.009 23:42:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:53.009 23:42:23 -- bdev/nbd_common.sh@41 -- # break 00:08:53.009 23:42:23 -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.009 23:42:23 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:53.009 23:42:23 -- bdev/nbd_common.sh@147 -- # return 0 00:08:53.009 23:42:23 -- bdev/blockdev.sh@324 -- # killprocess 62031 00:08:53.009 23:42:23 -- common/autotest_common.sh@936 -- # '[' -z 62031 ']' 00:08:53.009 23:42:23 -- common/autotest_common.sh@940 -- # kill -0 62031 00:08:53.009 23:42:23 -- common/autotest_common.sh@941 -- # uname 00:08:53.009 23:42:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:53.009 23:42:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62031 00:08:53.009 killing process with pid 62031 00:08:53.009 23:42:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:53.009 23:42:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:53.009 23:42:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62031' 00:08:53.009 23:42:23 -- common/autotest_common.sh@955 -- # kill 62031 00:08:53.009 23:42:23 -- common/autotest_common.sh@960 -- # wait 62031 00:08:53.950 ************************************ 00:08:53.951 END TEST bdev_nbd 00:08:53.951 ************************************ 00:08:53.951 23:42:24 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:08:53.951 00:08:53.951 real 0m10.963s 00:08:53.951 user 0m15.122s 00:08:53.951 sys 0m3.389s 00:08:53.951 23:42:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:53.951 23:42:24 -- common/autotest_common.sh@10 -- # set +x 00:08:53.951 23:42:24 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:08:53.951 23:42:24 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:08:53.951 skipping fio tests on NVMe due to multi-ns failures. 00:08:53.951 23:42:24 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:08:53.951 23:42:24 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
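Teardown above goes through killprocess 62031 (the SPDK app pid): assert the pid is alive, check what it is, signal it, and wait so the hugepages and /var/tmp/spdk-nbd.sock are actually released before the next test starts. Reconstructed from the @936-@960 trace; the sudo-wrapped branch is not exercised here (comm= reports reactor_0) and is left as a stub:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1          # no pid recorded: nothing to kill
        kill -0 "$pid"                     # assert the process still exists
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            if [ "$process_name" = sudo ]; then
                :                          # real helper handles sudo wrappers; not exercised here
            fi
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                        # reap it so sockets/hugepages are freed
    }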
00:08:53.951 23:42:24 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:53.951 23:42:24 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:53.951 23:42:24 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:53.951 23:42:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:53.951 23:42:24 -- common/autotest_common.sh@10 -- # set +x 00:08:53.951 ************************************ 00:08:53.951 START TEST bdev_verify 00:08:53.951 ************************************ 00:08:53.951 23:42:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:53.951 [2024-12-13 23:42:24.557052] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:53.951 [2024-12-13 23:42:24.557594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62448 ] 00:08:54.210 [2024-12-13 23:42:24.706196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:54.210 [2024-12-13 23:42:24.861354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.210 [2024-12-13 23:42:24.861374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.777 Running I/O for 5 seconds... 00:09:00.047 00:09:00.047 Latency(us) 00:09:00.047 [2024-12-13T23:42:30.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.047 [2024-12-13T23:42:30.779Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:00.047 Verification LBA range: start 0x0 length 0x5e800 00:09:00.047 Nvme0n1p1 : 5.05 2603.83 10.17 0.00 0.00 49022.54 6604.01 65334.35 00:09:00.047 [2024-12-13T23:42:30.779Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:00.047 Verification LBA range: start 0x5e800 length 0x5e800 00:09:00.047 Nvme0n1p1 : 5.05 2573.66 10.05 0.00 0.00 49456.00 4032.98 57268.38 00:09:00.047 [2024-12-13T23:42:30.779Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:00.047 Verification LBA range: start 0x0 length 0x5e7ff 00:09:00.047 Nvme0n1p2 : 5.05 2602.55 10.17 0.00 0.00 49016.08 7612.26 63317.86 00:09:00.047 [2024-12-13T23:42:30.779Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:00.047 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:09:00.047 Nvme0n1p2 : 5.06 2572.05 10.05 0.00 0.00 49425.20 6553.60 58478.28 00:09:00.047 [2024-12-13T23:42:30.780Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:00.048 Verification LBA range: start 0x0 length 0xa0000 00:09:00.048 Nvme1n1 : 5.05 2601.95 10.16 0.00 0.00 48978.87 7259.37 51218.90 00:09:00.048 [2024-12-13T23:42:30.780Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:00.048 Verification LBA range: start 0xa0000 length 0xa0000 00:09:00.048 Nvme1n1 : 5.06 2570.34 10.04 0.00 0.00 49389.87 9376.69 53638.70 00:09:00.048 [2024-12-13T23:42:30.780Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:00.048 Verification LBA range: start 0x0 length 0x80000 00:09:00.048 Nvme2n1 : 5.05 
2601.29 10.16 0.00 0.00 48903.48 7914.73 49202.41 00:09:00.048 [2024-12-13T23:42:30.780Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:00.048 Verification LBA range: start 0x80000 length 0x80000 00:09:00.048 Nvme2n1 : 5.06 2568.65 10.03 0.00 0.00 49359.37 10637.00 52025.50 00:09:00.048 [2024-12-13T23:42:30.780Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:00.048 Verification LBA range: start 0x0 length 0x80000 00:09:00.048 Nvme2n2 : 5.05 2608.17 10.19 0.00 0.00 48776.99 1487.16 50412.31 00:09:00.048 [2024-12-13T23:42:30.780Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:00.048 Verification LBA range: start 0x80000 length 0x80000 00:09:00.048 Nvme2n2 : 5.05 2570.91 10.04 0.00 0.00 49672.31 5116.85 54445.29 00:09:00.048 [2024-12-13T23:42:30.780Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:00.048 Verification LBA range: start 0x0 length 0x80000 00:09:00.048 Nvme2n3 : 5.06 2606.50 10.18 0.00 0.00 48741.61 4058.19 49000.76 00:09:00.048 [2024-12-13T23:42:30.780Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:00.048 Verification LBA range: start 0x80000 length 0x80000 00:09:00.048 Nvme2n3 : 5.05 2574.75 10.06 0.00 0.00 49536.59 3062.55 52025.50 00:09:00.048 [2024-12-13T23:42:30.780Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:00.048 Verification LBA range: start 0x0 length 0x20000 00:09:00.048 Nvme3n1 : 5.06 2604.71 10.17 0.00 0.00 48718.68 6906.49 51017.26 00:09:00.048 [2024-12-13T23:42:30.780Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:00.048 Verification LBA range: start 0x20000 length 0x20000 00:09:00.048 Nvme3n1 : 5.05 2574.25 10.06 0.00 0.00 49492.02 3579.27 52428.80 00:09:00.048 [2024-12-13T23:42:30.780Z] =================================================================================================================== 00:09:00.048 [2024-12-13T23:42:30.780Z] Total : 36233.62 141.54 0.00 0.00 49175.94 1487.16 65334.35 00:09:02.579 00:09:02.579 real 0m8.275s 00:09:02.579 user 0m15.304s 00:09:02.580 sys 0m0.226s 00:09:02.580 23:42:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:02.580 ************************************ 00:09:02.580 END TEST bdev_verify 00:09:02.580 ************************************ 00:09:02.580 23:42:32 -- common/autotest_common.sh@10 -- # set +x 00:09:02.580 23:42:32 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:02.580 23:42:32 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:09:02.580 23:42:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:02.580 23:42:32 -- common/autotest_common.sh@10 -- # set +x 00:09:02.580 ************************************ 00:09:02.580 START TEST bdev_verify_big_io 00:09:02.580 ************************************ 00:09:02.580 23:42:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:02.580 [2024-12-13 23:42:32.920434] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
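The MiB/s column in the bdevperf tables is simply IOPS times the IO size (-o 4096 for bdev_verify), so rows can be sanity-checked by hand; both spot checks below reproduce the printed values:

    # Nvme0n1p1 row: 2603.83 IOPS at 4096 B each
    awk 'BEGIN { printf "%.2f MiB/s\n", 2603.83 * 4096 / 1048576 }'    # -> 10.17
    # Total row: 36233.62 IOPS
    awk 'BEGIN { printf "%.2f MiB/s\n", 36233.62 * 4096 / 1048576 }'   # -> 141.54

The big-IO run starting here uses -o 65536, and the same identity holds there (224.25 IOPS at 64 KiB each is ~14.02 MiB/s, matching the first Nvme0n1p1 row below).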
00:09:02.580 [2024-12-13 23:42:32.920591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62552 ] 00:09:02.580 [2024-12-13 23:42:33.072469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:02.580 [2024-12-13 23:42:33.300408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.580 [2024-12-13 23:42:33.300514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.514 Running I/O for 5 seconds... 00:09:10.110 00:09:10.110 Latency(us) 00:09:10.110 [2024-12-13T23:42:40.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.110 [2024-12-13T23:42:40.842Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:10.110 Verification LBA range: start 0x0 length 0x5e80 00:09:10.110 Nvme0n1p1 : 5.41 224.25 14.02 0.00 0.00 561651.24 48395.82 822728.86 00:09:10.110 [2024-12-13T23:42:40.842Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:10.110 Verification LBA range: start 0x5e80 length 0x5e80 00:09:10.110 Nvme0n1p1 : 5.45 162.27 10.14 0.00 0.00 769342.85 82272.89 1219574.55 00:09:10.110 [2024-12-13T23:42:40.842Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:10.110 Verification LBA range: start 0x0 length 0x5e7f 00:09:10.110 Nvme0n1p2 : 5.42 224.18 14.01 0.00 0.00 553992.73 49000.76 754974.72 00:09:10.110 [2024-12-13T23:42:40.842Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:10.110 Verification LBA range: start 0x5e7f length 0x5e7f 00:09:10.110 Nvme0n1p2 : 5.46 162.22 10.14 0.00 0.00 749874.25 82676.18 1122782.92 00:09:10.110 [2024-12-13T23:42:40.842Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:10.110 Verification LBA range: start 0x0 length 0xa000 00:09:10.110 Nvme1n1 : 5.42 224.07 14.00 0.00 0.00 546423.90 50210.66 696899.74 00:09:10.110 [2024-12-13T23:42:40.842Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:10.110 Verification LBA range: start 0xa000 length 0xa000 00:09:10.110 Nvme1n1 : 5.50 167.17 10.45 0.00 0.00 709362.50 39926.55 1006632.96 00:09:10.110 [2024-12-13T23:42:40.842Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:10.110 Verification LBA range: start 0x0 length 0x8000 00:09:10.110 Nvme2n1 : 5.42 224.00 14.00 0.00 0.00 538791.71 50815.61 642051.15 00:09:10.110 [2024-12-13T23:42:40.842Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:10.110 Verification LBA range: start 0x8000 length 0x8000 00:09:10.110 Nvme2n1 : 5.54 182.79 11.42 0.00 0.00 636398.99 16232.76 890483.00 00:09:10.110 [2024-12-13T23:42:40.842Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:10.110 Verification LBA range: start 0x0 length 0x8000 00:09:10.110 Nvme2n2 : 5.47 229.08 14.32 0.00 0.00 518575.52 47992.52 583976.17 00:09:10.110 [2024-12-13T23:42:40.842Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:10.110 Verification LBA range: start 0x8000 length 0x8000 00:09:10.110 Nvme2n2 : 5.60 209.66 13.10 0.00 0.00 543177.23 8217.21 784012.21 00:09:10.110 [2024-12-13T23:42:40.842Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:10.110 Verification LBA range: start 0x0 
length 0x8000 00:09:10.110 Nvme2n3 : 5.48 246.20 15.39 0.00 0.00 480094.43 4385.87 522674.81 00:09:10.110 [2024-12-13T23:42:40.842Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:10.110 Verification LBA range: start 0x8000 length 0x8000 00:09:10.110 Nvme2n3 : 5.70 274.95 17.18 0.00 0.00 405478.21 6956.90 1129235.69 00:09:10.110 [2024-12-13T23:42:40.842Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:10.110 Verification LBA range: start 0x0 length 0x2000 00:09:10.110 Nvme3n1 : 5.48 246.12 15.38 0.00 0.00 473212.70 4562.31 561391.46 00:09:10.110 [2024-12-13T23:42:40.842Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:10.110 Verification LBA range: start 0x2000 length 0x2000 00:09:10.110 Nvme3n1 : 5.80 396.11 24.76 0.00 0.00 276451.29 359.19 871124.68 00:09:10.110 [2024-12-13T23:42:40.842Z] =================================================================================================================== 00:09:10.110 [2024-12-13T23:42:40.843Z] Total : 3173.05 198.32 0.00 0.00 521852.66 359.19 1219574.55 00:09:10.679 00:09:10.679 real 0m8.483s 00:09:10.679 user 0m15.268s 00:09:10.679 sys 0m0.267s 00:09:10.679 23:42:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:10.679 ************************************ 00:09:10.679 END TEST bdev_verify_big_io 00:09:10.679 ************************************ 00:09:10.679 23:42:41 -- common/autotest_common.sh@10 -- # set +x 00:09:10.679 23:42:41 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:10.679 23:42:41 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:10.679 23:42:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:10.679 23:42:41 -- common/autotest_common.sh@10 -- # set +x 00:09:10.679 ************************************ 00:09:10.679 START TEST bdev_write_zeroes 00:09:10.679 ************************************ 00:09:10.679 23:42:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:10.937 [2024-12-13 23:42:41.447354] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:10.937 [2024-12-13 23:42:41.447460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62667 ] 00:09:10.937 [2024-12-13 23:42:41.595868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.195 [2024-12-13 23:42:41.734783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.763 Running I/O for 1 seconds... 
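Each phase in this log is driven through the same run_test wrapper (visible in the xtrace as run_test bdev_write_zeroes ...), which prints the START/END banners and the real/user/sys timings seen throughout. A sketch of its shape, inferred only from those banners; the real helper lives in autotest_common.sh and the details here are assumptions:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"          # run the test body; bash prints real/user/sys
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }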
00:09:12.698 00:09:12.698 Latency(us) 00:09:12.698 [2024-12-13T23:42:43.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.698 [2024-12-13T23:42:43.430Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:12.698 Nvme0n1p1 : 1.01 9611.13 37.54 0.00 0.00 13286.30 6402.36 26617.70 00:09:12.698 [2024-12-13T23:42:43.430Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:12.698 Nvme0n1p2 : 1.01 9599.39 37.50 0.00 0.00 13285.88 6251.13 25004.50 00:09:12.698 [2024-12-13T23:42:43.430Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:12.698 Nvme1n1 : 1.01 9588.56 37.46 0.00 0.00 13277.16 8872.57 25508.63 00:09:12.698 [2024-12-13T23:42:43.430Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:12.699 Nvme2n1 : 1.02 9577.70 37.41 0.00 0.00 13266.73 9275.86 26012.75 00:09:12.699 [2024-12-13T23:42:43.431Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:12.699 Nvme2n2 : 1.02 9566.93 37.37 0.00 0.00 13260.90 9175.04 26214.40 00:09:12.699 [2024-12-13T23:42:43.431Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:12.699 Nvme2n3 : 1.02 9556.14 37.33 0.00 0.00 13249.65 9074.22 26012.75 00:09:12.699 [2024-12-13T23:42:43.431Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:12.699 Nvme3n1 : 1.02 9582.54 37.43 0.00 0.00 13175.04 2318.97 25407.80 00:09:12.699 [2024-12-13T23:42:43.431Z] =================================================================================================================== 00:09:12.699 [2024-12-13T23:42:43.431Z] Total : 67082.39 262.04 0.00 0.00 13257.30 2318.97 26617.70 00:09:13.642 00:09:13.642 real 0m2.698s 00:09:13.642 user 0m2.406s 00:09:13.642 sys 0m0.178s 00:09:13.642 23:42:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:13.642 ************************************ 00:09:13.642 END TEST bdev_write_zeroes 00:09:13.642 ************************************ 00:09:13.642 23:42:44 -- common/autotest_common.sh@10 -- # set +x 00:09:13.642 23:42:44 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:13.642 23:42:44 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:13.642 23:42:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:13.642 23:42:44 -- common/autotest_common.sh@10 -- # set +x 00:09:13.642 ************************************ 00:09:13.642 START TEST bdev_json_nonenclosed 00:09:13.642 ************************************ 00:09:13.642 23:42:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:13.642 [2024-12-13 23:42:44.196270] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
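A quick sanity check on the MiB/s column in these tables: it is simply IOPS × I/O size / 2^20. With the 4096-byte I/Os of the write_zeroes run that is IOPS / 256 (9611.13 / 256 ≈ 37.54, and 67082.39 / 256 ≈ 262.04 for the Total row), and with the 65536-byte I/Os of the verify run above it is IOPS / 16 (224.25 / 16 ≈ 14.02). The same arithmetic in shell, for the Total row:

    awk 'BEGIN { printf "%.2f MiB/s\n", 67082.39 * 4096 / 1048576 }'   # 262.04, matching the Total row above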
00:09:13.642 [2024-12-13 23:42:44.196380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62714 ] 00:09:13.642 [2024-12-13 23:42:44.347841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.904 [2024-12-13 23:42:44.524426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.904 [2024-12-13 23:42:44.524613] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:13.904 [2024-12-13 23:42:44.524645] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:14.165 00:09:14.165 real 0m0.670s 00:09:14.165 user 0m0.466s 00:09:14.165 sys 0m0.097s 00:09:14.165 23:42:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:14.165 ************************************ 00:09:14.165 END TEST bdev_json_nonenclosed 00:09:14.165 ************************************ 00:09:14.165 23:42:44 -- common/autotest_common.sh@10 -- # set +x 00:09:14.165 23:42:44 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:14.165 23:42:44 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:14.165 23:42:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:14.165 23:42:44 -- common/autotest_common.sh@10 -- # set +x 00:09:14.165 ************************************ 00:09:14.165 START TEST bdev_json_nonarray 00:09:14.165 ************************************ 00:09:14.165 23:42:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:14.426 [2024-12-13 23:42:44.926292] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:14.426 [2024-12-13 23:42:44.926404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62745 ] 00:09:14.426 [2024-12-13 23:42:45.076753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.686 [2024-12-13 23:42:45.256453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.686 [2024-12-13 23:42:45.256642] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
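bdev_json_nonenclosed and bdev_json_nonarray (which wraps up just below) are negative tests: each feeds bdevperf a configuration that breaks one of the top-level shape rules enforced in json_config.c, a document that is not a JSON object ("not enclosed in {}") and one whose 'subsystems' member is not an array. Those checks live in SPDK's C code; purely as an illustration of the two shapes being rejected, the same conditions can be expressed with jq (the path below is a hypothetical stand-in, not the actual fixture files):

    cfg=some_config.json    # hypothetical path; stand-in for the nonenclosed/nonarray fixtures
    # bdev_json_nonenclosed: the document root must be a JSON object
    jq -e 'type == "object"' "$cfg" >/dev/null || echo 'not enclosed in {}'
    # bdev_json_nonarray: .subsystems must be an array
    jq -e '.subsystems | type == "array"' "$cfg" >/dev/null || echo "'subsystems' should be an array"

In both traced runs the app stops with an error, which is exactly the outcome these tests are meant to exercise.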
00:09:14.686 [2024-12-13 23:42:45.256675] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:14.946 00:09:14.946 real 0m0.679s 00:09:14.946 user 0m0.466s 00:09:14.946 sys 0m0.108s 00:09:14.946 23:42:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:14.946 ************************************ 00:09:14.946 END TEST bdev_json_nonarray 00:09:14.946 ************************************ 00:09:14.946 23:42:45 -- common/autotest_common.sh@10 -- # set +x 00:09:14.946 23:42:45 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:09:14.946 23:42:45 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:09:14.946 23:42:45 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:14.946 23:42:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:14.946 23:42:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:14.946 23:42:45 -- common/autotest_common.sh@10 -- # set +x 00:09:14.946 ************************************ 00:09:14.946 START TEST bdev_gpt_uuid 00:09:14.946 ************************************ 00:09:14.946 23:42:45 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid 00:09:14.946 23:42:45 -- bdev/blockdev.sh@612 -- # local bdev 00:09:14.946 23:42:45 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:09:14.946 23:42:45 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=62771 00:09:14.946 23:42:45 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:14.946 23:42:45 -- bdev/blockdev.sh@47 -- # waitforlisten 62771 00:09:14.946 23:42:45 -- common/autotest_common.sh@829 -- # '[' -z 62771 ']' 00:09:14.946 23:42:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.947 23:42:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:14.947 23:42:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.947 23:42:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:14.947 23:42:45 -- common/autotest_common.sh@10 -- # set +x 00:09:14.947 23:42:45 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:14.947 [2024-12-13 23:42:45.665822] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:14.947 [2024-12-13 23:42:45.665955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62771 ] 00:09:15.207 [2024-12-13 23:42:45.812242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.468 [2024-12-13 23:42:45.985988] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:15.468 [2024-12-13 23:42:45.986239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.854 23:42:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:16.854 23:42:47 -- common/autotest_common.sh@862 -- # return 0 00:09:16.854 23:42:47 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:16.854 23:42:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.854 23:42:47 -- common/autotest_common.sh@10 -- # set +x 00:09:16.854 Some configs were skipped because the RPC state that can call them passed over. 
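From here the bdev_gpt_uuid test drives a long-lived spdk_tgt over its RPC socket; rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py. The "Some configs were skipped" notice is load_config reporting entries in bdev.json whose RPCs can no longer be invoked because the target has already moved past the startup state they belong to. A standalone equivalent of the calls traced here (load_config above, bdev_wait_for_examine just below), assuming a spdk_tgt already listening on the default /var/tmp/spdk.sock:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/scripts/rpc.py" load_config -j "$SPDK/test/bdev/bdev.json"   # replay the saved bdev configuration
    "$SPDK/scripts/rpc.py" bdev_wait_for_examine                        # returns once bdev examine (GPT probing) completes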
00:09:16.854 23:42:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.854 23:42:47 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:09:16.854 23:42:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.854 23:42:47 -- common/autotest_common.sh@10 -- # set +x 00:09:16.854 23:42:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.854 23:42:47 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:16.854 23:42:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.854 23:42:47 -- common/autotest_common.sh@10 -- # set +x 00:09:16.854 23:42:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.854 23:42:47 -- bdev/blockdev.sh@619 -- # bdev='[ 00:09:16.854 { 00:09:16.854 "name": "Nvme0n1p1", 00:09:16.854 "aliases": [ 00:09:16.854 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:16.854 ], 00:09:16.854 "product_name": "GPT Disk", 00:09:16.854 "block_size": 4096, 00:09:16.854 "num_blocks": 774144, 00:09:16.854 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:16.854 "md_size": 64, 00:09:16.854 "md_interleave": false, 00:09:16.854 "dif_type": 0, 00:09:16.854 "assigned_rate_limits": { 00:09:16.854 "rw_ios_per_sec": 0, 00:09:16.854 "rw_mbytes_per_sec": 0, 00:09:16.854 "r_mbytes_per_sec": 0, 00:09:16.854 "w_mbytes_per_sec": 0 00:09:16.854 }, 00:09:16.854 "claimed": false, 00:09:16.854 "zoned": false, 00:09:16.854 "supported_io_types": { 00:09:16.854 "read": true, 00:09:16.854 "write": true, 00:09:16.854 "unmap": true, 00:09:16.854 "write_zeroes": true, 00:09:16.854 "flush": true, 00:09:16.854 "reset": true, 00:09:16.854 "compare": true, 00:09:16.854 "compare_and_write": false, 00:09:16.854 "abort": true, 00:09:16.854 "nvme_admin": false, 00:09:16.854 "nvme_io": false 00:09:16.854 }, 00:09:16.854 "driver_specific": { 00:09:16.854 "gpt": { 00:09:16.854 "base_bdev": "Nvme0n1", 00:09:16.854 "offset_blocks": 256, 00:09:16.854 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:16.854 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:16.854 "partition_name": "SPDK_TEST_first" 00:09:16.854 } 00:09:16.854 } 00:09:16.854 } 00:09:16.854 ]' 00:09:16.854 23:42:47 -- bdev/blockdev.sh@620 -- # jq -r length 00:09:16.854 23:42:47 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:09:16.854 23:42:47 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:09:16.854 23:42:47 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:16.854 23:42:47 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:17.115 23:42:47 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:17.115 23:42:47 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:17.115 23:42:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.115 23:42:47 -- common/autotest_common.sh@10 -- # set +x 00:09:17.115 23:42:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.115 23:42:47 -- bdev/blockdev.sh@624 -- # bdev='[ 00:09:17.115 { 00:09:17.115 "name": "Nvme0n1p2", 00:09:17.115 "aliases": [ 00:09:17.115 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:17.115 ], 00:09:17.115 "product_name": "GPT Disk", 00:09:17.115 "block_size": 4096, 00:09:17.115 "num_blocks": 774143, 00:09:17.115 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:09:17.115 "md_size": 64, 00:09:17.115 "md_interleave": false, 00:09:17.115 "dif_type": 0, 00:09:17.115 "assigned_rate_limits": { 00:09:17.115 "rw_ios_per_sec": 0, 00:09:17.115 "rw_mbytes_per_sec": 0, 00:09:17.115 "r_mbytes_per_sec": 0, 00:09:17.115 "w_mbytes_per_sec": 0 00:09:17.115 }, 00:09:17.115 "claimed": false, 00:09:17.115 "zoned": false, 00:09:17.115 "supported_io_types": { 00:09:17.115 "read": true, 00:09:17.115 "write": true, 00:09:17.115 "unmap": true, 00:09:17.115 "write_zeroes": true, 00:09:17.115 "flush": true, 00:09:17.115 "reset": true, 00:09:17.115 "compare": true, 00:09:17.115 "compare_and_write": false, 00:09:17.115 "abort": true, 00:09:17.115 "nvme_admin": false, 00:09:17.115 "nvme_io": false 00:09:17.115 }, 00:09:17.115 "driver_specific": { 00:09:17.115 "gpt": { 00:09:17.115 "base_bdev": "Nvme0n1", 00:09:17.115 "offset_blocks": 774400, 00:09:17.115 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:17.115 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:17.115 "partition_name": "SPDK_TEST_second" 00:09:17.115 } 00:09:17.115 } 00:09:17.115 } 00:09:17.115 ]' 00:09:17.115 23:42:47 -- bdev/blockdev.sh@625 -- # jq -r length 00:09:17.115 23:42:47 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:09:17.115 23:42:47 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:09:17.115 23:42:47 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:17.115 23:42:47 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:17.115 23:42:47 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:17.115 23:42:47 -- bdev/blockdev.sh@629 -- # killprocess 62771 00:09:17.115 23:42:47 -- common/autotest_common.sh@936 -- # '[' -z 62771 ']' 00:09:17.115 23:42:47 -- common/autotest_common.sh@940 -- # kill -0 62771 00:09:17.115 23:42:47 -- common/autotest_common.sh@941 -- # uname 00:09:17.115 23:42:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:17.115 23:42:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62771 00:09:17.115 23:42:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:17.115 23:42:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:17.115 killing process with pid 62771 00:09:17.115 23:42:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62771' 00:09:17.115 23:42:47 -- common/autotest_common.sh@955 -- # kill 62771 00:09:17.115 23:42:47 -- common/autotest_common.sh@960 -- # wait 62771 00:09:18.519 00:09:18.519 real 0m3.577s 00:09:18.519 user 0m3.878s 00:09:18.519 sys 0m0.358s 00:09:18.519 23:42:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:18.519 ************************************ 00:09:18.519 END TEST bdev_gpt_uuid 00:09:18.519 ************************************ 00:09:18.519 23:42:49 -- common/autotest_common.sh@10 -- # set +x 00:09:18.519 23:42:49 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:09:18.519 23:42:49 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:09:18.519 23:42:49 -- bdev/blockdev.sh@809 -- # cleanup 00:09:18.519 23:42:49 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:18.519 23:42:49 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:18.519 23:42:49 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 
00:09:18.519 23:42:49 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:09:18.519 23:42:49 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:09:18.519 23:42:49 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:19.092 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:19.092 Waiting for block devices as requested 00:09:19.092 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:09:19.092 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:09:19.352 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:09:19.352 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.642 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:09:24.642 23:42:54 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:09:24.642 23:42:54 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:09:24.642 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:24.642 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:09:24.642 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:24.642 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:09:24.642 23:42:55 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:09:24.642 00:09:24.642 real 0m59.071s 00:09:24.642 user 1m15.742s 00:09:24.642 sys 0m7.909s 00:09:24.642 23:42:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:24.642 ************************************ 00:09:24.642 END TEST blockdev_nvme_gpt 00:09:24.642 ************************************ 00:09:24.642 23:42:55 -- common/autotest_common.sh@10 -- # set +x 00:09:24.642 23:42:55 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:24.642 23:42:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:24.642 23:42:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:24.642 23:42:55 -- common/autotest_common.sh@10 -- # set +x 00:09:24.642 ************************************ 00:09:24.643 START TEST nvme 00:09:24.643 ************************************ 00:09:24.643 23:42:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:24.904 * Looking for test storage... 
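The wipefs output above is the teardown erasing the GPT created for the test. The byte patterns it reports are the standard on-disk signatures: 45 46 49 20 50 41 52 54 is ASCII "EFI PART", the GPT header magic, at LBA 1 (byte offset 0x1000 on a 4096-byte-block namespace) and again in the backup header at the last LBA (0x17a179000, consistent with the 1,548,666-block namespace identified later in this log), while 55 aa at offset 0x1fe is the boot signature that makes the protective MBR valid. Decoding the first pattern:

    echo '45 46 49 20 50 41 52 54' | xxd -r -p; echo    # prints: EFI PART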
00:09:24.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:24.904 23:42:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:24.904 23:42:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:24.904 23:42:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:24.904 23:42:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:24.904 23:42:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:24.904 23:42:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:24.904 23:42:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:24.904 23:42:55 -- scripts/common.sh@335 -- # IFS=.-: 00:09:24.904 23:42:55 -- scripts/common.sh@335 -- # read -ra ver1 00:09:24.904 23:42:55 -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.904 23:42:55 -- scripts/common.sh@336 -- # read -ra ver2 00:09:24.904 23:42:55 -- scripts/common.sh@337 -- # local 'op=<' 00:09:24.904 23:42:55 -- scripts/common.sh@339 -- # ver1_l=2 00:09:24.904 23:42:55 -- scripts/common.sh@340 -- # ver2_l=1 00:09:24.904 23:42:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:24.904 23:42:55 -- scripts/common.sh@343 -- # case "$op" in 00:09:24.904 23:42:55 -- scripts/common.sh@344 -- # : 1 00:09:24.905 23:42:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:24.905 23:42:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:24.905 23:42:55 -- scripts/common.sh@364 -- # decimal 1 00:09:24.905 23:42:55 -- scripts/common.sh@352 -- # local d=1 00:09:24.905 23:42:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.905 23:42:55 -- scripts/common.sh@354 -- # echo 1 00:09:24.905 23:42:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:24.905 23:42:55 -- scripts/common.sh@365 -- # decimal 2 00:09:24.905 23:42:55 -- scripts/common.sh@352 -- # local d=2 00:09:24.905 23:42:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.905 23:42:55 -- scripts/common.sh@354 -- # echo 2 00:09:24.905 23:42:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:24.905 23:42:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:24.905 23:42:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:24.905 23:42:55 -- scripts/common.sh@367 -- # return 0 00:09:24.905 23:42:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.905 23:42:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:24.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.905 --rc genhtml_branch_coverage=1 00:09:24.905 --rc genhtml_function_coverage=1 00:09:24.905 --rc genhtml_legend=1 00:09:24.905 --rc geninfo_all_blocks=1 00:09:24.905 --rc geninfo_unexecuted_blocks=1 00:09:24.905 00:09:24.905 ' 00:09:24.905 23:42:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:24.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.905 --rc genhtml_branch_coverage=1 00:09:24.905 --rc genhtml_function_coverage=1 00:09:24.905 --rc genhtml_legend=1 00:09:24.905 --rc geninfo_all_blocks=1 00:09:24.905 --rc geninfo_unexecuted_blocks=1 00:09:24.905 00:09:24.905 ' 00:09:24.905 23:42:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:24.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.905 --rc genhtml_branch_coverage=1 00:09:24.905 --rc genhtml_function_coverage=1 00:09:24.905 --rc genhtml_legend=1 00:09:24.905 --rc geninfo_all_blocks=1 00:09:24.905 --rc geninfo_unexecuted_blocks=1 00:09:24.905 00:09:24.905 ' 00:09:24.905 23:42:55 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:24.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.905 --rc genhtml_branch_coverage=1 00:09:24.905 --rc genhtml_function_coverage=1 00:09:24.905 --rc genhtml_legend=1 00:09:24.905 --rc geninfo_all_blocks=1 00:09:24.905 --rc geninfo_unexecuted_blocks=1 00:09:24.905 00:09:24.905 ' 00:09:24.905 23:42:55 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:25.847 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:25.847 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:25.847 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:09:25.847 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:09:25.847 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:09:25.847 23:42:56 -- nvme/nvme.sh@79 -- # uname 00:09:25.847 23:42:56 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:25.847 23:42:56 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:25.847 23:42:56 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:25.847 23:42:56 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:25.847 23:42:56 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:09:25.847 23:42:56 -- common/autotest_common.sh@1055 -- # echo 0 00:09:25.847 Waiting for stub to ready for secondary processes... 00:09:25.847 23:42:56 -- common/autotest_common.sh@1057 -- # stubpid=63443 00:09:25.847 23:42:56 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:09:25.847 23:42:56 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:25.847 23:42:56 -- common/autotest_common.sh@1061 -- # [[ -e /proc/63443 ]] 00:09:25.847 23:42:56 -- common/autotest_common.sh@1062 -- # sleep 1s 00:09:25.847 23:42:56 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:26.109 [2024-12-13 23:42:56.592849] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:26.109 [2024-12-13 23:42:56.592953] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.680 [2024-12-13 23:42:57.349131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:26.941 [2024-12-13 23:42:57.521340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.942 [2024-12-13 23:42:57.521646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.942 [2024-12-13 23:42:57.521724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.942 [2024-12-13 23:42:57.542341] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:26.942 [2024-12-13 23:42:57.556417] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:26.942 [2024-12-13 23:42:57.556756] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:26.942 23:42:57 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:26.942 23:42:57 -- common/autotest_common.sh@1061 -- # [[ -e /proc/63443 ]] 00:09:26.942 23:42:57 -- common/autotest_common.sh@1062 -- # sleep 1s 00:09:26.942 [2024-12-13 23:42:57.568542] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:26.942 [2024-12-13 23:42:57.568687] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:26.942 [2024-12-13 23:42:57.568780] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:26.942 [2024-12-13 23:42:57.575539] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:26.942 [2024-12-13 23:42:57.575664] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:26.942 [2024-12-13 23:42:57.575751] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:26.942 [2024-12-13 23:42:57.582737] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:26.942 [2024-12-13 23:42:57.582860] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:26.942 [2024-12-13 23:42:57.582950] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:26.942 [2024-12-13 23:42:57.583027] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:26.942 [2024-12-13 23:42:57.583155] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:27.884 23:42:58 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:27.884 done. 00:09:27.884 23:42:58 -- common/autotest_common.sh@1064 -- # echo done. 
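The stub started above (-s 4096 gives it 4096 MB of hugepage memory, -i 0 is the shared-memory ID, and core mask 0xE matches the three reactors on cores 1-3) is a primary DPDK process that keeps the NVMe controllers attached, and exposes them through the CUSE nodes just created, so each short-lived test binary can join as a secondary process instead of re-probing the devices. The readiness handshake traced above is a simple poll: once the stub finishes initializing it creates /var/run/spdk_stub0, and autotest_common.sh sleeps until that file appears, giving up if the stub PID vanishes first. A condensed sketch of that loop (error handling trimmed; 63443 is the PID from this run):

    stubpid=63443
    while [ ! -e /var/run/spdk_stub0 ]; do
        # bail out if the stub process died before signalling readiness
        [[ -e /proc/$stubpid ]] || { echo 'stub exited before becoming ready'; exit 1; }
        sleep 1s
    done
    echo done.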
00:09:27.884 23:42:58 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:27.884 23:42:58 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:09:27.884 23:42:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:27.884 23:42:58 -- common/autotest_common.sh@10 -- # set +x 00:09:27.884 ************************************ 00:09:27.884 START TEST nvme_reset 00:09:27.884 ************************************ 00:09:27.884 23:42:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:28.143 Initializing NVMe Controllers 00:09:28.143 Skipping QEMU NVMe SSD at 0000:00:06.0 00:09:28.143 Skipping QEMU NVMe SSD at 0000:00:07.0 00:09:28.143 Skipping QEMU NVMe SSD at 0000:00:09.0 00:09:28.143 Skipping QEMU NVMe SSD at 0000:00:08.0 00:09:28.143 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:28.143 00:09:28.143 real 0m0.215s 00:09:28.143 user 0m0.056s 00:09:28.143 sys 0m0.111s 00:09:28.143 23:42:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:28.143 23:42:58 -- common/autotest_common.sh@10 -- # set +x 00:09:28.143 ************************************ 00:09:28.143 END TEST nvme_reset 00:09:28.143 ************************************ 00:09:28.143 23:42:58 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:28.143 23:42:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:28.143 23:42:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:28.143 23:42:58 -- common/autotest_common.sh@10 -- # set +x 00:09:28.143 ************************************ 00:09:28.143 START TEST nvme_identify 00:09:28.143 ************************************ 00:09:28.143 23:42:58 -- common/autotest_common.sh@1114 -- # nvme_identify 00:09:28.143 23:42:58 -- nvme/nvme.sh@12 -- # bdfs=() 00:09:28.143 23:42:58 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:28.143 23:42:58 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:28.143 23:42:58 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:28.143 23:42:58 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:28.143 23:42:58 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:28.143 23:42:58 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:28.143 23:42:58 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:28.143 23:42:58 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:28.404 23:42:58 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:28.404 23:42:58 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:28.404 23:42:58 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:28.404 [2024-12-13 23:42:59.054328] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 63485 terminated unexpected 00:09:28.404 ===================================================== 00:09:28.404 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:28.404 ===================================================== 00:09:28.404 Controller Capabilities/Features 00:09:28.404 ================================ 00:09:28.404 Vendor ID: 1b36 00:09:28.404 Subsystem Vendor ID: 1af4 00:09:28.404 Serial Number: 12340 00:09:28.404 Model Number: QEMU NVMe Ctrl 00:09:28.404 Firmware Version: 8.0.0 00:09:28.404 Recommended Arb 
Burst: 6 00:09:28.404 IEEE OUI Identifier: 00 54 52 00:09:28.404 Multi-path I/O 00:09:28.404 May have multiple subsystem ports: No 00:09:28.404 May have multiple controllers: No 00:09:28.404 Associated with SR-IOV VF: No 00:09:28.404 Max Data Transfer Size: 524288 00:09:28.404 Max Number of Namespaces: 256 00:09:28.404 Max Number of I/O Queues: 64 00:09:28.404 NVMe Specification Version (VS): 1.4 00:09:28.404 NVMe Specification Version (Identify): 1.4 00:09:28.404 Maximum Queue Entries: 2048 00:09:28.404 Contiguous Queues Required: Yes 00:09:28.404 Arbitration Mechanisms Supported 00:09:28.404 Weighted Round Robin: Not Supported 00:09:28.404 Vendor Specific: Not Supported 00:09:28.404 Reset Timeout: 7500 ms 00:09:28.404 Doorbell Stride: 4 bytes 00:09:28.404 NVM Subsystem Reset: Not Supported 00:09:28.404 Command Sets Supported 00:09:28.404 NVM Command Set: Supported 00:09:28.404 Boot Partition: Not Supported 00:09:28.404 Memory Page Size Minimum: 4096 bytes 00:09:28.404 Memory Page Size Maximum: 65536 bytes 00:09:28.404 Persistent Memory Region: Not Supported 00:09:28.404 Optional Asynchronous Events Supported 00:09:28.404 Namespace Attribute Notices: Supported 00:09:28.404 Firmware Activation Notices: Not Supported 00:09:28.404 ANA Change Notices: Not Supported 00:09:28.404 PLE Aggregate Log Change Notices: Not Supported 00:09:28.404 LBA Status Info Alert Notices: Not Supported 00:09:28.404 EGE Aggregate Log Change Notices: Not Supported 00:09:28.404 Normal NVM Subsystem Shutdown event: Not Supported 00:09:28.404 Zone Descriptor Change Notices: Not Supported 00:09:28.404 Discovery Log Change Notices: Not Supported 00:09:28.404 Controller Attributes 00:09:28.404 128-bit Host Identifier: Not Supported 00:09:28.404 Non-Operational Permissive Mode: Not Supported 00:09:28.404 NVM Sets: Not Supported 00:09:28.404 Read Recovery Levels: Not Supported 00:09:28.404 Endurance Groups: Not Supported 00:09:28.404 Predictable Latency Mode: Not Supported 00:09:28.404 Traffic Based Keep ALive: Not Supported 00:09:28.404 Namespace Granularity: Not Supported 00:09:28.404 SQ Associations: Not Supported 00:09:28.404 UUID List: Not Supported 00:09:28.404 Multi-Domain Subsystem: Not Supported 00:09:28.404 Fixed Capacity Management: Not Supported 00:09:28.404 Variable Capacity Management: Not Supported 00:09:28.404 Delete Endurance Group: Not Supported 00:09:28.404 Delete NVM Set: Not Supported 00:09:28.404 Extended LBA Formats Supported: Supported 00:09:28.404 Flexible Data Placement Supported: Not Supported 00:09:28.404 00:09:28.404 Controller Memory Buffer Support 00:09:28.404 ================================ 00:09:28.405 Supported: No 00:09:28.405 00:09:28.405 Persistent Memory Region Support 00:09:28.405 ================================ 00:09:28.405 Supported: No 00:09:28.405 00:09:28.405 Admin Command Set Attributes 00:09:28.405 ============================ 00:09:28.405 Security Send/Receive: Not Supported 00:09:28.405 Format NVM: Supported 00:09:28.405 Firmware Activate/Download: Not Supported 00:09:28.405 Namespace Management: Supported 00:09:28.405 Device Self-Test: Not Supported 00:09:28.405 Directives: Supported 00:09:28.405 NVMe-MI: Not Supported 00:09:28.405 Virtualization Management: Not Supported 00:09:28.405 Doorbell Buffer Config: Supported 00:09:28.405 Get LBA Status Capability: Not Supported 00:09:28.405 Command & Feature Lockdown Capability: Not Supported 00:09:28.405 Abort Command Limit: 4 00:09:28.405 Async Event Request Limit: 4 00:09:28.405 Number of Firmware Slots: N/A 00:09:28.405 
Firmware Slot 1 Read-Only: N/A 00:09:28.405 Firmware Activation Without Reset: N/A 00:09:28.405 Multiple Update Detection Support: N/A 00:09:28.405 Firmware Update Granularity: No Information Provided 00:09:28.405 Per-Namespace SMART Log: Yes 00:09:28.405 Asymmetric Namespace Access Log Page: Not Supported 00:09:28.405 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:28.405 Command Effects Log Page: Supported 00:09:28.405 Get Log Page Extended Data: Supported 00:09:28.405 Telemetry Log Pages: Not Supported 00:09:28.405 Persistent Event Log Pages: Not Supported 00:09:28.405 Supported Log Pages Log Page: May Support 00:09:28.405 Commands Supported & Effects Log Page: Not Supported 00:09:28.405 Feature Identifiers & Effects Log Page:May Support 00:09:28.405 NVMe-MI Commands & Effects Log Page: May Support 00:09:28.405 Data Area 4 for Telemetry Log: Not Supported 00:09:28.405 Error Log Page Entries Supported: 1 00:09:28.405 Keep Alive: Not Supported 00:09:28.405 00:09:28.405 NVM Command Set Attributes 00:09:28.405 ========================== 00:09:28.405 Submission Queue Entry Size 00:09:28.405 Max: 64 00:09:28.405 Min: 64 00:09:28.405 Completion Queue Entry Size 00:09:28.405 Max: 16 00:09:28.405 Min: 16 00:09:28.405 Number of Namespaces: 256 00:09:28.405 Compare Command: Supported 00:09:28.405 Write Uncorrectable Command: Not Supported 00:09:28.405 Dataset Management Command: Supported 00:09:28.405 Write Zeroes Command: Supported 00:09:28.405 Set Features Save Field: Supported 00:09:28.405 Reservations: Not Supported 00:09:28.405 Timestamp: Supported 00:09:28.405 Copy: Supported 00:09:28.405 Volatile Write Cache: Present 00:09:28.405 Atomic Write Unit (Normal): 1 00:09:28.405 Atomic Write Unit (PFail): 1 00:09:28.405 Atomic Compare & Write Unit: 1 00:09:28.405 Fused Compare & Write: Not Supported 00:09:28.405 Scatter-Gather List 00:09:28.405 SGL Command Set: Supported 00:09:28.405 SGL Keyed: Not Supported 00:09:28.405 SGL Bit Bucket Descriptor: Not Supported 00:09:28.405 SGL Metadata Pointer: Not Supported 00:09:28.405 Oversized SGL: Not Supported 00:09:28.405 SGL Metadata Address: Not Supported 00:09:28.405 SGL Offset: Not Supported 00:09:28.405 Transport SGL Data Block: Not Supported 00:09:28.405 Replay Protected Memory Block: Not Supported 00:09:28.405 00:09:28.405 Firmware Slot Information 00:09:28.405 ========================= 00:09:28.405 Active slot: 1 00:09:28.405 Slot 1 Firmware Revision: 1.0 00:09:28.405 00:09:28.405 00:09:28.405 Commands Supported and Effects 00:09:28.405 ============================== 00:09:28.405 Admin Commands 00:09:28.405 -------------- 00:09:28.405 Delete I/O Submission Queue (00h): Supported 00:09:28.405 Create I/O Submission Queue (01h): Supported 00:09:28.405 Get Log Page (02h): Supported 00:09:28.405 Delete I/O Completion Queue (04h): Supported 00:09:28.405 Create I/O Completion Queue (05h): Supported 00:09:28.405 Identify (06h): Supported 00:09:28.405 Abort (08h): Supported 00:09:28.405 Set Features (09h): Supported 00:09:28.405 Get Features (0Ah): Supported 00:09:28.405 Asynchronous Event Request (0Ch): Supported 00:09:28.405 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:28.405 Directive Send (19h): Supported 00:09:28.405 Directive Receive (1Ah): Supported 00:09:28.405 Virtualization Management (1Ch): Supported 00:09:28.405 Doorbell Buffer Config (7Ch): Supported 00:09:28.405 Format NVM (80h): Supported LBA-Change 00:09:28.405 I/O Commands 00:09:28.405 ------------ 00:09:28.405 Flush (00h): Supported LBA-Change 00:09:28.405 Write (01h): 
Supported LBA-Change 00:09:28.405 Read (02h): Supported 00:09:28.405 Compare (05h): Supported 00:09:28.405 Write Zeroes (08h): Supported LBA-Change 00:09:28.405 Dataset Management (09h): Supported LBA-Change 00:09:28.405 Unknown (0Ch): Supported 00:09:28.405 Unknown (12h): Supported 00:09:28.405 Copy (19h): Supported LBA-Change 00:09:28.405 Unknown (1Dh): Supported LBA-Change 00:09:28.405 00:09:28.405 Error Log 00:09:28.405 ========= 00:09:28.405 00:09:28.405 Arbitration 00:09:28.405 =========== 00:09:28.405 Arbitration Burst: no limit 00:09:28.405 00:09:28.405 Power Management 00:09:28.405 ================ 00:09:28.405 Number of Power States: 1 00:09:28.405 Current Power State: Power State #0 00:09:28.405 Power State #0: 00:09:28.405 Max Power: 25.00 W 00:09:28.405 Non-Operational State: Operational 00:09:28.405 Entry Latency: 16 microseconds 00:09:28.405 Exit Latency: 4 microseconds 00:09:28.405 Relative Read Throughput: 0 00:09:28.405 Relative Read Latency: 0 00:09:28.405 Relative Write Throughput: 0 00:09:28.405 Relative Write Latency: 0 00:09:28.405 Idle Power[2024-12-13 23:42:59.056794] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 63485 terminated unexpected 00:09:28.405 : Not Reported 00:09:28.405 Active Power: Not Reported 00:09:28.405 Non-Operational Permissive Mode: Not Supported 00:09:28.405 00:09:28.405 Health Information 00:09:28.405 ================== 00:09:28.405 Critical Warnings: 00:09:28.405 Available Spare Space: OK 00:09:28.405 Temperature: OK 00:09:28.405 Device Reliability: OK 00:09:28.405 Read Only: No 00:09:28.405 Volatile Memory Backup: OK 00:09:28.405 Current Temperature: 323 Kelvin (50 Celsius) 00:09:28.405 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:28.405 Available Spare: 0% 00:09:28.405 Available Spare Threshold: 0% 00:09:28.405 Life Percentage Used: 0% 00:09:28.405 Data Units Read: 1804 00:09:28.405 Data Units Written: 834 00:09:28.405 Host Read Commands: 89709 00:09:28.405 Host Write Commands: 44665 00:09:28.405 Controller Busy Time: 0 minutes 00:09:28.405 Power Cycles: 0 00:09:28.405 Power On Hours: 0 hours 00:09:28.405 Unsafe Shutdowns: 0 00:09:28.405 Unrecoverable Media Errors: 0 00:09:28.405 Lifetime Error Log Entries: 0 00:09:28.405 Warning Temperature Time: 0 minutes 00:09:28.405 Critical Temperature Time: 0 minutes 00:09:28.405 00:09:28.405 Number of Queues 00:09:28.405 ================ 00:09:28.405 Number of I/O Submission Queues: 64 00:09:28.405 Number of I/O Completion Queues: 64 00:09:28.405 00:09:28.405 ZNS Specific Controller Data 00:09:28.405 ============================ 00:09:28.405 Zone Append Size Limit: 0 00:09:28.405 00:09:28.405 00:09:28.405 Active Namespaces 00:09:28.405 ================= 00:09:28.405 Namespace ID:1 00:09:28.406 Error Recovery Timeout: Unlimited 00:09:28.406 Command Set Identifier: NVM (00h) 00:09:28.406 Deallocate: Supported 00:09:28.406 Deallocated/Unwritten Error: Supported 00:09:28.406 Deallocated Read Value: All 0x00 00:09:28.406 Deallocate in Write Zeroes: Not Supported 00:09:28.406 Deallocated Guard Field: 0xFFFF 00:09:28.406 Flush: Supported 00:09:28.406 Reservation: Not Supported 00:09:28.406 Metadata Transferred as: Separate Metadata Buffer 00:09:28.406 Namespace Sharing Capabilities: Private 00:09:28.406 Size (in LBAs): 1548666 (5GiB) 00:09:28.406 Capacity (in LBAs): 1548666 (5GiB) 00:09:28.406 Utilization (in LBAs): 1548666 (5GiB) 00:09:28.406 Thin Provisioning: Not Supported 00:09:28.406 Per-NS Atomic Units: No 00:09:28.406 Maximum Single Source Range Length: 
128 00:09:28.406 Maximum Copy Length: 128 00:09:28.406 Maximum Source Range Count: 128 00:09:28.406 NGUID/EUI64 Never Reused: No 00:09:28.406 Namespace Write Protected: No 00:09:28.406 Number of LBA Formats: 8 00:09:28.406 Current LBA Format: LBA Format #07 00:09:28.406 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:28.406 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:28.406 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:28.406 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:28.406 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:28.406 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:28.406 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:28.406 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:28.406 00:09:28.406 ===================================================== 00:09:28.406 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:28.406 ===================================================== 00:09:28.406 Controller Capabilities/Features 00:09:28.406 ================================ 00:09:28.406 Vendor ID: 1b36 00:09:28.406 Subsystem Vendor ID: 1af4 00:09:28.406 Serial Number: 12341 00:09:28.406 Model Number: QEMU NVMe Ctrl 00:09:28.406 Firmware Version: 8.0.0 00:09:28.406 Recommended Arb Burst: 6 00:09:28.406 IEEE OUI Identifier: 00 54 52 00:09:28.406 Multi-path I/O 00:09:28.406 May have multiple subsystem ports: No 00:09:28.406 May have multiple controllers: No 00:09:28.406 Associated with SR-IOV VF: No 00:09:28.406 Max Data Transfer Size: 524288 00:09:28.406 Max Number of Namespaces: 256 00:09:28.406 Max Number of I/O Queues: 64 00:09:28.406 NVMe Specification Version (VS): 1.4 00:09:28.406 NVMe Specification Version (Identify): 1.4 00:09:28.406 Maximum Queue Entries: 2048 00:09:28.406 Contiguous Queues Required: Yes 00:09:28.406 Arbitration Mechanisms Supported 00:09:28.406 Weighted Round Robin: Not Supported 00:09:28.406 Vendor Specific: Not Supported 00:09:28.406 Reset Timeout: 7500 ms 00:09:28.406 Doorbell Stride: 4 bytes 00:09:28.406 NVM Subsystem Reset: Not Supported 00:09:28.406 Command Sets Supported 00:09:28.406 NVM Command Set: Supported 00:09:28.406 Boot Partition: Not Supported 00:09:28.406 Memory Page Size Minimum: 4096 bytes 00:09:28.406 Memory Page Size Maximum: 65536 bytes 00:09:28.406 Persistent Memory Region: Not Supported 00:09:28.406 Optional Asynchronous Events Supported 00:09:28.406 Namespace Attribute Notices: Supported 00:09:28.406 Firmware Activation Notices: Not Supported 00:09:28.406 ANA Change Notices: Not Supported 00:09:28.406 PLE Aggregate Log Change Notices: Not Supported 00:09:28.406 LBA Status Info Alert Notices: Not Supported 00:09:28.406 EGE Aggregate Log Change Notices: Not Supported 00:09:28.406 Normal NVM Subsystem Shutdown event: Not Supported 00:09:28.406 Zone Descriptor Change Notices: Not Supported 00:09:28.406 Discovery Log Change Notices: Not Supported 00:09:28.406 Controller Attributes 00:09:28.406 128-bit Host Identifier: Not Supported 00:09:28.406 Non-Operational Permissive Mode: Not Supported 00:09:28.406 NVM Sets: Not Supported 00:09:28.406 Read Recovery Levels: Not Supported 00:09:28.406 Endurance Groups: Not Supported 00:09:28.406 Predictable Latency Mode: Not Supported 00:09:28.406 Traffic Based Keep ALive: Not Supported 00:09:28.406 Namespace Granularity: Not Supported 00:09:28.406 SQ Associations: Not Supported 00:09:28.406 UUID List: Not Supported 00:09:28.406 Multi-Domain Subsystem: Not Supported 00:09:28.406 Fixed Capacity Management: Not Supported 00:09:28.406 Variable Capacity 
Management: Not Supported 00:09:28.406 Delete Endurance Group: Not Supported 00:09:28.406 Delete NVM Set: Not Supported 00:09:28.406 Extended LBA Formats Supported: Supported 00:09:28.406 Flexible Data Placement Supported: Not Supported 00:09:28.406 00:09:28.406 Controller Memory Buffer Support 00:09:28.406 ================================ 00:09:28.406 Supported: No 00:09:28.406 00:09:28.406 Persistent Memory Region Support 00:09:28.406 ================================ 00:09:28.406 Supported: No 00:09:28.406 00:09:28.406 Admin Command Set Attributes 00:09:28.406 ============================ 00:09:28.406 Security Send/Receive: Not Supported 00:09:28.406 Format NVM: Supported 00:09:28.406 Firmware Activate/Download: Not Supported 00:09:28.406 Namespace Management: Supported 00:09:28.406 Device Self-Test: Not Supported 00:09:28.406 Directives: Supported 00:09:28.406 NVMe-MI: Not Supported 00:09:28.406 Virtualization Management: Not Supported 00:09:28.406 Doorbell Buffer Config: Supported 00:09:28.406 Get LBA Status Capability: Not Supported 00:09:28.406 Command & Feature Lockdown Capability: Not Supported 00:09:28.406 Abort Command Limit: 4 00:09:28.406 Async Event Request Limit: 4 00:09:28.406 Number of Firmware Slots: N/A 00:09:28.406 Firmware Slot 1 Read-Only: N/A 00:09:28.406 Firmware Activation Without Reset: N/A 00:09:28.406 Multiple Update Detection Support: N/A 00:09:28.406 Firmware Update Granularity: No Information Provided 00:09:28.406 Per-Namespace SMART Log: Yes 00:09:28.406 Asymmetric Namespace Access Log Page: Not Supported 00:09:28.406 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:28.406 Command Effects Log Page: Supported 00:09:28.406 Get Log Page Extended Data: Supported 00:09:28.406 Telemetry Log Pages: Not Supported 00:09:28.406 Persistent Event Log Pages: Not Supported 00:09:28.406 Supported Log Pages Log Page: May Support 00:09:28.406 Commands Supported & Effects Log Page: Not Supported 00:09:28.406 Feature Identifiers & Effects Log Page:May Support 00:09:28.406 NVMe-MI Commands & Effects Log Page: May Support 00:09:28.406 Data Area 4 for Telemetry Log: Not Supported 00:09:28.406 Error Log Page Entries Supported: 1 00:09:28.406 Keep Alive: Not Supported 00:09:28.406 00:09:28.406 NVM Command Set Attributes 00:09:28.406 ========================== 00:09:28.406 Submission Queue Entry Size 00:09:28.406 Max: 64 00:09:28.406 Min: 64 00:09:28.406 Completion Queue Entry Size 00:09:28.406 Max: 16 00:09:28.406 Min: 16 00:09:28.406 Number of Namespaces: 256 00:09:28.406 Compare Command: Supported 00:09:28.406 Write Uncorrectable Command: Not Supported 00:09:28.406 Dataset Management Command: Supported 00:09:28.406 Write Zeroes Command: Supported 00:09:28.406 Set Features Save Field: Supported 00:09:28.406 Reservations: Not Supported 00:09:28.406 Timestamp: Supported 00:09:28.406 Copy: Supported 00:09:28.406 Volatile Write Cache: Present 00:09:28.406 Atomic Write Unit (Normal): 1 00:09:28.406 Atomic Write Unit (PFail): 1 00:09:28.406 Atomic Compare & Write Unit: 1 00:09:28.406 Fused Compare & Write: Not Supported 00:09:28.406 Scatter-Gather List 00:09:28.406 SGL Command Set: Supported 00:09:28.407 SGL Keyed: Not Supported 00:09:28.407 SGL Bit Bucket Descriptor: Not Supported 00:09:28.407 SGL Metadata Pointer: Not Supported 00:09:28.407 Oversized SGL: Not Supported 00:09:28.407 SGL Metadata Address: Not Supported 00:09:28.407 SGL Offset: Not Supported 00:09:28.407 Transport SGL Data Block: Not Supported 00:09:28.407 Replay Protected Memory Block: Not Supported 00:09:28.407 
00:09:28.407 Firmware Slot Information 00:09:28.407 ========================= 00:09:28.407 Active slot: 1 00:09:28.407 Slot 1 Firmware Revision: 1.0 00:09:28.407 00:09:28.407 00:09:28.407 Commands Supported and Effects 00:09:28.407 ============================== 00:09:28.407 Admin Commands 00:09:28.407 -------------- 00:09:28.407 Delete I/O Submission Queue (00h): Supported 00:09:28.407 Create I/O Submission Queue (01h): Supported 00:09:28.407 Get Log Page (02h): Supported 00:09:28.407 Delete I/O Completion Queue (04h): Supported 00:09:28.407 Create I/O Completion Queue (05h): Supported 00:09:28.407 Identify (06h): Supported 00:09:28.407 Abort (08h): Supported 00:09:28.407 Set Features (09h): Supported 00:09:28.407 Get Features (0Ah): Supported 00:09:28.407 Asynchronous Event Request (0Ch): Supported 00:09:28.407 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:28.407 Directive Send (19h): Supported 00:09:28.407 Directive Receive (1Ah): Supported 00:09:28.407 Virtualization Management (1Ch): Supported 00:09:28.407 Doorbell Buffer Config (7Ch): Supported 00:09:28.407 Format NVM (80h): Supported LBA-Change 00:09:28.407 I/O Commands 00:09:28.407 ------------ 00:09:28.407 Flush (00h): Supported LBA-Change 00:09:28.407 Write (01h): Supported LBA-Change 00:09:28.407 Read (02h): Supported 00:09:28.407 Compare (05h): Supported 00:09:28.407 Write Zeroes (08h): Supported LBA-Change 00:09:28.407 Dataset Management (09h): Supported LBA-Change 00:09:28.407 Unknown (0Ch): Supported 00:09:28.407 Unknown (12h): Supported 00:09:28.407 Copy (19h): Supported LBA-Change 00:09:28.407 Unknown (1Dh): Supported LBA-Change 00:09:28.407 00:09:28.407 Error Log 00:09:28.407 ========= 00:09:28.407 00:09:28.407 Arbitration 00:09:28.407 =========== 00:09:28.407 Arbitration Burst: no limit 00:09:28.407 00:09:28.407 Power Management 00:09:28.407 ================ 00:09:28.407 Number of Power States: 1 00:09:28.407 Current Power State: Power State #0 00:09:28.407 Power State #0: 00:09:28.407 Max Power: 25.00 W 00:09:28.407 Non-Operational State: Operational 00:09:28.407 Entry Latency: 16 microseconds 00:09:28.407 Exit Latency: 4 microseconds 00:09:28.407 Relative Read Throughput: 0 00:09:28.407 Relative Read Latency: 0 00:09:28.407 Relative Write Throughput: 0 00:09:28.407 Relative Write Latency: 0 00:09:28.407 Idle Power: Not Reported 00:09:28.407 Active Power: Not Reported 00:09:28.407 Non-Operational Permissive Mode: Not Supported 00:09:28.407 00:09:28.407 Health Information 00:09:28.407 ================== 00:09:28.407 Critical Warnings: 00:09:28.407 Available Spare Space: OK 00:09:28.407 Temperature: OK 00:09:28.407 Device Reliability: OK 00:09:28.407 Read Only: No 00:09:28.407 Volatile Memory Backup: OK 00:09:28.407 Current Temperature: 323 Kelvin (50 Celsius) 00:09:28.407 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:28.407 Available Spare: 0% 00:09:28.407 Available Spare Threshold: 0% 00:09:28.407 Life Percentage Used: 0% 00:09:28.407 Data Units Read: 1233 00:09:28.407 Data Units Written: 569 00:09:28.407 Host Read Commands: 60592 00:09:28.407 Host Write Commands: 29837 00:09:28.407 Controller Busy Time: 0 minutes 00:09:28.407 Power Cycles: 0 00:09:28.407 Power On Hours: 0 hours 00:09:28.407 Unsafe Shutdowns: 0 00:09:28.407 Unrecoverable Media Errors: 0 00:09:28.407 Lifetime Error Log Entries: 0 00:09:28.407 Warning Temperature Time: 0 minutes 00:09:28.407 Critical Temperature Time: 0 minutes 00:09:28.407 00:09:28.407 Number of Queues 00:09:28.407 ================ 00:09:28.407 Number of I/O 
Submission Queues: 64 00:09:28.407 Number of I/O Completion Queues: 64 00:09:28.407 00:09:28.407 ZNS Specific Controller Data 00:09:28.407 ============================ 00:09:28.407 Zone Append Size Limit: 0 00:09:28.407 00:09:28.407 00:09:28.407 Active Namespaces 00:09:28.407 ================= 00:09:28.407 Namespace ID:1 00:09:28.407 Error Recovery Timeout: Unlimited 00:09:28.407 Command Set Identifier: [2024-12-13 23:42:59.058127] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 63485 terminated unexpected 00:09:28.407 NVM (00h) 00:09:28.407 Deallocate: Supported 00:09:28.407 Deallocated/Unwritten Error: Supported 00:09:28.407 Deallocated Read Value: All 0x00 00:09:28.407 Deallocate in Write Zeroes: Not Supported 00:09:28.407 Deallocated Guard Field: 0xFFFF 00:09:28.407 Flush: Supported 00:09:28.407 Reservation: Not Supported 00:09:28.407 Namespace Sharing Capabilities: Private 00:09:28.407 Size (in LBAs): 1310720 (5GiB) 00:09:28.407 Capacity (in LBAs): 1310720 (5GiB) 00:09:28.407 Utilization (in LBAs): 1310720 (5GiB) 00:09:28.407 Thin Provisioning: Not Supported 00:09:28.407 Per-NS Atomic Units: No 00:09:28.407 Maximum Single Source Range Length: 128 00:09:28.407 Maximum Copy Length: 128 00:09:28.407 Maximum Source Range Count: 128 00:09:28.407 NGUID/EUI64 Never Reused: No 00:09:28.407 Namespace Write Protected: No 00:09:28.407 Number of LBA Formats: 8 00:09:28.407 Current LBA Format: LBA Format #04 00:09:28.407 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:28.407 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:28.407 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:28.407 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:28.407 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:28.407 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:28.407 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:28.407 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:28.407 00:09:28.407 ===================================================== 00:09:28.407 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:28.407 ===================================================== 00:09:28.407 Controller Capabilities/Features 00:09:28.407 ================================ 00:09:28.407 Vendor ID: 1b36 00:09:28.407 Subsystem Vendor ID: 1af4 00:09:28.407 Serial Number: 12343 00:09:28.407 Model Number: QEMU NVMe Ctrl 00:09:28.407 Firmware Version: 8.0.0 00:09:28.407 Recommended Arb Burst: 6 00:09:28.407 IEEE OUI Identifier: 00 54 52 00:09:28.407 Multi-path I/O 00:09:28.407 May have multiple subsystem ports: No 00:09:28.407 May have multiple controllers: Yes 00:09:28.407 Associated with SR-IOV VF: No 00:09:28.407 Max Data Transfer Size: 524288 00:09:28.407 Max Number of Namespaces: 256 00:09:28.407 Max Number of I/O Queues: 64 00:09:28.407 NVMe Specification Version (VS): 1.4 00:09:28.407 NVMe Specification Version (Identify): 1.4 00:09:28.407 Maximum Queue Entries: 2048 00:09:28.407 Contiguous Queues Required: Yes 00:09:28.407 Arbitration Mechanisms Supported 00:09:28.407 Weighted Round Robin: Not Supported 00:09:28.407 Vendor Specific: Not Supported 00:09:28.407 Reset Timeout: 7500 ms 00:09:28.408 Doorbell Stride: 4 bytes 00:09:28.408 NVM Subsystem Reset: Not Supported 00:09:28.408 Command Sets Supported 00:09:28.408 NVM Command Set: Supported 00:09:28.408 Boot Partition: Not Supported 00:09:28.408 Memory Page Size Minimum: 4096 bytes 00:09:28.408 Memory Page Size Maximum: 65536 bytes 00:09:28.408 Persistent Memory Region: Not Supported 00:09:28.408 
Optional Asynchronous Events Supported 00:09:28.408 Namespace Attribute Notices: Supported 00:09:28.408 Firmware Activation Notices: Not Supported 00:09:28.408 ANA Change Notices: Not Supported 00:09:28.408 PLE Aggregate Log Change Notices: Not Supported 00:09:28.408 LBA Status Info Alert Notices: Not Supported 00:09:28.408 EGE Aggregate Log Change Notices: Not Supported 00:09:28.408 Normal NVM Subsystem Shutdown event: Not Supported 00:09:28.408 Zone Descriptor Change Notices: Not Supported 00:09:28.408 Discovery Log Change Notices: Not Supported 00:09:28.408 Controller Attributes 00:09:28.408 128-bit Host Identifier: Not Supported 00:09:28.408 Non-Operational Permissive Mode: Not Supported 00:09:28.408 NVM Sets: Not Supported 00:09:28.408 Read Recovery Levels: Not Supported 00:09:28.408 Endurance Groups: Supported 00:09:28.408 Predictable Latency Mode: Not Supported 00:09:28.408 Traffic Based Keep ALive: Not Supported 00:09:28.408 Namespace Granularity: Not Supported 00:09:28.408 SQ Associations: Not Supported 00:09:28.408 UUID List: Not Supported 00:09:28.408 Multi-Domain Subsystem: Not Supported 00:09:28.408 Fixed Capacity Management: Not Supported 00:09:28.408 Variable Capacity Management: Not Supported 00:09:28.408 Delete Endurance Group: Not Supported 00:09:28.408 Delete NVM Set: Not Supported 00:09:28.408 Extended LBA Formats Supported: Supported 00:09:28.408 Flexible Data Placement Supported: Supported 00:09:28.408 00:09:28.408 Controller Memory Buffer Support 00:09:28.408 ================================ 00:09:28.408 Supported: No 00:09:28.408 00:09:28.408 Persistent Memory Region Support 00:09:28.408 ================================ 00:09:28.408 Supported: No 00:09:28.408 00:09:28.408 Admin Command Set Attributes 00:09:28.408 ============================ 00:09:28.408 Security Send/Receive: Not Supported 00:09:28.408 Format NVM: Supported 00:09:28.408 Firmware Activate/Download: Not Supported 00:09:28.408 Namespace Management: Supported 00:09:28.408 Device Self-Test: Not Supported 00:09:28.408 Directives: Supported 00:09:28.408 NVMe-MI: Not Supported 00:09:28.408 Virtualization Management: Not Supported 00:09:28.408 Doorbell Buffer Config: Supported 00:09:28.408 Get LBA Status Capability: Not Supported 00:09:28.408 Command & Feature Lockdown Capability: Not Supported 00:09:28.408 Abort Command Limit: 4 00:09:28.408 Async Event Request Limit: 4 00:09:28.408 Number of Firmware Slots: N/A 00:09:28.408 Firmware Slot 1 Read-Only: N/A 00:09:28.408 Firmware Activation Without Reset: N/A 00:09:28.408 Multiple Update Detection Support: N/A 00:09:28.408 Firmware Update Granularity: No Information Provided 00:09:28.408 Per-Namespace SMART Log: Yes 00:09:28.408 Asymmetric Namespace Access Log Page: Not Supported 00:09:28.408 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:28.408 Command Effects Log Page: Supported 00:09:28.408 Get Log Page Extended Data: Supported 00:09:28.408 Telemetry Log Pages: Not Supported 00:09:28.408 Persistent Event Log Pages: Not Supported 00:09:28.408 Supported Log Pages Log Page: May Support 00:09:28.408 Commands Supported & Effects Log Page: Not Supported 00:09:28.408 Feature Identifiers & Effects Log Page:May Support 00:09:28.408 NVMe-MI Commands & Effects Log Page: May Support 00:09:28.408 Data Area 4 for Telemetry Log: Not Supported 00:09:28.408 Error Log Page Entries Supported: 1 00:09:28.408 Keep Alive: Not Supported 00:09:28.408 00:09:28.408 NVM Command Set Attributes 00:09:28.408 ========================== 00:09:28.408 Submission Queue Entry Size 
00:09:28.408 Max: 64 00:09:28.408 Min: 64 00:09:28.408 Completion Queue Entry Size 00:09:28.408 Max: 16 00:09:28.408 Min: 16 00:09:28.408 Number of Namespaces: 256 00:09:28.408 Compare Command: Supported 00:09:28.408 Write Uncorrectable Command: Not Supported 00:09:28.408 Dataset Management Command: Supported 00:09:28.408 Write Zeroes Command: Supported 00:09:28.408 Set Features Save Field: Supported 00:09:28.408 Reservations: Not Supported 00:09:28.408 Timestamp: Supported 00:09:28.408 Copy: Supported 00:09:28.408 Volatile Write Cache: Present 00:09:28.408 Atomic Write Unit (Normal): 1 00:09:28.408 Atomic Write Unit (PFail): 1 00:09:28.408 Atomic Compare & Write Unit: 1 00:09:28.408 Fused Compare & Write: Not Supported 00:09:28.408 Scatter-Gather List 00:09:28.408 SGL Command Set: Supported 00:09:28.408 SGL Keyed: Not Supported 00:09:28.408 SGL Bit Bucket Descriptor: Not Supported 00:09:28.408 SGL Metadata Pointer: Not Supported 00:09:28.408 Oversized SGL: Not Supported 00:09:28.408 SGL Metadata Address: Not Supported 00:09:28.408 SGL Offset: Not Supported 00:09:28.408 Transport SGL Data Block: Not Supported 00:09:28.408 Replay Protected Memory Block: Not Supported 00:09:28.408 00:09:28.408 Firmware Slot Information 00:09:28.408 ========================= 00:09:28.408 Active slot: 1 00:09:28.408 Slot 1 Firmware Revision: 1.0 00:09:28.408 00:09:28.408 00:09:28.408 Commands Supported and Effects 00:09:28.408 ============================== 00:09:28.408 Admin Commands 00:09:28.408 -------------- 00:09:28.408 Delete I/O Submission Queue (00h): Supported 00:09:28.408 Create I/O Submission Queue (01h): Supported 00:09:28.408 Get Log Page (02h): Supported 00:09:28.408 Delete I/O Completion Queue (04h): Supported 00:09:28.408 Create I/O Completion Queue (05h): Supported 00:09:28.408 Identify (06h): Supported 00:09:28.408 Abort (08h): Supported 00:09:28.427 Set Features (09h): Supported 00:09:28.428 Get Features (0Ah): Supported 00:09:28.428 Asynchronous Event Request (0Ch): Supported 00:09:28.428 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:28.428 Directive Send (19h): Supported 00:09:28.428 Directive Receive (1Ah): Supported 00:09:28.428 Virtualization Management (1Ch): Supported 00:09:28.428 Doorbell Buffer Config (7Ch): Supported 00:09:28.428 Format NVM (80h): Supported LBA-Change 00:09:28.428 I/O Commands 00:09:28.428 ------------ 00:09:28.428 Flush (00h): Supported LBA-Change 00:09:28.428 Write (01h): Supported LBA-Change 00:09:28.428 Read (02h): Supported 00:09:28.428 Compare (05h): Supported 00:09:28.428 Write Zeroes (08h): Supported LBA-Change 00:09:28.428 Dataset Management (09h): Supported LBA-Change 00:09:28.428 Unknown (0Ch): Supported 00:09:28.428 Unknown (12h): Supported 00:09:28.428 Copy (19h): Supported LBA-Change 00:09:28.428 Unknown (1Dh): Supported LBA-Change 00:09:28.428 00:09:28.428 Error Log 00:09:28.428 ========= 00:09:28.428 00:09:28.428 Arbitration 00:09:28.428 =========== 00:09:28.428 Arbitration Burst: no limit 00:09:28.428 00:09:28.428 Power Management 00:09:28.428 ================ 00:09:28.428 Number of Power States: 1 00:09:28.428 Current Power State: Power State #0 00:09:28.428 Power State #0: 00:09:28.428 Max Power: 25.00 W 00:09:28.428 Non-Operational State: Operational 00:09:28.428 Entry Latency: 16 microseconds 00:09:28.428 Exit Latency: 4 microseconds 00:09:28.428 Relative Read Throughput: 0 00:09:28.428 Relative Read Latency: 0 00:09:28.428 Relative Write Throughput: 0 00:09:28.428 Relative Write Latency: 0 00:09:28.428 Idle Power: Not 
Reported 00:09:28.428 Active Power: Not Reported 00:09:28.428 Non-Operational Permissive Mode: Not Supported 00:09:28.428 00:09:28.428 Health Information 00:09:28.428 ================== 00:09:28.428 Critical Warnings: 00:09:28.428 Available Spare Space: OK 00:09:28.428 Temperature: OK 00:09:28.428 Device Reliability: OK 00:09:28.428 Read Only: No 00:09:28.428 Volatile Memory Backup: OK 00:09:28.428 Current Temperature: 323 Kelvin (50 Celsius) 00:09:28.428 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:28.428 Available Spare: 0% 00:09:28.428 Available Spare Threshold: 0% 00:09:28.428 Life Percentage Used: 0% 00:09:28.428 Data Units Read: 1467 00:09:28.428 Data Units Written: 685 00:09:28.428 Host Read Commands: 62490 00:09:28.428 Host Write Commands: 30779 00:09:28.428 Controller Busy Time: 0 minutes 00:09:28.428 Power Cycles: 0 00:09:28.428 Power On Hours: 0 hours 00:09:28.428 Unsafe Shutdowns: 0 00:09:28.428 Unrecoverable Media Errors: 0 00:09:28.428 Lifetime Error Log Entries: 0 00:09:28.428 Warning Temperature Time: 0 minutes 00:09:28.428 Critical Temperature Time: 0 minutes 00:09:28.428 00:09:28.428 Number of Queues 00:09:28.428 ================ 00:09:28.428 Number of I/O Submission Queues: 64 00:09:28.428 Number of I/O Completion Queues: 64 00:09:28.428 00:09:28.428 ZNS Specific Controller Data 00:09:28.428 ============================ 00:09:28.428 Zone Append Size Limit: 0 00:09:28.428 00:09:28.428 00:09:28.428 Active Namespaces 00:09:28.428 ================= 00:09:28.428 Namespace ID:1 00:09:28.428 Error Recovery Timeout: Unlimited 00:09:28.428 Command Set Identifier: NVM (00h) 00:09:28.428 Deallocate: Supported 00:09:28.428 Deallocated/Unwritten Error: Supported 00:09:28.428 Deallocated Read Value: All 0x00 00:09:28.428 Deallocate in Write Zeroes: Not Supported 00:09:28.428 Deallocated Guard Field: 0xFFFF 00:09:28.428 Flush: Supported 00:09:28.428 Reservation: Not Supported 00:09:28.428 Namespace Sharing Capabilities: Multiple Controllers 00:09:28.428 Size (in LBAs): 262144 (1GiB) 00:09:28.428 Capacity (in LBAs): 262144 (1GiB) 00:09:28.428 Utilization (in LBAs): 262144 (1GiB) 00:09:28.428 Thin Provisioning: Not Supported 00:09:28.428 Per-NS Atomic Units: No 00:09:28.428 Maximum Single Source Range Length: 128 00:09:28.428 Maximum Copy Length: 128 00:09:28.428 Maximum Source Range Count: 128 00:09:28.428 NGUID/EUI64 Never Reused: No 00:09:28.428 Namespace Write Protected: No 00:09:28.428 Endurance group ID: 1 00:09:28.428 Number of LBA Formats: 8 00:09:28.428 Current LBA Format: LBA Format #04 00:09:28.428 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:28.428 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:28.428 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:28.428 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:28.428 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:28.428 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:28.428 LBA Format #06: Data Si[2024-12-13 23:42:59.059977] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 63485 terminated unexpected 00:09:28.428 ze: 4096 Metadata Size: 16 00:09:28.428 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:28.428 00:09:28.428 Get Feature FDP: 00:09:28.428 ================ 00:09:28.428 Enabled: Yes 00:09:28.428 FDP configuration index: 0 00:09:28.428 00:09:28.428 FDP configurations log page 00:09:28.428 =========================== 00:09:28.428 Number of FDP configurations: 1 00:09:28.428 Version: 0 00:09:28.428 Size: 112 00:09:28.428 FDP 
Configuration Descriptor: 0 00:09:28.428 Descriptor Size: 96 00:09:28.428 Reclaim Group Identifier format: 2 00:09:28.428 FDP Volatile Write Cache: Not Present 00:09:28.428 FDP Configuration: Valid 00:09:28.428 Vendor Specific Size: 0 00:09:28.428 Number of Reclaim Groups: 2 00:09:28.428 Number of Recalim Unit Handles: 8 00:09:28.428 Max Placement Identifiers: 128 00:09:28.428 Number of Namespaces Suppprted: 256 00:09:28.428 Reclaim unit Nominal Size: 6000000 bytes 00:09:28.428 Estimated Reclaim Unit Time Limit: Not Reported 00:09:28.428 RUH Desc #000: RUH Type: Initially Isolated 00:09:28.428 RUH Desc #001: RUH Type: Initially Isolated 00:09:28.428 RUH Desc #002: RUH Type: Initially Isolated 00:09:28.428 RUH Desc #003: RUH Type: Initially Isolated 00:09:28.428 RUH Desc #004: RUH Type: Initially Isolated 00:09:28.428 RUH Desc #005: RUH Type: Initially Isolated 00:09:28.428 RUH Desc #006: RUH Type: Initially Isolated 00:09:28.428 RUH Desc #007: RUH Type: Initially Isolated 00:09:28.428 00:09:28.428 FDP reclaim unit handle usage log page 00:09:28.428 ====================================== 00:09:28.428 Number of Reclaim Unit Handles: 8 00:09:28.428 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:28.428 RUH Usage Desc #001: RUH Attributes: Unused 00:09:28.428 RUH Usage Desc #002: RUH Attributes: Unused 00:09:28.428 RUH Usage Desc #003: RUH Attributes: Unused 00:09:28.428 RUH Usage Desc #004: RUH Attributes: Unused 00:09:28.428 RUH Usage Desc #005: RUH Attributes: Unused 00:09:28.428 RUH Usage Desc #006: RUH Attributes: Unused 00:09:28.428 RUH Usage Desc #007: RUH Attributes: Unused 00:09:28.428 00:09:28.428 FDP statistics log page 00:09:28.428 ======================= 00:09:28.428 Host bytes with metadata written: 441798656 00:09:28.428 Media bytes with metadata written: 441909248 00:09:28.428 Media bytes erased: 0 00:09:28.428 00:09:28.428 FDP events log page 00:09:28.428 =================== 00:09:28.428 Number of FDP events: 0 00:09:28.428 00:09:28.428 ===================================================== 00:09:28.429 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:28.429 ===================================================== 00:09:28.429 Controller Capabilities/Features 00:09:28.429 ================================ 00:09:28.429 Vendor ID: 1b36 00:09:28.429 Subsystem Vendor ID: 1af4 00:09:28.429 Serial Number: 12342 00:09:28.429 Model Number: QEMU NVMe Ctrl 00:09:28.429 Firmware Version: 8.0.0 00:09:28.429 Recommended Arb Burst: 6 00:09:28.429 IEEE OUI Identifier: 00 54 52 00:09:28.429 Multi-path I/O 00:09:28.429 May have multiple subsystem ports: No 00:09:28.429 May have multiple controllers: No 00:09:28.429 Associated with SR-IOV VF: No 00:09:28.429 Max Data Transfer Size: 524288 00:09:28.429 Max Number of Namespaces: 256 00:09:28.429 Max Number of I/O Queues: 64 00:09:28.429 NVMe Specification Version (VS): 1.4 00:09:28.429 NVMe Specification Version (Identify): 1.4 00:09:28.429 Maximum Queue Entries: 2048 00:09:28.429 Contiguous Queues Required: Yes 00:09:28.429 Arbitration Mechanisms Supported 00:09:28.429 Weighted Round Robin: Not Supported 00:09:28.429 Vendor Specific: Not Supported 00:09:28.429 Reset Timeout: 7500 ms 00:09:28.429 Doorbell Stride: 4 bytes 00:09:28.429 NVM Subsystem Reset: Not Supported 00:09:28.429 Command Sets Supported 00:09:28.429 NVM Command Set: Supported 00:09:28.429 Boot Partition: Not Supported 00:09:28.429 Memory Page Size Minimum: 4096 bytes 00:09:28.429 Memory Page Size Maximum: 65536 bytes 00:09:28.429 Persistent Memory Region: Not 
Supported 00:09:28.429 Optional Asynchronous Events Supported 00:09:28.429 Namespace Attribute Notices: Supported 00:09:28.429 Firmware Activation Notices: Not Supported 00:09:28.429 ANA Change Notices: Not Supported 00:09:28.429 PLE Aggregate Log Change Notices: Not Supported 00:09:28.429 LBA Status Info Alert Notices: Not Supported 00:09:28.429 EGE Aggregate Log Change Notices: Not Supported 00:09:28.429 Normal NVM Subsystem Shutdown event: Not Supported 00:09:28.429 Zone Descriptor Change Notices: Not Supported 00:09:28.429 Discovery Log Change Notices: Not Supported 00:09:28.429 Controller Attributes 00:09:28.429 128-bit Host Identifier: Not Supported 00:09:28.429 Non-Operational Permissive Mode: Not Supported 00:09:28.429 NVM Sets: Not Supported 00:09:28.429 Read Recovery Levels: Not Supported 00:09:28.429 Endurance Groups: Not Supported 00:09:28.429 Predictable Latency Mode: Not Supported 00:09:28.429 Traffic Based Keep ALive: Not Supported 00:09:28.429 Namespace Granularity: Not Supported 00:09:28.429 SQ Associations: Not Supported 00:09:28.429 UUID List: Not Supported 00:09:28.429 Multi-Domain Subsystem: Not Supported 00:09:28.429 Fixed Capacity Management: Not Supported 00:09:28.429 Variable Capacity Management: Not Supported 00:09:28.429 Delete Endurance Group: Not Supported 00:09:28.429 Delete NVM Set: Not Supported 00:09:28.429 Extended LBA Formats Supported: Supported 00:09:28.429 Flexible Data Placement Supported: Not Supported 00:09:28.429 00:09:28.429 Controller Memory Buffer Support 00:09:28.429 ================================ 00:09:28.429 Supported: No 00:09:28.429 00:09:28.429 Persistent Memory Region Support 00:09:28.429 ================================ 00:09:28.429 Supported: No 00:09:28.429 00:09:28.429 Admin Command Set Attributes 00:09:28.429 ============================ 00:09:28.429 Security Send/Receive: Not Supported 00:09:28.429 Format NVM: Supported 00:09:28.429 Firmware Activate/Download: Not Supported 00:09:28.429 Namespace Management: Supported 00:09:28.429 Device Self-Test: Not Supported 00:09:28.429 Directives: Supported 00:09:28.429 NVMe-MI: Not Supported 00:09:28.429 Virtualization Management: Not Supported 00:09:28.429 Doorbell Buffer Config: Supported 00:09:28.429 Get LBA Status Capability: Not Supported 00:09:28.429 Command & Feature Lockdown Capability: Not Supported 00:09:28.429 Abort Command Limit: 4 00:09:28.429 Async Event Request Limit: 4 00:09:28.429 Number of Firmware Slots: N/A 00:09:28.429 Firmware Slot 1 Read-Only: N/A 00:09:28.429 Firmware Activation Without Reset: N/A 00:09:28.429 Multiple Update Detection Support: N/A 00:09:28.429 Firmware Update Granularity: No Information Provided 00:09:28.429 Per-Namespace SMART Log: Yes 00:09:28.429 Asymmetric Namespace Access Log Page: Not Supported 00:09:28.429 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:28.429 Command Effects Log Page: Supported 00:09:28.429 Get Log Page Extended Data: Supported 00:09:28.429 Telemetry Log Pages: Not Supported 00:09:28.429 Persistent Event Log Pages: Not Supported 00:09:28.429 Supported Log Pages Log Page: May Support 00:09:28.429 Commands Supported & Effects Log Page: Not Supported 00:09:28.429 Feature Identifiers & Effects Log Page:May Support 00:09:28.429 NVMe-MI Commands & Effects Log Page: May Support 00:09:28.429 Data Area 4 for Telemetry Log: Not Supported 00:09:28.429 Error Log Page Entries Supported: 1 00:09:28.429 Keep Alive: Not Supported 00:09:28.429 00:09:28.429 NVM Command Set Attributes 00:09:28.429 ========================== 00:09:28.429 
Submission Queue Entry Size 00:09:28.429 Max: 64 00:09:28.429 Min: 64 00:09:28.429 Completion Queue Entry Size 00:09:28.429 Max: 16 00:09:28.429 Min: 16 00:09:28.429 Number of Namespaces: 256 00:09:28.429 Compare Command: Supported 00:09:28.429 Write Uncorrectable Command: Not Supported 00:09:28.429 Dataset Management Command: Supported 00:09:28.429 Write Zeroes Command: Supported 00:09:28.429 Set Features Save Field: Supported 00:09:28.429 Reservations: Not Supported 00:09:28.429 Timestamp: Supported 00:09:28.429 Copy: Supported 00:09:28.429 Volatile Write Cache: Present 00:09:28.429 Atomic Write Unit (Normal): 1 00:09:28.429 Atomic Write Unit (PFail): 1 00:09:28.429 Atomic Compare & Write Unit: 1 00:09:28.429 Fused Compare & Write: Not Supported 00:09:28.429 Scatter-Gather List 00:09:28.429 SGL Command Set: Supported 00:09:28.429 SGL Keyed: Not Supported 00:09:28.430 SGL Bit Bucket Descriptor: Not Supported 00:09:28.430 SGL Metadata Pointer: Not Supported 00:09:28.430 Oversized SGL: Not Supported 00:09:28.430 SGL Metadata Address: Not Supported 00:09:28.430 SGL Offset: Not Supported 00:09:28.430 Transport SGL Data Block: Not Supported 00:09:28.430 Replay Protected Memory Block: Not Supported 00:09:28.430 00:09:28.430 Firmware Slot Information 00:09:28.430 ========================= 00:09:28.430 Active slot: 1 00:09:28.430 Slot 1 Firmware Revision: 1.0 00:09:28.430 00:09:28.430 00:09:28.430 Commands Supported and Effects 00:09:28.430 ============================== 00:09:28.430 Admin Commands 00:09:28.430 -------------- 00:09:28.430 Delete I/O Submission Queue (00h): Supported 00:09:28.430 Create I/O Submission Queue (01h): Supported 00:09:28.430 Get Log Page (02h): Supported 00:09:28.430 Delete I/O Completion Queue (04h): Supported 00:09:28.430 Create I/O Completion Queue (05h): Supported 00:09:28.430 Identify (06h): Supported 00:09:28.430 Abort (08h): Supported 00:09:28.430 Set Features (09h): Supported 00:09:28.430 Get Features (0Ah): Supported 00:09:28.430 Asynchronous Event Request (0Ch): Supported 00:09:28.430 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:28.430 Directive Send (19h): Supported 00:09:28.430 Directive Receive (1Ah): Supported 00:09:28.430 Virtualization Management (1Ch): Supported 00:09:28.430 Doorbell Buffer Config (7Ch): Supported 00:09:28.430 Format NVM (80h): Supported LBA-Change 00:09:28.430 I/O Commands 00:09:28.430 ------------ 00:09:28.430 Flush (00h): Supported LBA-Change 00:09:28.430 Write (01h): Supported LBA-Change 00:09:28.430 Read (02h): Supported 00:09:28.430 Compare (05h): Supported 00:09:28.430 Write Zeroes (08h): Supported LBA-Change 00:09:28.430 Dataset Management (09h): Supported LBA-Change 00:09:28.430 Unknown (0Ch): Supported 00:09:28.430 Unknown (12h): Supported 00:09:28.430 Copy (19h): Supported LBA-Change 00:09:28.430 Unknown (1Dh): Supported LBA-Change 00:09:28.430 00:09:28.430 Error Log 00:09:28.430 ========= 00:09:28.430 00:09:28.430 Arbitration 00:09:28.430 =========== 00:09:28.430 Arbitration Burst: no limit 00:09:28.430 00:09:28.430 Power Management 00:09:28.430 ================ 00:09:28.430 Number of Power States: 1 00:09:28.430 Current Power State: Power State #0 00:09:28.430 Power State #0: 00:09:28.430 Max Power: 25.00 W 00:09:28.430 Non-Operational State: Operational 00:09:28.430 Entry Latency: 16 microseconds 00:09:28.430 Exit Latency: 4 microseconds 00:09:28.430 Relative Read Throughput: 0 00:09:28.430 Relative Read Latency: 0 00:09:28.430 Relative Write Throughput: 0 00:09:28.430 Relative Write Latency: 0 
00:09:28.430 Idle Power: Not Reported 00:09:28.430 Active Power: Not Reported 00:09:28.430 Non-Operational Permissive Mode: Not Supported 00:09:28.430 00:09:28.430 Health Information 00:09:28.430 ================== 00:09:28.430 Critical Warnings: 00:09:28.430 Available Spare Space: OK 00:09:28.430 Temperature: OK 00:09:28.430 Device Reliability: OK 00:09:28.430 Read Only: No 00:09:28.430 Volatile Memory Backup: OK 00:09:28.430 Current Temperature: 323 Kelvin (50 Celsius) 00:09:28.430 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:28.430 Available Spare: 0% 00:09:28.430 Available Spare Threshold: 0% 00:09:28.430 Life Percentage Used: 0% 00:09:28.430 Data Units Read: 3901 00:09:28.430 Data Units Written: 1801 00:09:28.430 Host Read Commands: 183763 00:09:28.430 Host Write Commands: 90336 00:09:28.430 Controller Busy Time: 0 minutes 00:09:28.430 Power Cycles: 0 00:09:28.430 Power On Hours: 0 hours 00:09:28.430 Unsafe Shutdowns: 0 00:09:28.430 Unrecoverable Media Errors: 0 00:09:28.430 Lifetime Error Log Entries: 0 00:09:28.430 Warning Temperature Time: 0 minutes 00:09:28.430 Critical Temperature Time: 0 minutes 00:09:28.430 00:09:28.430 Number of Queues 00:09:28.430 ================ 00:09:28.430 Number of I/O Submission Queues: 64 00:09:28.430 Number of I/O Completion Queues: 64 00:09:28.430 00:09:28.430 ZNS Specific Controller Data 00:09:28.430 ============================ 00:09:28.430 Zone Append Size Limit: 0 00:09:28.430 00:09:28.430 00:09:28.430 Active Namespaces 00:09:28.430 ================= 00:09:28.430 Namespace ID:1 00:09:28.430 Error Recovery Timeout: Unlimited 00:09:28.430 Command Set Identifier: NVM (00h) 00:09:28.430 Deallocate: Supported 00:09:28.430 Deallocated/Unwritten Error: Supported 00:09:28.430 Deallocated Read Value: All 0x00 00:09:28.430 Deallocate in Write Zeroes: Not Supported 00:09:28.430 Deallocated Guard Field: 0xFFFF 00:09:28.430 Flush: Supported 00:09:28.430 Reservation: Not Supported 00:09:28.430 Namespace Sharing Capabilities: Private 00:09:28.430 Size (in LBAs): 1048576 (4GiB) 00:09:28.430 Capacity (in LBAs): 1048576 (4GiB) 00:09:28.430 Utilization (in LBAs): 1048576 (4GiB) 00:09:28.430 Thin Provisioning: Not Supported 00:09:28.430 Per-NS Atomic Units: No 00:09:28.430 Maximum Single Source Range Length: 128 00:09:28.430 Maximum Copy Length: 128 00:09:28.430 Maximum Source Range Count: 128 00:09:28.430 NGUID/EUI64 Never Reused: No 00:09:28.430 Namespace Write Protected: No 00:09:28.430 Number of LBA Formats: 8 00:09:28.430 Current LBA Format: LBA Format #04 00:09:28.430 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:28.430 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:28.430 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:28.430 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:28.430 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:28.430 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:28.430 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:28.430 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:28.430 00:09:28.430 Namespace ID:2 00:09:28.430 Error Recovery Timeout: Unlimited 00:09:28.430 Command Set Identifier: NVM (00h) 00:09:28.430 Deallocate: Supported 00:09:28.430 Deallocated/Unwritten Error: Supported 00:09:28.430 Deallocated Read Value: All 0x00 00:09:28.430 Deallocate in Write Zeroes: Not Supported 00:09:28.430 Deallocated Guard Field: 0xFFFF 00:09:28.430 Flush: Supported 00:09:28.430 Reservation: Not Supported 00:09:28.430 Namespace Sharing Capabilities: Private 00:09:28.430 Size (in LBAs): 
1048576 (4GiB) 00:09:28.430 Capacity (in LBAs): 1048576 (4GiB) 00:09:28.430 Utilization (in LBAs): 1048576 (4GiB) 00:09:28.430 Thin Provisioning: Not Supported 00:09:28.430 Per-NS Atomic Units: No 00:09:28.430 Maximum Single Source Range Length: 128 00:09:28.430 Maximum Copy Length: 128 00:09:28.430 Maximum Source Range Count: 128 00:09:28.430 NGUID/EUI64 Never Reused: No 00:09:28.430 Namespace Write Protected: No 00:09:28.430 Number of LBA Formats: 8 00:09:28.430 Current LBA Format: LBA Format #04 00:09:28.430 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:28.430 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:28.430 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:28.430 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:28.430 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:28.430 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:28.430 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:28.430 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:28.430 00:09:28.431 Namespace ID:3 00:09:28.431 Error Recovery Timeout: Unlimited 00:09:28.431 Command Set Identifier: NVM (00h) 00:09:28.431 Deallocate: Supported 00:09:28.431 Deallocated/Unwritten Error: Supported 00:09:28.431 Deallocated Read Value: All 0x00 00:09:28.431 Deallocate in Write Zeroes: Not Supported 00:09:28.431 Deallocated Guard Field: 0xFFFF 00:09:28.431 Flush: Supported 00:09:28.431 Reservation: Not Supported 00:09:28.431 Namespace Sharing Capabilities: Private 00:09:28.431 Size (in LBAs): 1048576 (4GiB) 00:09:28.431 Capacity (in LBAs): 1048576 (4GiB) 00:09:28.431 Utilization (in LBAs): 1048576 (4GiB) 00:09:28.431 Thin Provisioning: Not Supported 00:09:28.431 Per-NS Atomic Units: No 00:09:28.431 Maximum Single Source Range Length: 128 00:09:28.431 Maximum Copy Length: 128 00:09:28.431 Maximum Source Range Count: 128 00:09:28.431 NGUID/EUI64 Never Reused: No 00:09:28.431 Namespace Write Protected: No 00:09:28.431 Number of LBA Formats: 8 00:09:28.431 Current LBA Format: LBA Format #04 00:09:28.431 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:28.431 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:28.431 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:28.431 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:28.431 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:28.431 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:28.431 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:28.431 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:28.431 00:09:28.431 23:42:59 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:28.431 23:42:59 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:09:28.690 ===================================================== 00:09:28.690 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:28.690 ===================================================== 00:09:28.690 Controller Capabilities/Features 00:09:28.690 ================================ 00:09:28.690 Vendor ID: 1b36 00:09:28.690 Subsystem Vendor ID: 1af4 00:09:28.690 Serial Number: 12340 00:09:28.690 Model Number: QEMU NVMe Ctrl 00:09:28.690 Firmware Version: 8.0.0 00:09:28.690 Recommended Arb Burst: 6 00:09:28.690 IEEE OUI Identifier: 00 54 52 00:09:28.690 Multi-path I/O 00:09:28.690 May have multiple subsystem ports: No 00:09:28.690 May have multiple controllers: No 00:09:28.690 Associated with SR-IOV VF: No 00:09:28.690 Max Data Transfer Size: 524288 00:09:28.690 Max Number of Namespaces: 256 
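Each of these dumps comes from the nvme.sh loop visible in the trace, one spdk_nvme_identify call per PCI address. To reproduce a single dump outside the test harness, the same binary and transport string can be invoked directly; a sketch, assuming the SPDK build tree and hugepage setup used by this job are in place:

  # One-off identify dump for the controller at 0000:00:06.0, mirroring the
  # command the test script runs above; typically requires root privileges.
  sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:PCIe traddr:0000:00:06.0' -i 0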
00:09:28.690 Max Number of I/O Queues: 64 00:09:28.690 NVMe Specification Version (VS): 1.4 00:09:28.690 NVMe Specification Version (Identify): 1.4 00:09:28.690 Maximum Queue Entries: 2048 00:09:28.690 Contiguous Queues Required: Yes 00:09:28.690 Arbitration Mechanisms Supported 00:09:28.690 Weighted Round Robin: Not Supported 00:09:28.690 Vendor Specific: Not Supported 00:09:28.690 Reset Timeout: 7500 ms 00:09:28.690 Doorbell Stride: 4 bytes 00:09:28.690 NVM Subsystem Reset: Not Supported 00:09:28.690 Command Sets Supported 00:09:28.690 NVM Command Set: Supported 00:09:28.690 Boot Partition: Not Supported 00:09:28.690 Memory Page Size Minimum: 4096 bytes 00:09:28.690 Memory Page Size Maximum: 65536 bytes 00:09:28.690 Persistent Memory Region: Not Supported 00:09:28.690 Optional Asynchronous Events Supported 00:09:28.690 Namespace Attribute Notices: Supported 00:09:28.690 Firmware Activation Notices: Not Supported 00:09:28.690 ANA Change Notices: Not Supported 00:09:28.690 PLE Aggregate Log Change Notices: Not Supported 00:09:28.690 LBA Status Info Alert Notices: Not Supported 00:09:28.691 EGE Aggregate Log Change Notices: Not Supported 00:09:28.691 Normal NVM Subsystem Shutdown event: Not Supported 00:09:28.691 Zone Descriptor Change Notices: Not Supported 00:09:28.691 Discovery Log Change Notices: Not Supported 00:09:28.691 Controller Attributes 00:09:28.691 128-bit Host Identifier: Not Supported 00:09:28.691 Non-Operational Permissive Mode: Not Supported 00:09:28.691 NVM Sets: Not Supported 00:09:28.691 Read Recovery Levels: Not Supported 00:09:28.691 Endurance Groups: Not Supported 00:09:28.691 Predictable Latency Mode: Not Supported 00:09:28.691 Traffic Based Keep ALive: Not Supported 00:09:28.691 Namespace Granularity: Not Supported 00:09:28.691 SQ Associations: Not Supported 00:09:28.691 UUID List: Not Supported 00:09:28.691 Multi-Domain Subsystem: Not Supported 00:09:28.691 Fixed Capacity Management: Not Supported 00:09:28.691 Variable Capacity Management: Not Supported 00:09:28.691 Delete Endurance Group: Not Supported 00:09:28.691 Delete NVM Set: Not Supported 00:09:28.691 Extended LBA Formats Supported: Supported 00:09:28.691 Flexible Data Placement Supported: Not Supported 00:09:28.691 00:09:28.691 Controller Memory Buffer Support 00:09:28.691 ================================ 00:09:28.691 Supported: No 00:09:28.691 00:09:28.691 Persistent Memory Region Support 00:09:28.691 ================================ 00:09:28.691 Supported: No 00:09:28.691 00:09:28.691 Admin Command Set Attributes 00:09:28.691 ============================ 00:09:28.691 Security Send/Receive: Not Supported 00:09:28.691 Format NVM: Supported 00:09:28.691 Firmware Activate/Download: Not Supported 00:09:28.691 Namespace Management: Supported 00:09:28.691 Device Self-Test: Not Supported 00:09:28.691 Directives: Supported 00:09:28.691 NVMe-MI: Not Supported 00:09:28.691 Virtualization Management: Not Supported 00:09:28.691 Doorbell Buffer Config: Supported 00:09:28.691 Get LBA Status Capability: Not Supported 00:09:28.691 Command & Feature Lockdown Capability: Not Supported 00:09:28.691 Abort Command Limit: 4 00:09:28.691 Async Event Request Limit: 4 00:09:28.691 Number of Firmware Slots: N/A 00:09:28.691 Firmware Slot 1 Read-Only: N/A 00:09:28.691 Firmware Activation Without Reset: N/A 00:09:28.691 Multiple Update Detection Support: N/A 00:09:28.691 Firmware Update Granularity: No Information Provided 00:09:28.691 Per-Namespace SMART Log: Yes 00:09:28.691 Asymmetric Namespace Access Log Page: Not Supported 
00:09:28.691 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:28.691 Command Effects Log Page: Supported 00:09:28.691 Get Log Page Extended Data: Supported 00:09:28.691 Telemetry Log Pages: Not Supported 00:09:28.691 Persistent Event Log Pages: Not Supported 00:09:28.691 Supported Log Pages Log Page: May Support 00:09:28.691 Commands Supported & Effects Log Page: Not Supported 00:09:28.691 Feature Identifiers & Effects Log Page:May Support 00:09:28.691 NVMe-MI Commands & Effects Log Page: May Support 00:09:28.691 Data Area 4 for Telemetry Log: Not Supported 00:09:28.691 Error Log Page Entries Supported: 1 00:09:28.691 Keep Alive: Not Supported 00:09:28.691 00:09:28.691 NVM Command Set Attributes 00:09:28.691 ========================== 00:09:28.691 Submission Queue Entry Size 00:09:28.691 Max: 64 00:09:28.691 Min: 64 00:09:28.691 Completion Queue Entry Size 00:09:28.691 Max: 16 00:09:28.691 Min: 16 00:09:28.691 Number of Namespaces: 256 00:09:28.691 Compare Command: Supported 00:09:28.691 Write Uncorrectable Command: Not Supported 00:09:28.691 Dataset Management Command: Supported 00:09:28.691 Write Zeroes Command: Supported 00:09:28.691 Set Features Save Field: Supported 00:09:28.691 Reservations: Not Supported 00:09:28.691 Timestamp: Supported 00:09:28.691 Copy: Supported 00:09:28.691 Volatile Write Cache: Present 00:09:28.691 Atomic Write Unit (Normal): 1 00:09:28.691 Atomic Write Unit (PFail): 1 00:09:28.691 Atomic Compare & Write Unit: 1 00:09:28.691 Fused Compare & Write: Not Supported 00:09:28.691 Scatter-Gather List 00:09:28.691 SGL Command Set: Supported 00:09:28.691 SGL Keyed: Not Supported 00:09:28.691 SGL Bit Bucket Descriptor: Not Supported 00:09:28.691 SGL Metadata Pointer: Not Supported 00:09:28.691 Oversized SGL: Not Supported 00:09:28.691 SGL Metadata Address: Not Supported 00:09:28.691 SGL Offset: Not Supported 00:09:28.691 Transport SGL Data Block: Not Supported 00:09:28.691 Replay Protected Memory Block: Not Supported 00:09:28.691 00:09:28.691 Firmware Slot Information 00:09:28.691 ========================= 00:09:28.691 Active slot: 1 00:09:28.691 Slot 1 Firmware Revision: 1.0 00:09:28.691 00:09:28.691 00:09:28.691 Commands Supported and Effects 00:09:28.691 ============================== 00:09:28.691 Admin Commands 00:09:28.691 -------------- 00:09:28.691 Delete I/O Submission Queue (00h): Supported 00:09:28.691 Create I/O Submission Queue (01h): Supported 00:09:28.691 Get Log Page (02h): Supported 00:09:28.691 Delete I/O Completion Queue (04h): Supported 00:09:28.691 Create I/O Completion Queue (05h): Supported 00:09:28.691 Identify (06h): Supported 00:09:28.691 Abort (08h): Supported 00:09:28.691 Set Features (09h): Supported 00:09:28.691 Get Features (0Ah): Supported 00:09:28.691 Asynchronous Event Request (0Ch): Supported 00:09:28.691 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:28.691 Directive Send (19h): Supported 00:09:28.691 Directive Receive (1Ah): Supported 00:09:28.691 Virtualization Management (1Ch): Supported 00:09:28.691 Doorbell Buffer Config (7Ch): Supported 00:09:28.691 Format NVM (80h): Supported LBA-Change 00:09:28.691 I/O Commands 00:09:28.691 ------------ 00:09:28.691 Flush (00h): Supported LBA-Change 00:09:28.691 Write (01h): Supported LBA-Change 00:09:28.691 Read (02h): Supported 00:09:28.691 Compare (05h): Supported 00:09:28.691 Write Zeroes (08h): Supported LBA-Change 00:09:28.691 Dataset Management (09h): Supported LBA-Change 00:09:28.691 Unknown (0Ch): Supported 00:09:28.691 Unknown (12h): Supported 00:09:28.691 Copy (19h): 
Supported LBA-Change 00:09:28.691 Unknown (1Dh): Supported LBA-Change 00:09:28.691 00:09:28.691 Error Log 00:09:28.691 ========= 00:09:28.691 00:09:28.691 Arbitration 00:09:28.691 =========== 00:09:28.691 Arbitration Burst: no limit 00:09:28.691 00:09:28.691 Power Management 00:09:28.691 ================ 00:09:28.691 Number of Power States: 1 00:09:28.691 Current Power State: Power State #0 00:09:28.691 Power State #0: 00:09:28.691 Max Power: 25.00 W 00:09:28.691 Non-Operational State: Operational 00:09:28.691 Entry Latency: 16 microseconds 00:09:28.691 Exit Latency: 4 microseconds 00:09:28.691 Relative Read Throughput: 0 00:09:28.691 Relative Read Latency: 0 00:09:28.691 Relative Write Throughput: 0 00:09:28.691 Relative Write Latency: 0 00:09:28.691 Idle Power: Not Reported 00:09:28.691 Active Power: Not Reported 00:09:28.691 Non-Operational Permissive Mode: Not Supported 00:09:28.691 00:09:28.691 Health Information 00:09:28.691 ================== 00:09:28.691 Critical Warnings: 00:09:28.691 Available Spare Space: OK 00:09:28.691 Temperature: OK 00:09:28.691 Device Reliability: OK 00:09:28.691 Read Only: No 00:09:28.691 Volatile Memory Backup: OK 00:09:28.691 Current Temperature: 323 Kelvin (50 Celsius) 00:09:28.692 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:28.692 Available Spare: 0% 00:09:28.692 Available Spare Threshold: 0% 00:09:28.692 Life Percentage Used: 0% 00:09:28.692 Data Units Read: 1804 00:09:28.692 Data Units Written: 834 00:09:28.692 Host Read Commands: 89709 00:09:28.692 Host Write Commands: 44665 00:09:28.692 Controller Busy Time: 0 minutes 00:09:28.692 Power Cycles: 0 00:09:28.692 Power On Hours: 0 hours 00:09:28.692 Unsafe Shutdowns: 0 00:09:28.692 Unrecoverable Media Errors: 0 00:09:28.692 Lifetime Error Log Entries: 0 00:09:28.692 Warning Temperature Time: 0 minutes 00:09:28.692 Critical Temperature Time: 0 minutes 00:09:28.692 00:09:28.692 Number of Queues 00:09:28.692 ================ 00:09:28.692 Number of I/O Submission Queues: 64 00:09:28.692 Number of I/O Completion Queues: 64 00:09:28.692 00:09:28.692 ZNS Specific Controller Data 00:09:28.692 ============================ 00:09:28.692 Zone Append Size Limit: 0 00:09:28.692 00:09:28.692 00:09:28.692 Active Namespaces 00:09:28.692 ================= 00:09:28.692 Namespace ID:1 00:09:28.692 Error Recovery Timeout: Unlimited 00:09:28.692 Command Set Identifier: NVM (00h) 00:09:28.692 Deallocate: Supported 00:09:28.692 Deallocated/Unwritten Error: Supported 00:09:28.692 Deallocated Read Value: All 0x00 00:09:28.692 Deallocate in Write Zeroes: Not Supported 00:09:28.692 Deallocated Guard Field: 0xFFFF 00:09:28.692 Flush: Supported 00:09:28.692 Reservation: Not Supported 00:09:28.692 Metadata Transferred as: Separate Metadata Buffer 00:09:28.692 Namespace Sharing Capabilities: Private 00:09:28.692 Size (in LBAs): 1548666 (5GiB) 00:09:28.692 Capacity (in LBAs): 1548666 (5GiB) 00:09:28.692 Utilization (in LBAs): 1548666 (5GiB) 00:09:28.692 Thin Provisioning: Not Supported 00:09:28.692 Per-NS Atomic Units: No 00:09:28.692 Maximum Single Source Range Length: 128 00:09:28.692 Maximum Copy Length: 128 00:09:28.692 Maximum Source Range Count: 128 00:09:28.692 NGUID/EUI64 Never Reused: No 00:09:28.692 Namespace Write Protected: No 00:09:28.692 Number of LBA Formats: 8 00:09:28.692 Current LBA Format: LBA Format #07 00:09:28.692 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:28.692 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:28.692 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:28.692 LBA 
Format #03: Data Size: 512 Metadata Size: 64 00:09:28.692 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:28.692 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:28.692 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:28.692 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:28.692 00:09:28.692 23:42:59 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:28.692 23:42:59 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:09:28.951 ===================================================== 00:09:28.951 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:28.951 ===================================================== 00:09:28.951 Controller Capabilities/Features 00:09:28.951 ================================ 00:09:28.951 Vendor ID: 1b36 00:09:28.951 Subsystem Vendor ID: 1af4 00:09:28.951 Serial Number: 12341 00:09:28.951 Model Number: QEMU NVMe Ctrl 00:09:28.951 Firmware Version: 8.0.0 00:09:28.951 Recommended Arb Burst: 6 00:09:28.951 IEEE OUI Identifier: 00 54 52 00:09:28.951 Multi-path I/O 00:09:28.951 May have multiple subsystem ports: No 00:09:28.951 May have multiple controllers: No 00:09:28.951 Associated with SR-IOV VF: No 00:09:28.951 Max Data Transfer Size: 524288 00:09:28.951 Max Number of Namespaces: 256 00:09:28.951 Max Number of I/O Queues: 64 00:09:28.951 NVMe Specification Version (VS): 1.4 00:09:28.951 NVMe Specification Version (Identify): 1.4 00:09:28.951 Maximum Queue Entries: 2048 00:09:28.951 Contiguous Queues Required: Yes 00:09:28.951 Arbitration Mechanisms Supported 00:09:28.951 Weighted Round Robin: Not Supported 00:09:28.951 Vendor Specific: Not Supported 00:09:28.951 Reset Timeout: 7500 ms 00:09:28.951 Doorbell Stride: 4 bytes 00:09:28.951 NVM Subsystem Reset: Not Supported 00:09:28.951 Command Sets Supported 00:09:28.951 NVM Command Set: Supported 00:09:28.951 Boot Partition: Not Supported 00:09:28.951 Memory Page Size Minimum: 4096 bytes 00:09:28.951 Memory Page Size Maximum: 65536 bytes 00:09:28.951 Persistent Memory Region: Not Supported 00:09:28.951 Optional Asynchronous Events Supported 00:09:28.951 Namespace Attribute Notices: Supported 00:09:28.951 Firmware Activation Notices: Not Supported 00:09:28.951 ANA Change Notices: Not Supported 00:09:28.951 PLE Aggregate Log Change Notices: Not Supported 00:09:28.951 LBA Status Info Alert Notices: Not Supported 00:09:28.951 EGE Aggregate Log Change Notices: Not Supported 00:09:28.951 Normal NVM Subsystem Shutdown event: Not Supported 00:09:28.951 Zone Descriptor Change Notices: Not Supported 00:09:28.951 Discovery Log Change Notices: Not Supported 00:09:28.951 Controller Attributes 00:09:28.951 128-bit Host Identifier: Not Supported 00:09:28.951 Non-Operational Permissive Mode: Not Supported 00:09:28.951 NVM Sets: Not Supported 00:09:28.951 Read Recovery Levels: Not Supported 00:09:28.951 Endurance Groups: Not Supported 00:09:28.951 Predictable Latency Mode: Not Supported 00:09:28.951 Traffic Based Keep ALive: Not Supported 00:09:28.951 Namespace Granularity: Not Supported 00:09:28.951 SQ Associations: Not Supported 00:09:28.951 UUID List: Not Supported 00:09:28.951 Multi-Domain Subsystem: Not Supported 00:09:28.951 Fixed Capacity Management: Not Supported 00:09:28.951 Variable Capacity Management: Not Supported 00:09:28.951 Delete Endurance Group: Not Supported 00:09:28.951 Delete NVM Set: Not Supported 00:09:28.951 Extended LBA Formats Supported: Supported 00:09:28.951 Flexible Data Placement Supported: Not Supported 
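A dump like the one above is easy to reduce to the handful of fields of interest with a text filter; the field names below are exactly as spdk_nvme_identify prints them, and identify-12341.txt is a hypothetical file holding a saved copy of this output:

  # Pull only the health counters out of a saved identify dump.
  grep -E 'Current Temperature|Temperature Threshold|Data Units (Read|Written)|Host (Read|Write) Commands' \
      identify-12341.txt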
00:09:28.951 00:09:28.951 Controller Memory Buffer Support 00:09:28.951 ================================ 00:09:28.951 Supported: No 00:09:28.951 00:09:28.951 Persistent Memory Region Support 00:09:28.951 ================================ 00:09:28.951 Supported: No 00:09:28.951 00:09:28.951 Admin Command Set Attributes 00:09:28.951 ============================ 00:09:28.951 Security Send/Receive: Not Supported 00:09:28.951 Format NVM: Supported 00:09:28.951 Firmware Activate/Download: Not Supported 00:09:28.951 Namespace Management: Supported 00:09:28.951 Device Self-Test: Not Supported 00:09:28.951 Directives: Supported 00:09:28.951 NVMe-MI: Not Supported 00:09:28.951 Virtualization Management: Not Supported 00:09:28.951 Doorbell Buffer Config: Supported 00:09:28.951 Get LBA Status Capability: Not Supported 00:09:28.951 Command & Feature Lockdown Capability: Not Supported 00:09:28.951 Abort Command Limit: 4 00:09:28.951 Async Event Request Limit: 4 00:09:28.951 Number of Firmware Slots: N/A 00:09:28.951 Firmware Slot 1 Read-Only: N/A 00:09:28.951 Firmware Activation Without Reset: N/A 00:09:28.951 Multiple Update Detection Support: N/A 00:09:28.951 Firmware Update Granularity: No Information Provided 00:09:28.952 Per-Namespace SMART Log: Yes 00:09:28.952 Asymmetric Namespace Access Log Page: Not Supported 00:09:28.952 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:28.952 Command Effects Log Page: Supported 00:09:28.952 Get Log Page Extended Data: Supported 00:09:28.952 Telemetry Log Pages: Not Supported 00:09:28.952 Persistent Event Log Pages: Not Supported 00:09:28.952 Supported Log Pages Log Page: May Support 00:09:28.952 Commands Supported & Effects Log Page: Not Supported 00:09:28.952 Feature Identifiers & Effects Log Page:May Support 00:09:28.952 NVMe-MI Commands & Effects Log Page: May Support 00:09:28.952 Data Area 4 for Telemetry Log: Not Supported 00:09:28.952 Error Log Page Entries Supported: 1 00:09:28.952 Keep Alive: Not Supported 00:09:28.952 00:09:28.952 NVM Command Set Attributes 00:09:28.952 ========================== 00:09:28.952 Submission Queue Entry Size 00:09:28.952 Max: 64 00:09:28.952 Min: 64 00:09:28.952 Completion Queue Entry Size 00:09:28.952 Max: 16 00:09:28.952 Min: 16 00:09:28.952 Number of Namespaces: 256 00:09:28.952 Compare Command: Supported 00:09:28.952 Write Uncorrectable Command: Not Supported 00:09:28.952 Dataset Management Command: Supported 00:09:28.952 Write Zeroes Command: Supported 00:09:28.952 Set Features Save Field: Supported 00:09:28.952 Reservations: Not Supported 00:09:28.952 Timestamp: Supported 00:09:28.952 Copy: Supported 00:09:28.952 Volatile Write Cache: Present 00:09:28.952 Atomic Write Unit (Normal): 1 00:09:28.952 Atomic Write Unit (PFail): 1 00:09:28.952 Atomic Compare & Write Unit: 1 00:09:28.952 Fused Compare & Write: Not Supported 00:09:28.952 Scatter-Gather List 00:09:28.952 SGL Command Set: Supported 00:09:28.952 SGL Keyed: Not Supported 00:09:28.952 SGL Bit Bucket Descriptor: Not Supported 00:09:28.952 SGL Metadata Pointer: Not Supported 00:09:28.952 Oversized SGL: Not Supported 00:09:28.952 SGL Metadata Address: Not Supported 00:09:28.952 SGL Offset: Not Supported 00:09:28.952 Transport SGL Data Block: Not Supported 00:09:28.952 Replay Protected Memory Block: Not Supported 00:09:28.952 00:09:28.952 Firmware Slot Information 00:09:28.952 ========================= 00:09:28.952 Active slot: 1 00:09:28.952 Slot 1 Firmware Revision: 1.0 00:09:28.952 00:09:28.952 00:09:28.952 Commands Supported and Effects 00:09:28.952 
============================== 00:09:28.952 Admin Commands 00:09:28.952 -------------- 00:09:28.952 Delete I/O Submission Queue (00h): Supported 00:09:28.952 Create I/O Submission Queue (01h): Supported 00:09:28.952 Get Log Page (02h): Supported 00:09:28.952 Delete I/O Completion Queue (04h): Supported 00:09:28.952 Create I/O Completion Queue (05h): Supported 00:09:28.952 Identify (06h): Supported 00:09:28.952 Abort (08h): Supported 00:09:28.952 Set Features (09h): Supported 00:09:28.952 Get Features (0Ah): Supported 00:09:28.952 Asynchronous Event Request (0Ch): Supported 00:09:28.952 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:28.952 Directive Send (19h): Supported 00:09:28.952 Directive Receive (1Ah): Supported 00:09:28.952 Virtualization Management (1Ch): Supported 00:09:28.952 Doorbell Buffer Config (7Ch): Supported 00:09:28.952 Format NVM (80h): Supported LBA-Change 00:09:28.952 I/O Commands 00:09:28.952 ------------ 00:09:28.952 Flush (00h): Supported LBA-Change 00:09:28.952 Write (01h): Supported LBA-Change 00:09:28.952 Read (02h): Supported 00:09:28.952 Compare (05h): Supported 00:09:28.952 Write Zeroes (08h): Supported LBA-Change 00:09:28.952 Dataset Management (09h): Supported LBA-Change 00:09:28.952 Unknown (0Ch): Supported 00:09:28.952 Unknown (12h): Supported 00:09:28.952 Copy (19h): Supported LBA-Change 00:09:28.952 Unknown (1Dh): Supported LBA-Change 00:09:28.952 00:09:28.952 Error Log 00:09:28.952 ========= 00:09:28.952 00:09:28.952 Arbitration 00:09:28.952 =========== 00:09:28.952 Arbitration Burst: no limit 00:09:28.952 00:09:28.952 Power Management 00:09:28.952 ================ 00:09:28.952 Number of Power States: 1 00:09:28.952 Current Power State: Power State #0 00:09:28.952 Power State #0: 00:09:28.952 Max Power: 25.00 W 00:09:28.952 Non-Operational State: Operational 00:09:28.952 Entry Latency: 16 microseconds 00:09:28.952 Exit Latency: 4 microseconds 00:09:28.952 Relative Read Throughput: 0 00:09:28.952 Relative Read Latency: 0 00:09:28.952 Relative Write Throughput: 0 00:09:28.952 Relative Write Latency: 0 00:09:28.952 Idle Power: Not Reported 00:09:28.952 Active Power: Not Reported 00:09:28.952 Non-Operational Permissive Mode: Not Supported 00:09:28.952 00:09:28.952 Health Information 00:09:28.952 ================== 00:09:28.952 Critical Warnings: 00:09:28.952 Available Spare Space: OK 00:09:28.952 Temperature: OK 00:09:28.952 Device Reliability: OK 00:09:28.952 Read Only: No 00:09:28.952 Volatile Memory Backup: OK 00:09:28.952 Current Temperature: 323 Kelvin (50 Celsius) 00:09:28.952 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:28.952 Available Spare: 0% 00:09:28.952 Available Spare Threshold: 0% 00:09:28.952 Life Percentage Used: 0% 00:09:28.952 Data Units Read: 1233 00:09:28.952 Data Units Written: 569 00:09:28.952 Host Read Commands: 60592 00:09:28.952 Host Write Commands: 29837 00:09:28.952 Controller Busy Time: 0 minutes 00:09:28.952 Power Cycles: 0 00:09:28.952 Power On Hours: 0 hours 00:09:28.952 Unsafe Shutdowns: 0 00:09:28.952 Unrecoverable Media Errors: 0 00:09:28.952 Lifetime Error Log Entries: 0 00:09:28.952 Warning Temperature Time: 0 minutes 00:09:28.952 Critical Temperature Time: 0 minutes 00:09:28.952 00:09:28.952 Number of Queues 00:09:28.952 ================ 00:09:28.952 Number of I/O Submission Queues: 64 00:09:28.952 Number of I/O Completion Queues: 64 00:09:28.952 00:09:28.952 ZNS Specific Controller Data 00:09:28.952 ============================ 00:09:28.952 Zone Append Size Limit: 0 00:09:28.952 00:09:28.952 
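The temperature fields in the health section above are reported in Kelvin, with the Celsius value in parentheses obtained by subtracting 273, as the paired figures show. A one-line check of the two values reported for this controller:

  # 323 K and 343 K from the health log above.
  for k in 323 343; do echo "$k Kelvin = $((k - 273)) Celsius"; done   # 50 and 70 Celsius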
00:09:28.952 Active Namespaces 00:09:28.952 ================= 00:09:28.952 Namespace ID:1 00:09:28.952 Error Recovery Timeout: Unlimited 00:09:28.952 Command Set Identifier: NVM (00h) 00:09:28.952 Deallocate: Supported 00:09:28.952 Deallocated/Unwritten Error: Supported 00:09:28.952 Deallocated Read Value: All 0x00 00:09:28.952 Deallocate in Write Zeroes: Not Supported 00:09:28.952 Deallocated Guard Field: 0xFFFF 00:09:28.952 Flush: Supported 00:09:28.952 Reservation: Not Supported 00:09:28.952 Namespace Sharing Capabilities: Private 00:09:28.952 Size (in LBAs): 1310720 (5GiB) 00:09:28.952 Capacity (in LBAs): 1310720 (5GiB) 00:09:28.952 Utilization (in LBAs): 1310720 (5GiB) 00:09:28.952 Thin Provisioning: Not Supported 00:09:28.952 Per-NS Atomic Units: No 00:09:28.952 Maximum Single Source Range Length: 128 00:09:28.952 Maximum Copy Length: 128 00:09:28.952 Maximum Source Range Count: 128 00:09:28.952 NGUID/EUI64 Never Reused: No 00:09:28.952 Namespace Write Protected: No 00:09:28.952 Number of LBA Formats: 8 00:09:28.952 Current LBA Format: LBA Format #04 00:09:28.952 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:28.952 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:28.952 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:28.952 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:28.952 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:28.952 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:28.953 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:28.953 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:28.953 00:09:28.953 23:42:59 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:28.953 23:42:59 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:09:29.213 ===================================================== 00:09:29.213 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:29.213 ===================================================== 00:09:29.213 Controller Capabilities/Features 00:09:29.213 ================================ 00:09:29.213 Vendor ID: 1b36 00:09:29.213 Subsystem Vendor ID: 1af4 00:09:29.213 Serial Number: 12342 00:09:29.213 Model Number: QEMU NVMe Ctrl 00:09:29.213 Firmware Version: 8.0.0 00:09:29.213 Recommended Arb Burst: 6 00:09:29.213 IEEE OUI Identifier: 00 54 52 00:09:29.213 Multi-path I/O 00:09:29.213 May have multiple subsystem ports: No 00:09:29.213 May have multiple controllers: No 00:09:29.213 Associated with SR-IOV VF: No 00:09:29.213 Max Data Transfer Size: 524288 00:09:29.213 Max Number of Namespaces: 256 00:09:29.213 Max Number of I/O Queues: 64 00:09:29.213 NVMe Specification Version (VS): 1.4 00:09:29.213 NVMe Specification Version (Identify): 1.4 00:09:29.213 Maximum Queue Entries: 2048 00:09:29.213 Contiguous Queues Required: Yes 00:09:29.213 Arbitration Mechanisms Supported 00:09:29.213 Weighted Round Robin: Not Supported 00:09:29.213 Vendor Specific: Not Supported 00:09:29.213 Reset Timeout: 7500 ms 00:09:29.213 Doorbell Stride: 4 bytes 00:09:29.213 NVM Subsystem Reset: Not Supported 00:09:29.213 Command Sets Supported 00:09:29.213 NVM Command Set: Supported 00:09:29.213 Boot Partition: Not Supported 00:09:29.213 Memory Page Size Minimum: 4096 bytes 00:09:29.213 Memory Page Size Maximum: 65536 bytes 00:09:29.213 Persistent Memory Region: Not Supported 00:09:29.213 Optional Asynchronous Events Supported 00:09:29.213 Namespace Attribute Notices: Supported 00:09:29.213 Firmware Activation Notices: Not Supported 00:09:29.213 ANA Change 
Notices: Not Supported 00:09:29.213 PLE Aggregate Log Change Notices: Not Supported 00:09:29.213 LBA Status Info Alert Notices: Not Supported 00:09:29.213 EGE Aggregate Log Change Notices: Not Supported 00:09:29.213 Normal NVM Subsystem Shutdown event: Not Supported 00:09:29.213 Zone Descriptor Change Notices: Not Supported 00:09:29.213 Discovery Log Change Notices: Not Supported 00:09:29.213 Controller Attributes 00:09:29.213 128-bit Host Identifier: Not Supported 00:09:29.213 Non-Operational Permissive Mode: Not Supported 00:09:29.213 NVM Sets: Not Supported 00:09:29.213 Read Recovery Levels: Not Supported 00:09:29.213 Endurance Groups: Not Supported 00:09:29.213 Predictable Latency Mode: Not Supported 00:09:29.213 Traffic Based Keep ALive: Not Supported 00:09:29.213 Namespace Granularity: Not Supported 00:09:29.213 SQ Associations: Not Supported 00:09:29.213 UUID List: Not Supported 00:09:29.213 Multi-Domain Subsystem: Not Supported 00:09:29.213 Fixed Capacity Management: Not Supported 00:09:29.213 Variable Capacity Management: Not Supported 00:09:29.213 Delete Endurance Group: Not Supported 00:09:29.213 Delete NVM Set: Not Supported 00:09:29.213 Extended LBA Formats Supported: Supported 00:09:29.213 Flexible Data Placement Supported: Not Supported 00:09:29.213 00:09:29.213 Controller Memory Buffer Support 00:09:29.213 ================================ 00:09:29.213 Supported: No 00:09:29.213 00:09:29.213 Persistent Memory Region Support 00:09:29.213 ================================ 00:09:29.213 Supported: No 00:09:29.213 00:09:29.213 Admin Command Set Attributes 00:09:29.213 ============================ 00:09:29.214 Security Send/Receive: Not Supported 00:09:29.214 Format NVM: Supported 00:09:29.214 Firmware Activate/Download: Not Supported 00:09:29.214 Namespace Management: Supported 00:09:29.214 Device Self-Test: Not Supported 00:09:29.214 Directives: Supported 00:09:29.214 NVMe-MI: Not Supported 00:09:29.214 Virtualization Management: Not Supported 00:09:29.214 Doorbell Buffer Config: Supported 00:09:29.214 Get LBA Status Capability: Not Supported 00:09:29.214 Command & Feature Lockdown Capability: Not Supported 00:09:29.214 Abort Command Limit: 4 00:09:29.214 Async Event Request Limit: 4 00:09:29.214 Number of Firmware Slots: N/A 00:09:29.214 Firmware Slot 1 Read-Only: N/A 00:09:29.214 Firmware Activation Without Reset: N/A 00:09:29.214 Multiple Update Detection Support: N/A 00:09:29.214 Firmware Update Granularity: No Information Provided 00:09:29.214 Per-Namespace SMART Log: Yes 00:09:29.214 Asymmetric Namespace Access Log Page: Not Supported 00:09:29.214 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:29.214 Command Effects Log Page: Supported 00:09:29.214 Get Log Page Extended Data: Supported 00:09:29.214 Telemetry Log Pages: Not Supported 00:09:29.214 Persistent Event Log Pages: Not Supported 00:09:29.214 Supported Log Pages Log Page: May Support 00:09:29.214 Commands Supported & Effects Log Page: Not Supported 00:09:29.214 Feature Identifiers & Effects Log Page:May Support 00:09:29.214 NVMe-MI Commands & Effects Log Page: May Support 00:09:29.214 Data Area 4 for Telemetry Log: Not Supported 00:09:29.214 Error Log Page Entries Supported: 1 00:09:29.214 Keep Alive: Not Supported 00:09:29.214 00:09:29.214 NVM Command Set Attributes 00:09:29.214 ========================== 00:09:29.214 Submission Queue Entry Size 00:09:29.214 Max: 64 00:09:29.214 Min: 64 00:09:29.214 Completion Queue Entry Size 00:09:29.214 Max: 16 00:09:29.214 Min: 16 00:09:29.214 Number of Namespaces: 256 
00:09:29.214 Compare Command: Supported 00:09:29.214 Write Uncorrectable Command: Not Supported 00:09:29.214 Dataset Management Command: Supported 00:09:29.214 Write Zeroes Command: Supported 00:09:29.214 Set Features Save Field: Supported 00:09:29.214 Reservations: Not Supported 00:09:29.214 Timestamp: Supported 00:09:29.214 Copy: Supported 00:09:29.214 Volatile Write Cache: Present 00:09:29.214 Atomic Write Unit (Normal): 1 00:09:29.214 Atomic Write Unit (PFail): 1 00:09:29.214 Atomic Compare & Write Unit: 1 00:09:29.214 Fused Compare & Write: Not Supported 00:09:29.214 Scatter-Gather List 00:09:29.214 SGL Command Set: Supported 00:09:29.214 SGL Keyed: Not Supported 00:09:29.214 SGL Bit Bucket Descriptor: Not Supported 00:09:29.214 SGL Metadata Pointer: Not Supported 00:09:29.214 Oversized SGL: Not Supported 00:09:29.214 SGL Metadata Address: Not Supported 00:09:29.214 SGL Offset: Not Supported 00:09:29.214 Transport SGL Data Block: Not Supported 00:09:29.214 Replay Protected Memory Block: Not Supported 00:09:29.214 00:09:29.214 Firmware Slot Information 00:09:29.214 ========================= 00:09:29.214 Active slot: 1 00:09:29.214 Slot 1 Firmware Revision: 1.0 00:09:29.214 00:09:29.214 00:09:29.214 Commands Supported and Effects 00:09:29.214 ============================== 00:09:29.214 Admin Commands 00:09:29.214 -------------- 00:09:29.214 Delete I/O Submission Queue (00h): Supported 00:09:29.214 Create I/O Submission Queue (01h): Supported 00:09:29.214 Get Log Page (02h): Supported 00:09:29.214 Delete I/O Completion Queue (04h): Supported 00:09:29.214 Create I/O Completion Queue (05h): Supported 00:09:29.214 Identify (06h): Supported 00:09:29.214 Abort (08h): Supported 00:09:29.214 Set Features (09h): Supported 00:09:29.214 Get Features (0Ah): Supported 00:09:29.214 Asynchronous Event Request (0Ch): Supported 00:09:29.214 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:29.214 Directive Send (19h): Supported 00:09:29.214 Directive Receive (1Ah): Supported 00:09:29.214 Virtualization Management (1Ch): Supported 00:09:29.214 Doorbell Buffer Config (7Ch): Supported 00:09:29.214 Format NVM (80h): Supported LBA-Change 00:09:29.214 I/O Commands 00:09:29.214 ------------ 00:09:29.214 Flush (00h): Supported LBA-Change 00:09:29.214 Write (01h): Supported LBA-Change 00:09:29.214 Read (02h): Supported 00:09:29.214 Compare (05h): Supported 00:09:29.214 Write Zeroes (08h): Supported LBA-Change 00:09:29.214 Dataset Management (09h): Supported LBA-Change 00:09:29.214 Unknown (0Ch): Supported 00:09:29.214 Unknown (12h): Supported 00:09:29.214 Copy (19h): Supported LBA-Change 00:09:29.214 Unknown (1Dh): Supported LBA-Change 00:09:29.214 00:09:29.214 Error Log 00:09:29.214 ========= 00:09:29.214 00:09:29.214 Arbitration 00:09:29.214 =========== 00:09:29.214 Arbitration Burst: no limit 00:09:29.214 00:09:29.214 Power Management 00:09:29.214 ================ 00:09:29.214 Number of Power States: 1 00:09:29.214 Current Power State: Power State #0 00:09:29.214 Power State #0: 00:09:29.214 Max Power: 25.00 W 00:09:29.214 Non-Operational State: Operational 00:09:29.214 Entry Latency: 16 microseconds 00:09:29.214 Exit Latency: 4 microseconds 00:09:29.214 Relative Read Throughput: 0 00:09:29.214 Relative Read Latency: 0 00:09:29.214 Relative Write Throughput: 0 00:09:29.214 Relative Write Latency: 0 00:09:29.214 Idle Power: Not Reported 00:09:29.214 Active Power: Not Reported 00:09:29.214 Non-Operational Permissive Mode: Not Supported 00:09:29.214 00:09:29.214 Health Information 00:09:29.214 
================== 00:09:29.214 Critical Warnings: 00:09:29.214 Available Spare Space: OK 00:09:29.214 Temperature: OK 00:09:29.214 Device Reliability: OK 00:09:29.214 Read Only: No 00:09:29.214 Volatile Memory Backup: OK 00:09:29.214 Current Temperature: 323 Kelvin (50 Celsius) 00:09:29.214 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:29.214 Available Spare: 0% 00:09:29.214 Available Spare Threshold: 0% 00:09:29.214 Life Percentage Used: 0% 00:09:29.214 Data Units Read: 3901 00:09:29.214 Data Units Written: 1801 00:09:29.214 Host Read Commands: 183763 00:09:29.214 Host Write Commands: 90336 00:09:29.214 Controller Busy Time: 0 minutes 00:09:29.214 Power Cycles: 0 00:09:29.214 Power On Hours: 0 hours 00:09:29.214 Unsafe Shutdowns: 0 00:09:29.214 Unrecoverable Media Errors: 0 00:09:29.214 Lifetime Error Log Entries: 0 00:09:29.214 Warning Temperature Time: 0 minutes 00:09:29.214 Critical Temperature Time: 0 minutes 00:09:29.214 00:09:29.214 Number of Queues 00:09:29.214 ================ 00:09:29.214 Number of I/O Submission Queues: 64 00:09:29.214 Number of I/O Completion Queues: 64 00:09:29.214 00:09:29.214 ZNS Specific Controller Data 00:09:29.214 ============================ 00:09:29.214 Zone Append Size Limit: 0 00:09:29.214 00:09:29.214 00:09:29.214 Active Namespaces 00:09:29.214 ================= 00:09:29.214 Namespace ID:1 00:09:29.214 Error Recovery Timeout: Unlimited 00:09:29.214 Command Set Identifier: NVM (00h) 00:09:29.214 Deallocate: Supported 00:09:29.214 Deallocated/Unwritten Error: Supported 00:09:29.214 Deallocated Read Value: All 0x00 00:09:29.214 Deallocate in Write Zeroes: Not Supported 00:09:29.214 Deallocated Guard Field: 0xFFFF 00:09:29.214 Flush: Supported 00:09:29.214 Reservation: Not Supported 00:09:29.215 Namespace Sharing Capabilities: Private 00:09:29.215 Size (in LBAs): 1048576 (4GiB) 00:09:29.215 Capacity (in LBAs): 1048576 (4GiB) 00:09:29.215 Utilization (in LBAs): 1048576 (4GiB) 00:09:29.215 Thin Provisioning: Not Supported 00:09:29.215 Per-NS Atomic Units: No 00:09:29.215 Maximum Single Source Range Length: 128 00:09:29.215 Maximum Copy Length: 128 00:09:29.215 Maximum Source Range Count: 128 00:09:29.215 NGUID/EUI64 Never Reused: No 00:09:29.215 Namespace Write Protected: No 00:09:29.215 Number of LBA Formats: 8 00:09:29.215 Current LBA Format: LBA Format #04 00:09:29.215 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:29.215 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:29.215 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:29.215 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:29.215 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:29.215 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:29.215 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:29.215 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:29.215 00:09:29.215 Namespace ID:2 00:09:29.215 Error Recovery Timeout: Unlimited 00:09:29.215 Command Set Identifier: NVM (00h) 00:09:29.215 Deallocate: Supported 00:09:29.215 Deallocated/Unwritten Error: Supported 00:09:29.215 Deallocated Read Value: All 0x00 00:09:29.215 Deallocate in Write Zeroes: Not Supported 00:09:29.215 Deallocated Guard Field: 0xFFFF 00:09:29.215 Flush: Supported 00:09:29.215 Reservation: Not Supported 00:09:29.215 Namespace Sharing Capabilities: Private 00:09:29.215 Size (in LBAs): 1048576 (4GiB) 00:09:29.215 Capacity (in LBAs): 1048576 (4GiB) 00:09:29.215 Utilization (in LBAs): 1048576 (4GiB) 00:09:29.215 Thin Provisioning: Not Supported 00:09:29.215 Per-NS Atomic Units: No 
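The Data Units Read/Written counters above are not bytes: per the NVMe specification a data unit is 1000 units of 512 bytes, so the raw counters scale as sketched below (values taken from the 12342 controller's health log above):

  # Convert SMART data-unit counters to bytes (1 data unit = 1000 * 512 bytes).
  for units in 3901 1801; do
    echo "$units data units = $((units * 1000 * 512)) bytes"
  done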
00:09:29.215 Maximum Single Source Range Length: 128 00:09:29.215 Maximum Copy Length: 128 00:09:29.215 Maximum Source Range Count: 128 00:09:29.215 NGUID/EUI64 Never Reused: No 00:09:29.215 Namespace Write Protected: No 00:09:29.215 Number of LBA Formats: 8 00:09:29.215 Current LBA Format: LBA Format #04 00:09:29.215 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:29.215 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:29.215 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:29.215 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:29.215 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:29.215 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:29.215 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:29.215 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:29.215 00:09:29.215 Namespace ID:3 00:09:29.215 Error Recovery Timeout: Unlimited 00:09:29.215 Command Set Identifier: NVM (00h) 00:09:29.215 Deallocate: Supported 00:09:29.215 Deallocated/Unwritten Error: Supported 00:09:29.215 Deallocated Read Value: All 0x00 00:09:29.215 Deallocate in Write Zeroes: Not Supported 00:09:29.215 Deallocated Guard Field: 0xFFFF 00:09:29.215 Flush: Supported 00:09:29.215 Reservation: Not Supported 00:09:29.215 Namespace Sharing Capabilities: Private 00:09:29.215 Size (in LBAs): 1048576 (4GiB) 00:09:29.215 Capacity (in LBAs): 1048576 (4GiB) 00:09:29.215 Utilization (in LBAs): 1048576 (4GiB) 00:09:29.215 Thin Provisioning: Not Supported 00:09:29.215 Per-NS Atomic Units: No 00:09:29.215 Maximum Single Source Range Length: 128 00:09:29.215 Maximum Copy Length: 128 00:09:29.215 Maximum Source Range Count: 128 00:09:29.215 NGUID/EUI64 Never Reused: No 00:09:29.215 Namespace Write Protected: No 00:09:29.215 Number of LBA Formats: 8 00:09:29.215 Current LBA Format: LBA Format #04 00:09:29.215 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:29.215 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:29.215 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:29.215 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:29.215 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:29.215 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:29.215 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:29.215 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:29.215 00:09:29.215 23:42:59 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:29.215 23:42:59 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:09:29.215 ===================================================== 00:09:29.215 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:29.215 ===================================================== 00:09:29.215 Controller Capabilities/Features 00:09:29.215 ================================ 00:09:29.215 Vendor ID: 1b36 00:09:29.215 Subsystem Vendor ID: 1af4 00:09:29.215 Serial Number: 12343 00:09:29.215 Model Number: QEMU NVMe Ctrl 00:09:29.215 Firmware Version: 8.0.0 00:09:29.215 Recommended Arb Burst: 6 00:09:29.215 IEEE OUI Identifier: 00 54 52 00:09:29.215 Multi-path I/O 00:09:29.215 May have multiple subsystem ports: No 00:09:29.215 May have multiple controllers: Yes 00:09:29.215 Associated with SR-IOV VF: No 00:09:29.215 Max Data Transfer Size: 524288 00:09:29.215 Max Number of Namespaces: 256 00:09:29.215 Max Number of I/O Queues: 64 00:09:29.215 NVMe Specification Version (VS): 1.4 00:09:29.215 NVMe Specification Version (Identify): 1.4 00:09:29.215 Maximum Queue Entries: 2048 
00:09:29.215 Contiguous Queues Required: Yes
00:09:29.215 Arbitration Mechanisms Supported
00:09:29.215 Weighted Round Robin: Not Supported
00:09:29.215 Vendor Specific: Not Supported
00:09:29.215 Reset Timeout: 7500 ms
00:09:29.215 Doorbell Stride: 4 bytes
00:09:29.215 NVM Subsystem Reset: Not Supported
00:09:29.215 Command Sets Supported
00:09:29.215 NVM Command Set: Supported
00:09:29.215 Boot Partition: Not Supported
00:09:29.215 Memory Page Size Minimum: 4096 bytes
00:09:29.215 Memory Page Size Maximum: 65536 bytes
00:09:29.215 Persistent Memory Region: Not Supported
00:09:29.215 Optional Asynchronous Events Supported
00:09:29.215 Namespace Attribute Notices: Supported
00:09:29.215 Firmware Activation Notices: Not Supported
00:09:29.215 ANA Change Notices: Not Supported
00:09:29.215 PLE Aggregate Log Change Notices: Not Supported
00:09:29.215 LBA Status Info Alert Notices: Not Supported
00:09:29.215 EGE Aggregate Log Change Notices: Not Supported
00:09:29.215 Normal NVM Subsystem Shutdown event: Not Supported
00:09:29.215 Zone Descriptor Change Notices: Not Supported
00:09:29.215 Discovery Log Change Notices: Not Supported
00:09:29.215 Controller Attributes
00:09:29.215 128-bit Host Identifier: Not Supported
00:09:29.215 Non-Operational Permissive Mode: Not Supported
00:09:29.215 NVM Sets: Not Supported
00:09:29.215 Read Recovery Levels: Not Supported
00:09:29.215 Endurance Groups: Supported
00:09:29.215 Predictable Latency Mode: Not Supported
00:09:29.215 Traffic Based Keep Alive: Not Supported
00:09:29.215 Namespace Granularity: Not Supported
00:09:29.215 SQ Associations: Not Supported
00:09:29.215 UUID List: Not Supported
00:09:29.215 Multi-Domain Subsystem: Not Supported
00:09:29.215 Fixed Capacity Management: Not Supported
00:09:29.215 Variable Capacity Management: Not Supported
00:09:29.215 Delete Endurance Group: Not Supported
00:09:29.215 Delete NVM Set: Not Supported
00:09:29.215 Extended LBA Formats Supported: Supported
00:09:29.215 Flexible Data Placement Supported: Supported
00:09:29.215
00:09:29.215 Controller Memory Buffer Support
00:09:29.215 ================================
00:09:29.215 Supported: No
00:09:29.215
00:09:29.215 Persistent Memory Region Support
00:09:29.215 ================================
00:09:29.216 Supported: No
00:09:29.216
00:09:29.216 Admin Command Set Attributes
00:09:29.216 ============================
00:09:29.216 Security Send/Receive: Not Supported
00:09:29.216 Format NVM: Supported
00:09:29.216 Firmware Activate/Download: Not Supported
00:09:29.216 Namespace Management: Supported
00:09:29.216 Device Self-Test: Not Supported
00:09:29.216 Directives: Supported
00:09:29.216 NVMe-MI: Not Supported
00:09:29.216 Virtualization Management: Not Supported
00:09:29.216 Doorbell Buffer Config: Supported
00:09:29.216 Get LBA Status Capability: Not Supported
00:09:29.216 Command & Feature Lockdown Capability: Not Supported
00:09:29.216 Abort Command Limit: 4
00:09:29.216 Async Event Request Limit: 4
00:09:29.216 Number of Firmware Slots: N/A
00:09:29.216 Firmware Slot 1 Read-Only: N/A
00:09:29.216 Firmware Activation Without Reset: N/A
00:09:29.216 Multiple Update Detection Support: N/A
00:09:29.216 Firmware Update Granularity: No Information Provided
00:09:29.216 Per-Namespace SMART Log: Yes
00:09:29.216 Asymmetric Namespace Access Log Page: Not Supported
00:09:29.216 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3
00:09:29.216 Command Effects Log Page: Supported
00:09:29.216 Get Log Page Extended Data: Supported
00:09:29.216 Telemetry Log Pages: Not Supported
00:09:29.216 Persistent Event Log Pages: Not Supported
00:09:29.216 Supported Log Pages Log Page: May Support
00:09:29.216 Commands Supported & Effects Log Page: Not Supported
00:09:29.216 Feature Identifiers & Effects Log Page: May Support
00:09:29.216 NVMe-MI Commands & Effects Log Page: May Support
00:09:29.216 Data Area 4 for Telemetry Log: Not Supported
00:09:29.216 Error Log Page Entries Supported: 1
00:09:29.216 Keep Alive: Not Supported
00:09:29.216
00:09:29.216 NVM Command Set Attributes
00:09:29.216 ==========================
00:09:29.216 Submission Queue Entry Size
00:09:29.216 Max: 64
00:09:29.216 Min: 64
00:09:29.216 Completion Queue Entry Size
00:09:29.216 Max: 16
00:09:29.216 Min: 16
00:09:29.216 Number of Namespaces: 256
00:09:29.216 Compare Command: Supported
00:09:29.216 Write Uncorrectable Command: Not Supported
00:09:29.216 Dataset Management Command: Supported
00:09:29.216 Write Zeroes Command: Supported
00:09:29.216 Set Features Save Field: Supported
00:09:29.216 Reservations: Not Supported
00:09:29.216 Timestamp: Supported
00:09:29.216 Copy: Supported
00:09:29.216 Volatile Write Cache: Present
00:09:29.216 Atomic Write Unit (Normal): 1
00:09:29.216 Atomic Write Unit (PFail): 1
00:09:29.216 Atomic Compare & Write Unit: 1
00:09:29.216 Fused Compare & Write: Not Supported
00:09:29.216 Scatter-Gather List
00:09:29.216 SGL Command Set: Supported
00:09:29.216 SGL Keyed: Not Supported
00:09:29.216 SGL Bit Bucket Descriptor: Not Supported
00:09:29.216 SGL Metadata Pointer: Not Supported
00:09:29.216 Oversized SGL: Not Supported
00:09:29.216 SGL Metadata Address: Not Supported
00:09:29.216 SGL Offset: Not Supported
00:09:29.216 Transport SGL Data Block: Not Supported
00:09:29.216 Replay Protected Memory Block: Not Supported
00:09:29.216
00:09:29.216 Firmware Slot Information
00:09:29.216 =========================
00:09:29.216 Active slot: 1
00:09:29.216 Slot 1 Firmware Revision: 1.0
00:09:29.216
00:09:29.216
00:09:29.216 Commands Supported and Effects
00:09:29.216 ==============================
00:09:29.216 Admin Commands
00:09:29.216 --------------
00:09:29.216 Delete I/O Submission Queue (00h): Supported
00:09:29.216 Create I/O Submission Queue (01h): Supported
00:09:29.216 Get Log Page (02h): Supported
00:09:29.216 Delete I/O Completion Queue (04h): Supported
00:09:29.216 Create I/O Completion Queue (05h): Supported
00:09:29.216 Identify (06h): Supported
00:09:29.216 Abort (08h): Supported
00:09:29.216 Set Features (09h): Supported
00:09:29.216 Get Features (0Ah): Supported
00:09:29.216 Asynchronous Event Request (0Ch): Supported
00:09:29.216 Namespace Attachment (15h): Supported NS-Inventory-Change
00:09:29.216 Directive Send (19h): Supported
00:09:29.216 Directive Receive (1Ah): Supported
00:09:29.216 Virtualization Management (1Ch): Supported
00:09:29.216 Doorbell Buffer Config (7Ch): Supported
00:09:29.216 Format NVM (80h): Supported LBA-Change
00:09:29.216 I/O Commands
00:09:29.216 ------------
00:09:29.216 Flush (00h): Supported LBA-Change
00:09:29.216 Write (01h): Supported LBA-Change
00:09:29.216 Read (02h): Supported
00:09:29.216 Compare (05h): Supported
00:09:29.216 Write Zeroes (08h): Supported LBA-Change
00:09:29.216 Dataset Management (09h): Supported LBA-Change
00:09:29.216 Unknown (0Ch): Supported
00:09:29.216 Unknown (12h): Supported
00:09:29.216 Copy (19h): Supported LBA-Change
00:09:29.216 Unknown (1Dh): Supported LBA-Change
00:09:29.216
00:09:29.216 Error Log
00:09:29.216 =========
00:09:29.216
00:09:29.216 Arbitration
00:09:29.216 ===========
00:09:29.216 Arbitration Burst: no limit
00:09:29.216
00:09:29.216 Power Management
00:09:29.216 ================
00:09:29.216 Number of Power States: 1
00:09:29.216 Current Power State: Power State #0
00:09:29.216 Power State #0:
00:09:29.216 Max Power: 25.00 W
00:09:29.216 Non-Operational State: Operational
00:09:29.216 Entry Latency: 16 microseconds
00:09:29.216 Exit Latency: 4 microseconds
00:09:29.216 Relative Read Throughput: 0
00:09:29.216 Relative Read Latency: 0
00:09:29.216 Relative Write Throughput: 0
00:09:29.216 Relative Write Latency: 0
00:09:29.216 Idle Power: Not Reported
00:09:29.216 Active Power: Not Reported
00:09:29.216 Non-Operational Permissive Mode: Not Supported
00:09:29.216
00:09:29.216 Health Information
00:09:29.216 ==================
00:09:29.216 Critical Warnings:
00:09:29.216 Available Spare Space: OK
00:09:29.216 Temperature: OK
00:09:29.216 Device Reliability: OK
00:09:29.216 Read Only: No
00:09:29.216 Volatile Memory Backup: OK
00:09:29.216 Current Temperature: 323 Kelvin (50 Celsius)
00:09:29.216 Temperature Threshold: 343 Kelvin (70 Celsius)
00:09:29.216 Available Spare: 0%
00:09:29.216 Available Spare Threshold: 0%
00:09:29.216 Life Percentage Used: 0%
00:09:29.216 Data Units Read: 1467
00:09:29.216 Data Units Written: 685
00:09:29.216 Host Read Commands: 62490
00:09:29.216 Host Write Commands: 30779
00:09:29.216 Controller Busy Time: 0 minutes
00:09:29.216 Power Cycles: 0
00:09:29.216 Power On Hours: 0 hours
00:09:29.216 Unsafe Shutdowns: 0
00:09:29.216 Unrecoverable Media Errors: 0
00:09:29.216 Lifetime Error Log Entries: 0
00:09:29.216 Warning Temperature Time: 0 minutes
00:09:29.216 Critical Temperature Time: 0 minutes
00:09:29.216
00:09:29.216 Number of Queues
00:09:29.216 ================
00:09:29.216 Number of I/O Submission Queues: 64
00:09:29.216 Number of I/O Completion Queues: 64
00:09:29.216
00:09:29.216 ZNS Specific Controller Data
00:09:29.216 ============================
00:09:29.216 Zone Append Size Limit: 0
00:09:29.216
00:09:29.216
00:09:29.216 Active Namespaces
00:09:29.216 =================
00:09:29.216 Namespace ID:1
00:09:29.216 Error Recovery Timeout: Unlimited
00:09:29.216 Command Set Identifier: NVM (00h)
00:09:29.216 Deallocate: Supported
00:09:29.216 Deallocated/Unwritten Error: Supported
00:09:29.216 Deallocated Read Value: All 0x00
00:09:29.216 Deallocate in Write Zeroes: Not Supported
00:09:29.217 Deallocated Guard Field: 0xFFFF
00:09:29.217 Flush: Supported
00:09:29.217 Reservation: Not Supported
00:09:29.217 Namespace Sharing Capabilities: Multiple Controllers
00:09:29.217 Size (in LBAs): 262144 (1GiB)
00:09:29.217 Capacity (in LBAs): 262144 (1GiB)
00:09:29.217 Utilization (in LBAs): 262144 (1GiB)
00:09:29.217 Thin Provisioning: Not Supported
00:09:29.217 Per-NS Atomic Units: No
00:09:29.217 Maximum Single Source Range Length: 128
00:09:29.217 Maximum Copy Length: 128
00:09:29.217 Maximum Source Range Count: 128
00:09:29.217 NGUID/EUI64 Never Reused: No
00:09:29.217 Namespace Write Protected: No
00:09:29.217 Endurance group ID: 1
00:09:29.217 Number of LBA Formats: 8
00:09:29.217 Current LBA Format: LBA Format #04
00:09:29.217 LBA Format #00: Data Size: 512 Metadata Size: 0
00:09:29.217 LBA Format #01: Data Size: 512 Metadata Size: 8
00:09:29.217 LBA Format #02: Data Size: 512 Metadata Size: 16
00:09:29.217 LBA Format #03: Data Size: 512 Metadata Size: 64
00:09:29.217 LBA Format #04: Data Size: 4096 Metadata Size: 0
00:09:29.217 LBA Format #05: Data Size: 4096 Metadata Size: 8
00:09:29.217 LBA Format #06: Data Size: 4096 Metadata Size: 16
00:09:29.217 LBA Format #07: Data Size: 4096 Metadata Size: 64
00:09:29.217
00:09:29.217 Get Feature FDP:
00:09:29.217 ================
00:09:29.217 Enabled: Yes
00:09:29.217 FDP configuration index: 0
00:09:29.217
00:09:29.217 FDP configurations log page
00:09:29.217 ===========================
00:09:29.217 Number of FDP configurations: 1
00:09:29.217 Version: 0
00:09:29.217 Size: 112
00:09:29.217 FDP Configuration Descriptor: 0
00:09:29.217 Descriptor Size: 96
00:09:29.217 Reclaim Group Identifier format: 2
00:09:29.217 FDP Volatile Write Cache: Not Present
00:09:29.217 FDP Configuration: Valid
00:09:29.217 Vendor Specific Size: 0
00:09:29.217 Number of Reclaim Groups: 2
00:09:29.217 Number of Reclaim Unit Handles: 8
00:09:29.217 Max Placement Identifiers: 128
00:09:29.217 Number of Namespaces Supported: 256
00:09:29.217 Reclaim Unit Nominal Size: 6000000 bytes
00:09:29.217 Estimated Reclaim Unit Time Limit: Not Reported
00:09:29.217 RUH Desc #000: RUH Type: Initially Isolated
00:09:29.217 RUH Desc #001: RUH Type: Initially Isolated
00:09:29.217 RUH Desc #002: RUH Type: Initially Isolated
00:09:29.217 RUH Desc #003: RUH Type: Initially Isolated
00:09:29.217 RUH Desc #004: RUH Type: Initially Isolated
00:09:29.217 RUH Desc #005: RUH Type: Initially Isolated
00:09:29.217 RUH Desc #006: RUH Type: Initially Isolated
00:09:29.217 RUH Desc #007: RUH Type: Initially Isolated
00:09:29.217
00:09:29.217 FDP reclaim unit handle usage log page
00:09:29.217 ======================================
00:09:29.217 Number of Reclaim Unit Handles: 8
00:09:29.217 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:09:29.217 RUH Usage Desc #001: RUH Attributes: Unused
00:09:29.217 RUH Usage Desc #002: RUH Attributes: Unused
00:09:29.217 RUH Usage Desc #003: RUH Attributes: Unused
00:09:29.217 RUH Usage Desc #004: RUH Attributes: Unused
00:09:29.217 RUH Usage Desc #005: RUH Attributes: Unused
00:09:29.217 RUH Usage Desc #006: RUH Attributes: Unused
00:09:29.217 RUH Usage Desc #007: RUH Attributes: Unused
00:09:29.217
00:09:29.217 FDP statistics log page
00:09:29.217 =======================
00:09:29.217 Host bytes with metadata written: 441798656
00:09:29.217 Media bytes with metadata written: 441909248
00:09:29.217 Media bytes erased: 0
00:09:29.217
00:09:29.217 FDP events log page
00:09:29.217 ===================
00:09:29.217 Number of FDP events: 0
00:09:29.217
00:09:29.476
00:09:29.476 real 0m1.112s
00:09:29.476 user 0m0.386s
00:09:29.476 sys 0m0.520s
00:09:29.476 23:42:59 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:29.476 23:42:59 -- common/autotest_common.sh@10 -- # set +x
00:09:29.476 ************************************
00:09:29.476 END TEST nvme_identify
00:09:29.476 ************************************
00:09:29.476 23:42:59 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf
00:09:29.476 23:42:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:29.476 23:42:59 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:29.476 23:42:59 -- common/autotest_common.sh@10 -- # set +x
00:09:29.476 ************************************
00:09:29.476 START TEST nvme_perf
00:09:29.476 ************************************
00:09:29.476 23:42:59 -- common/autotest_common.sh@1114 -- # nvme_perf
00:09:29.476 23:42:59 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
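For anyone re-running this stage outside CI, the xtrace lines above boil down to the following minimal bash sketch. The bdfs array is an assumption inferred from the four QEMU controllers this run attaches; the binary paths and flags are copied verbatim from the log, and nvme.sh may do more than shown here.

  bdfs=(0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0)  # assumed: the controllers attached below
  for bdf in "${bdfs[@]}"; do
      # dump controller, namespace and log-page details for one PCIe controller
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:${bdf}" -i 0
  done
  # 128-deep reads of 12288 bytes for 1 second across all attached namespaces;
  # as the output below shows, -LL enables the latency summaries and histograms
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N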
00:09:30.853 Initializing NVMe Controllers
00:09:30.853 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:09:30.853 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010]
00:09:30.853 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010]
00:09:30.853 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010]
00:09:30.853 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:09:30.853 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0
00:09:30.853 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0
00:09:30.853 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0
00:09:30.853 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0
00:09:30.853 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0
00:09:30.853 Initialization complete. Launching workers.
00:09:30.853 ========================================================
00:09:30.853 Latency(us)
00:09:30.853 Device Information : IOPS MiB/s Average min max
00:09:30.853 PCIE (0000:00:06.0) NSID 1 from core 0: 19161.14 224.54 6677.03 4972.90 27707.40
00:09:30.853 PCIE (0000:00:07.0) NSID 1 from core 0: 19161.14 224.54 6672.80 5113.78 27271.59
00:09:30.853 PCIE (0000:00:09.0) NSID 1 from core 0: 19161.14 224.54 6666.80 5095.61 28104.49
00:09:30.853 PCIE (0000:00:08.0) NSID 1 from core 0: 19161.14 224.54 6660.85 5109.53 27554.93
00:09:30.853 PCIE (0000:00:08.0) NSID 2 from core 0: 19161.14 224.54 6654.92 5151.94 27067.90
00:09:30.853 PCIE (0000:00:08.0) NSID 3 from core 0: 19161.14 224.54 6649.45 5131.99 26405.99
00:09:30.853 ========================================================
00:09:30.853 Total : 114966.87 1347.27 6663.64 4972.90 28104.49
00:09:30.853
00:09:30.853 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0:
00:09:30.853 =================================================================================
00:09:30.853 1.00000% : 5116.849us
00:09:30.853 10.00000% : 5444.529us
00:09:30.853 25.00000% : 5797.415us
00:09:30.853 50.00000% : 6377.157us
00:09:30.853 75.00000% : 6956.898us
00:09:30.853 90.00000% : 7813.908us
00:09:30.853 95.00000% : 9225.452us
00:09:30.853 98.00000% : 10788.234us
00:09:30.853 99.00000% : 12603.077us
00:09:30.853 99.50000% : 25004.505us
00:09:30.853 99.90000% : 27222.646us
00:09:30.853 99.99000% : 27827.594us
00:09:30.853 99.99900% : 27827.594us
00:09:30.853 99.99990% : 27827.594us
00:09:30.853 99.99999% : 27827.594us
00:09:30.853
00:09:30.853 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0:
00:09:30.853 =================================================================================
00:09:30.853 1.00000% : 5242.880us
00:09:30.853 10.00000% : 5545.354us
00:09:30.853 25.00000% : 5873.034us
00:09:30.853 50.00000% : 6351.951us
00:09:30.853 75.00000% : 6856.074us
00:09:30.853 90.00000% : 7813.908us
00:09:30.853 95.00000% : 9124.628us
00:09:30.853 98.00000% : 11090.708us
00:09:30.853 99.00000% : 13208.025us
00:09:30.853 99.50000% : 24802.855us
00:09:30.853 99.90000% : 26819.348us
00:09:30.853 99.99000% : 27424.295us
00:09:30.853 99.99900% : 27424.295us
00:09:30.853 99.99990% : 27424.295us
00:09:30.853 99.99999% : 27424.295us
00:09:30.853
00:09:30.853 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0:
00:09:30.853 =================================================================================
00:09:30.853 1.00000% : 5268.086us
00:09:30.853 10.00000% : 5545.354us
00:09:30.853 25.00000% : 5873.034us
00:09:30.853 50.00000% : 6351.951us
00:09:30.853 75.00000% : 6856.074us
00:09:30.853 90.00000% : 7561.846us
00:09:30.853 95.00000% : 9124.628us
00:09:30.853 98.00000% : 10989.883us
00:09:30.853 99.00000% : 13812.972us
00:09:30.853 99.50000% : 25508.628us
00:09:30.853 99.90000% : 27625.945us
00:09:30.853 99.99000% : 28230.892us
00:09:30.853 99.99900% : 28230.892us
00:09:30.853 99.99990% : 28230.892us
00:09:30.853 99.99999% : 28230.892us
00:09:30.853
00:09:30.853 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0:
00:09:30.853 =================================================================================
00:09:30.853 1.00000% : 5268.086us
00:09:30.853 10.00000% : 5570.560us
00:09:30.853 25.00000% : 5873.034us
00:09:30.853 50.00000% : 6351.951us
00:09:30.853 75.00000% : 6856.074us
00:09:30.853 90.00000% : 7612.258us
00:09:30.853 95.00000% : 9023.803us
00:09:30.853 98.00000% : 10737.822us
00:09:30.853 99.00000% : 12905.551us
00:09:30.853 99.50000% : 24903.680us
00:09:30.853 99.90000% : 27222.646us
00:09:30.853 99.99000% : 27625.945us
00:09:30.853 99.99900% : 27625.945us
00:09:30.853 99.99990% : 27625.945us
00:09:30.853 99.99999% : 27625.945us
00:09:30.853
00:09:30.853 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0:
00:09:30.853 =================================================================================
00:09:30.853 1.00000% : 5268.086us
00:09:30.853 10.00000% : 5570.560us
00:09:30.853 25.00000% : 5873.034us
00:09:30.853 50.00000% : 6351.951us
00:09:30.853 75.00000% : 6856.074us
00:09:30.853 90.00000% : 7662.671us
00:09:30.853 95.00000% : 9124.628us
00:09:30.853 98.00000% : 10485.760us
00:09:30.853 99.00000% : 11645.243us
00:09:30.853 99.50000% : 24399.557us
00:09:30.853 99.90000% : 26617.698us
00:09:30.853 99.99000% : 27222.646us
00:09:30.853 99.99900% : 27222.646us
00:09:30.853 99.99990% : 27222.646us
00:09:30.853 99.99999% : 27222.646us
00:09:30.853
00:09:30.853 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0:
00:09:30.853 =================================================================================
00:09:30.853 1.00000% : 5268.086us
00:09:30.853 10.00000% : 5570.560us
00:09:30.853 25.00000% : 5873.034us
00:09:30.853 50.00000% : 6377.157us
00:09:30.853 75.00000% : 6856.074us
00:09:30.853 90.00000% : 7713.083us
00:09:30.853 95.00000% : 9124.628us
00:09:30.853 98.00000% : 10183.286us
00:09:30.853 99.00000% : 11040.295us
00:09:30.853 99.50000% : 23895.434us
00:09:30.853 99.90000% : 26012.751us
00:09:30.853 99.99000% : 26416.049us
00:09:30.853 99.99900% : 26416.049us
00:09:30.853 99.99990% : 26416.049us
00:09:30.853 99.99999% : 26416.049us
00:09:30.853
00:09:30.853 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0:
00:09:30.853 ==============================================================================
00:09:30.853 Range in us Cumulative IO count
00:09:30.853 4965.612 - 4990.818: 0.0155% ( 3)
00:09:30.853 4990.818 - 5016.025: 0.0673% ( 10)
00:09:30.853 5016.025 - 5041.231: 0.2121% ( 28)
00:09:30.853 5041.231 - 5066.437: 0.5226% ( 60)
00:09:30.853 5066.437 - 5091.643: 0.9209% ( 77)
00:09:30.853 5091.643 - 5116.849: 1.4021% ( 93)
00:09:30.853 5116.849 - 5142.055: 1.9247% ( 101)
00:09:30.853 5142.055 - 5167.262: 2.4472% ( 101)
00:09:30.853 5167.262 - 5192.468: 3.0060% ( 108)
00:09:30.853 5192.468 - 5217.674: 3.5493% ( 105)
00:09:30.853 5217.674 - 5242.880: 4.2270% ( 131)
00:09:30.853 5242.880 - 5268.086: 4.8479% ( 120)
00:09:30.854 5268.086 - 5293.292: 5.4015% ( 107)
00:09:30.854 5293.292 - 5318.498: 6.1620% ( 147)
00:09:30.854 5318.498 - 5343.705: 6.9226% ( 147)
00:09:30.854 5343.705 - 5368.911: 7.7142% ( 153)
00:09:30.854 5368.911 - 5394.117: 8.5938% ( 170)
00:09:30.854 5394.117 - 5419.323: 9.4423% ( 164)
00:09:30.854 5419.323 - 5444.529: 10.3942% ( 184)
00:09:30.854 5444.529 - 5469.735: 11.3307% ( 181)
00:09:30.854 5469.735 - 5494.942: 12.3810% ( 203)
00:09:30.854 5494.942 - 5520.148: 13.3847% ( 194)
00:09:30.854 5520.148 - 5545.354: 14.3574% ( 188)
00:09:30.854 5545.354 - 5570.560: 15.4801% ( 217)
00:09:30.854 5570.560 - 5595.766: 16.4735% ( 192)
00:09:30.854 5595.766 - 5620.972: 17.5548% ( 209)
00:09:30.854 5620.972 - 5646.178: 18.6569% ( 213)
00:09:30.854 5646.178 - 5671.385: 19.6865% ( 199)
00:09:30.854 5671.385 - 5696.591: 20.8299% ( 221)
00:09:30.854 5696.591 - 5721.797: 21.8957% ( 206)
00:09:30.854 5721.797 - 5747.003: 22.9201% ( 198)
00:09:30.854 5747.003 - 5772.209: 24.0221% ( 213)
00:09:30.854 5772.209 - 5797.415: 25.1552% ( 219)
00:09:30.854 5797.415 - 5822.622: 26.1848% ( 199)
00:09:30.854 5822.622 - 5847.828: 27.3541% ( 226)
00:09:30.854 5847.828 - 5873.034: 28.3992% ( 202)
00:09:30.854 5873.034 - 5898.240: 29.5219% ( 217)
00:09:30.854 5898.240 - 5923.446: 30.5877% ( 206)
00:09:30.854 5923.446 - 5948.652: 31.7001% ( 215)
00:09:30.854 5948.652 - 5973.858: 32.7608% ( 205)
00:09:30.854 5973.858 - 5999.065: 33.9249% ( 225)
00:09:30.854 5999.065 - 6024.271: 34.9700% ( 202)
00:09:30.854 6024.271 - 6049.477: 36.0720% ( 213)
00:09:30.854 6049.477 - 6074.683: 37.2465% ( 227)
00:09:30.854 6074.683 - 6099.889: 38.2502% ( 194)
00:09:30.854 6099.889 - 6125.095: 39.4143% ( 225)
00:09:30.854 6125.095 - 6150.302: 40.5526% ( 220)
00:09:30.854 6150.302 - 6175.508: 41.6080% ( 204)
00:09:30.854 6175.508 - 6200.714: 42.6945% ( 210)
00:09:30.854 6200.714 - 6225.920: 43.8431% ( 222)
00:09:30.854 6225.920 - 6251.126: 44.8831% ( 201)
00:09:30.854 6251.126 - 6276.332: 46.0524% ( 226)
00:09:30.854 6276.332 - 6301.538: 47.1130% ( 205)
00:09:30.854 6301.538 - 6326.745: 48.2305% ( 216)
00:09:30.854 6326.745 - 6351.951: 49.3688% ( 220)
00:09:30.854 6351.951 - 6377.157: 50.4398% ( 207)
00:09:30.854 6377.157 - 6402.363: 51.5935% ( 223)
00:09:30.854 6402.363 - 6427.569: 52.6283% ( 200)
00:09:30.854 6427.569 - 6452.775: 53.7148% ( 210)
00:09:30.854 6452.775 - 6503.188: 55.9551% ( 433)
00:09:30.854 6503.188 - 6553.600: 58.2419% ( 442)
00:09:30.854 6553.600 - 6604.012: 60.5184% ( 440)
00:09:30.854 6604.012 - 6654.425: 62.7276% ( 427)
00:09:30.854 6654.425 - 6704.837: 64.9627% ( 432)
00:09:30.854 6704.837 - 6755.249: 67.2185% ( 436)
00:09:30.854 6755.249 - 6805.662: 69.4692% ( 435)
00:09:30.854 6805.662 - 6856.074: 71.6629% ( 424)
00:09:30.854 6856.074 - 6906.486: 73.9808% ( 448)
00:09:30.854 6906.486 - 6956.898: 76.1900% ( 427)
00:09:30.854 6956.898 - 7007.311: 78.3682% ( 421)
00:09:30.854 7007.311 - 7057.723: 80.4377% ( 400)
00:09:30.854 7057.723 - 7108.135: 81.9847% ( 299)
00:09:30.854 7108.135 - 7158.548: 83.3144% ( 257)
00:09:30.854 7158.548 - 7208.960: 84.5716% ( 243)
00:09:30.854 7208.960 - 7259.372: 85.7202% ( 222)
00:09:30.854 7259.372 - 7309.785: 86.6981% ( 189)
00:09:30.854 7309.785 - 7360.197: 87.4845% ( 152)
00:09:30.854 7360.197 - 7410.609: 88.0226% ( 104)
00:09:30.854 7410.609 - 7461.022: 88.4727% ( 87)
00:09:30.854 7461.022 - 7511.434: 88.7676% ( 57)
00:09:30.854 7511.434 - 7561.846: 89.0004% ( 45)
00:09:30.854 7561.846 - 7612.258: 89.2229% ( 43)
00:09:30.854 7612.258 - 7662.671: 89.4298% ( 40)
00:09:30.854 7662.671 - 7713.083: 89.6264% ( 38)
00:09:30.854 7713.083 - 7763.495: 89.8179% ( 37)
00:09:30.854 7763.495 - 7813.908: 90.0352% ( 42)
00:09:30.854 7813.908 - 7864.320: 90.2421% ( 40)
00:09:30.854 7864.320 - 7914.732: 90.4594% ( 42)
00:09:30.854 7914.732 - 7965.145: 90.6612% ( 39)
00:09:30.854 7965.145 - 8015.557: 90.8630% ( 39)
00:09:30.854 8015.557 - 8065.969: 91.0700% ( 40)
00:09:30.854 8065.969 - 8116.382: 91.2717% ( 39)
00:09:30.854 8116.382 - 8166.794: 91.4528% ( 35)
00:09:30.854 8166.794 - 8217.206: 91.6236% ( 33)
00:09:30.854 8217.206 - 8267.618: 91.7891% ( 32)
00:09:30.854 8267.618 - 8318.031: 91.9805% ( 37)
00:09:30.854 8318.031 - 8368.443: 92.1461% ( 32)
00:09:30.854 8368.443 - 8418.855: 92.3220% ( 34)
00:09:30.854 8418.855 - 8469.268: 92.5083% ( 36)
00:09:30.854 8469.268 - 8519.680: 92.6842% ( 34)
00:09:30.854 8519.680 - 8570.092: 92.8704% ( 36)
00:09:30.854 8570.092 - 8620.505: 92.9998% ( 25)
00:09:30.854 8620.505 - 8670.917: 93.1395% ( 27)
00:09:30.854 8670.917 - 8721.329: 93.3309% ( 37)
00:09:30.854 8721.329 - 8771.742: 93.5327% ( 39)
00:09:30.854 8771.742 - 8822.154: 93.7034% ( 33)
00:09:30.854 8822.154 - 8872.566: 93.8742% ( 33)
00:09:30.854 8872.566 - 8922.978: 94.0656% ( 37)
00:09:30.854 8922.978 - 8973.391: 94.2363% ( 33)
00:09:30.854 8973.391 - 9023.803: 94.4123% ( 34)
00:09:30.854 9023.803 - 9074.215: 94.6089% ( 38)
00:09:30.854 9074.215 - 9124.628: 94.7641% ( 30)
00:09:30.854 9124.628 - 9175.040: 94.9296% ( 32)
00:09:30.854 9175.040 - 9225.452: 95.0952% ( 32)
00:09:30.854 9225.452 - 9275.865: 95.2711% ( 34)
00:09:30.854 9275.865 - 9326.277: 95.4315% ( 31)
00:09:30.854 9326.277 - 9376.689: 95.5971% ( 32)
00:09:30.854 9376.689 - 9427.102: 95.7730% ( 34)
00:09:30.854 9427.102 - 9477.514: 95.9385% ( 32)
00:09:30.854 9477.514 - 9527.926: 96.0782% ( 27)
00:09:30.854 9527.926 - 9578.338: 96.2231% ( 28)
00:09:30.854 9578.338 - 9628.751: 96.3421% ( 23)
00:09:30.854 9628.751 - 9679.163: 96.4249% ( 16)
00:09:30.854 9679.163 - 9729.575: 96.5128% ( 17)
00:09:30.854 9729.575 - 9779.988: 96.6111% ( 19)
00:09:30.854 9779.988 - 9830.400: 96.7043% ( 18)
00:09:30.854 9830.400 - 9880.812: 96.8026% ( 19)
00:09:30.854 9880.812 - 9931.225: 96.8957% ( 18)
00:09:30.854 9931.225 - 9981.637: 96.9992% ( 20)
00:09:30.854 9981.637 - 10032.049: 97.0923% ( 18)
00:09:30.854 10032.049 - 10082.462: 97.1596% ( 13)
00:09:30.854 10082.462 - 10132.874: 97.2165% ( 11)
00:09:30.854 10132.874 - 10183.286: 97.2941% ( 15)
00:09:30.854 10183.286 - 10233.698: 97.3613% ( 13)
00:09:30.854 10233.698 - 10284.111: 97.4131% ( 10)
00:09:30.854 10284.111 - 10334.523: 97.4803% ( 13)
00:09:30.854 10334.523 - 10384.935: 97.5373% ( 11)
00:09:30.854 10384.935 - 10435.348: 97.5942% ( 11)
00:09:30.854 10435.348 - 10485.760: 97.6562% ( 12)
00:09:30.854 10485.760 - 10536.172: 97.7235% ( 13)
00:09:30.854 10536.172 - 10586.585: 97.7908% ( 13)
00:09:30.854 10586.585 - 10636.997: 97.8580% ( 13)
00:09:30.854 10636.997 - 10687.409: 97.9356% ( 15)
00:09:30.854 10687.409 - 10737.822: 97.9822% ( 9)
00:09:30.854 10737.822 - 10788.234: 98.0598% ( 15)
00:09:30.854 10788.234 - 10838.646: 98.1064% ( 9)
00:09:30.854 10838.646 - 10889.058: 98.1426% ( 7)
00:09:30.854 10889.058 - 10939.471: 98.1840% ( 8)
00:09:30.854 10939.471 - 10989.883: 98.2202% ( 7)
00:09:30.854 10989.883 - 11040.295: 98.2409% ( 4)
00:09:30.854 11040.295 - 11090.708: 98.2823% ( 8)
00:09:30.854 11090.708 - 11141.120: 98.3185% ( 7)
00:09:30.854 11141.120 - 11191.532: 98.3495% ( 6)
00:09:30.854 11191.532 - 11241.945: 98.3806% ( 6)
00:09:30.854 11241.945 - 11292.357: 98.4116% ( 6)
00:09:30.854 11292.357 - 11342.769: 98.4375% ( 5)
00:09:30.854 11342.769 - 11393.182: 98.4685% ( 6)
00:09:30.854 11393.182 - 11443.594: 98.5099% ( 8)
00:09:30.854 11443.594 - 11494.006: 98.5358% ( 5)
00:09:30.854 11494.006 - 11544.418: 98.5668% ( 6)
00:09:30.854 11544.418 - 11594.831: 98.5927% ( 5)
00:09:30.854 11594.831 - 11645.243: 98.6082% ( 3)
00:09:30.854 11645.243 - 11695.655: 98.6289% ( 4)
00:09:30.854 11695.655 - 11746.068: 98.6548% ( 5)
00:09:30.854 11746.068 - 11796.480: 98.6703% ( 3)
00:09:30.854 11796.480 - 11846.892: 98.6910% ( 4)
00:09:30.854 11846.892 - 11897.305: 98.7117% ( 4)
00:09:30.854 11897.305 - 11947.717: 98.7376% ( 5)
00:09:30.855 11947.717 - 11998.129: 98.7479% ( 2)
00:09:30.855 11998.129 - 12048.542: 98.7790% ( 6)
00:09:30.855 12048.542 - 12098.954: 98.7997% ( 4)
00:09:30.855 12098.954 - 12149.366: 98.8152% ( 3)
00:09:30.855 12149.366 - 12199.778: 98.8359% ( 4)
00:09:30.855 12199.778 - 12250.191: 98.8566% ( 4)
00:09:30.855 12250.191 - 12300.603: 98.8721% ( 3)
00:09:30.855 12300.603 - 12351.015: 98.8928% ( 4)
00:09:30.855 12351.015 - 12401.428: 98.9135% ( 4)
00:09:30.855 12401.428 - 12451.840: 98.9342% ( 4)
00:09:30.855 12451.840 - 12502.252: 98.9497% ( 3)
00:09:30.855 12502.252 - 12552.665: 98.9756% ( 5)
00:09:30.855 12552.665 - 12603.077: 99.0066% ( 6)
00:09:30.855 12603.077 - 12653.489: 99.0118% ( 1)
00:09:30.855 12653.489 - 12703.902: 99.0480% ( 7)
00:09:30.855 12703.902 - 12754.314: 99.0584% ( 2)
00:09:30.855 12754.314 - 12804.726: 99.0791% ( 4)
00:09:30.855 12804.726 - 12855.138: 99.1049% ( 5)
00:09:30.855 12855.138 - 12905.551: 99.1256% ( 4)
00:09:30.855 12905.551 - 13006.375: 99.1670% ( 8)
00:09:30.855 13006.375 - 13107.200: 99.2032% ( 7)
00:09:30.855 13107.200 - 13208.025: 99.2498% ( 9)
00:09:30.855 13208.025 - 13308.849: 99.2912% ( 8)
00:09:30.855 13308.849 - 13409.674: 99.3274% ( 7)
00:09:30.855 13409.674 - 13510.498: 99.3377% ( 2)
00:09:30.855 23996.258 - 24097.083: 99.3429% ( 1)
00:09:30.855 24097.083 - 24197.908: 99.3533% ( 2)
00:09:30.855 24197.908 - 24298.732: 99.3740% ( 4)
00:09:30.855 24298.732 - 24399.557: 99.3947% ( 4)
00:09:30.855 24399.557 - 24500.382: 99.4154% ( 4)
00:09:30.855 24500.382 - 24601.206: 99.4309% ( 3)
00:09:30.855 24601.206 - 24702.031: 99.4464% ( 3)
00:09:30.855 24702.031 - 24802.855: 99.4671% ( 4)
00:09:30.855 24802.855 - 24903.680: 99.4826% ( 3)
00:09:30.855 24903.680 - 25004.505: 99.5085% ( 5)
00:09:30.855 25004.505 - 25105.329: 99.5188% ( 2)
00:09:30.855 25105.329 - 25206.154: 99.5344% ( 3)
00:09:30.855 25206.154 - 25306.978: 99.5550% ( 4)
00:09:30.855 25306.978 - 25407.803: 99.5757% ( 4)
00:09:30.855 25407.803 - 25508.628: 99.5964% ( 4)
00:09:30.855 25508.628 - 25609.452: 99.6120% ( 3)
00:09:30.855 25609.452 - 25710.277: 99.6327% ( 4)
00:09:30.855 25710.277 - 25811.102: 99.6534% ( 4)
00:09:30.855 25811.102 - 26012.751: 99.6792% ( 5)
00:09:30.855 26012.751 - 26214.400: 99.7206% ( 8)
00:09:30.855 26214.400 - 26416.049: 99.7568% ( 7)
00:09:30.855 26416.049 - 26617.698: 99.7982% ( 8)
00:09:30.855 26617.698 - 26819.348: 99.8344% ( 7)
00:09:30.855 26819.348 - 27020.997: 99.8758% ( 8)
00:09:30.855 27020.997 - 27222.646: 99.9120% ( 7)
00:09:30.855 27222.646 - 27424.295: 99.9483% ( 7)
00:09:30.855 27424.295 - 27625.945: 99.9897% ( 8)
00:09:30.855 27625.945 - 27827.594: 100.0000% ( 2)
00:09:30.855
00:09:30.855 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0:
00:09:30.855 ==============================================================================
00:09:30.855 Range in us Cumulative IO count
00:09:30.855 5091.643 - 5116.849: 0.0155% ( 3)
00:09:30.855 5116.849 - 5142.055: 0.0776% ( 12)
00:09:30.855 5142.055 - 5167.262: 0.1087% ( 6)
00:09:30.855 5167.262 - 5192.468: 0.2328% ( 24)
00:09:30.855 5192.468 - 5217.674: 0.5329% ( 58)
00:09:30.855 5217.674 - 5242.880: 1.0762% ( 105)
00:09:30.855 5242.880 - 5268.086: 1.5935% ( 100)
00:09:30.855 5268.086 - 5293.292: 2.2041% ( 118)
00:09:30.855 5293.292 - 5318.498: 2.8663% ( 128)
00:09:30.855 5318.498 - 5343.705: 3.5648% ( 135)
00:09:30.855 5343.705 - 5368.911: 4.2529% ( 133)
00:09:30.855 5368.911 - 5394.117: 4.9979% ( 144)
00:09:30.855 5394.117 - 5419.323: 5.7067% ( 137)
00:09:30.855 5419.323 - 5444.529: 6.4776% ( 149)
00:09:30.855 5444.529 - 5469.735: 7.2951% ( 158)
00:09:30.855 5469.735 - 5494.942: 8.2212% ( 179)
00:09:30.855 5494.942 - 5520.148: 9.1422% ( 178)
00:09:30.855 5520.148 - 5545.354: 10.0838% ( 182)
00:09:30.855 5545.354 - 5570.560: 11.0927% ( 195)
00:09:30.855 5570.560 - 5595.766: 12.1534% ( 205)
00:09:30.855 5595.766 - 5620.972: 13.3537% ( 232)
00:09:30.855 5620.972 - 5646.178: 14.6006% ( 241)
00:09:30.855 5646.178 - 5671.385: 15.8785% ( 247)
00:09:30.855 5671.385 - 5696.591: 17.0788% ( 232)
00:09:30.855 5696.591 - 5721.797: 18.2895% ( 234)
00:09:30.855 5721.797 - 5747.003: 19.5830% ( 250)
00:09:30.855 5747.003 - 5772.209: 20.8764% ( 250)
00:09:30.855 5772.209 - 5797.415: 22.1233% ( 241)
00:09:30.855 5797.415 - 5822.622: 23.4065% ( 248)
00:09:30.855 5822.622 - 5847.828: 24.6637% ( 243)
00:09:30.855 5847.828 - 5873.034: 25.9365% ( 246)
00:09:30.855 5873.034 - 5898.240: 27.2454% ( 253)
00:09:30.855 5898.240 - 5923.446: 28.5286% ( 248)
00:09:30.855 5923.446 - 5948.652: 29.8065% ( 247)
00:09:30.855 5948.652 - 5973.858: 31.0637% ( 243)
00:09:30.855 5973.858 - 5999.065: 32.3262% ( 244)
00:09:30.855 5999.065 - 6024.271: 33.6041% ( 247)
00:09:30.855 6024.271 - 6049.477: 34.9183% ( 254)
00:09:30.855 6049.477 - 6074.683: 36.2221% ( 252)
00:09:30.855 6074.683 - 6099.889: 37.5362% ( 254)
00:09:30.855 6099.889 - 6125.095: 38.8866% ( 261)
00:09:30.855 6125.095 - 6150.302: 40.1852% ( 251)
00:09:30.855 6150.302 - 6175.508: 41.4890% ( 252)
00:09:30.855 6175.508 - 6200.714: 42.7514% ( 244)
00:09:30.855 6200.714 - 6225.920: 44.0811% ( 257)
00:09:30.855 6225.920 - 6251.126: 45.4005% ( 255)
00:09:30.855 6251.126 - 6276.332: 46.7198% ( 255)
00:09:30.855 6276.332 - 6301.538: 48.0184% ( 251)
00:09:30.855 6301.538 - 6326.745: 49.3274% ( 253)
00:09:30.855 6326.745 - 6351.951: 50.6519% ( 256)
00:09:30.855 6351.951 - 6377.157: 51.9661% ( 254)
00:09:30.855 6377.157 - 6402.363: 53.3061% ( 259)
00:09:30.855 6402.363 - 6427.569: 54.6358% ( 257)
00:09:30.855 6427.569 - 6452.775: 55.9654% ( 257)
00:09:30.855 6452.775 - 6503.188: 58.6093% ( 511)
00:09:30.855 6503.188 - 6553.600: 61.2272% ( 506)
00:09:30.855 6553.600 - 6604.012: 63.8659% ( 510)
00:09:30.855 6604.012 - 6654.425: 66.5046% ( 510)
00:09:30.855 6654.425 - 6704.837: 69.1432% ( 510)
00:09:30.855 6704.837 - 6755.249: 71.7457% ( 503)
00:09:30.855 6755.249 - 6805.662: 74.3015% ( 494)
00:09:30.855 6805.662 - 6856.074: 76.9402% ( 510)
00:09:30.855 6856.074 - 6906.486: 79.2632% ( 449)
00:09:30.855 6906.486 - 6956.898: 81.0689% ( 349)
00:09:30.855 6956.898 - 7007.311: 82.6211% ( 300)
00:09:30.855 7007.311 - 7057.723: 84.0594% ( 278)
00:09:30.855 7057.723 - 7108.135: 85.3115% ( 242)
00:09:30.855 7108.135 - 7158.548: 86.3411% ( 199)
00:09:30.855 7158.548 - 7208.960: 87.1637% ( 159)
00:09:30.855 7208.960 - 7259.372: 87.7276% ( 109)
00:09:30.855 7259.372 - 7309.785: 88.1209% ( 76)
00:09:30.855 7309.785 - 7360.197: 88.4209% ( 58)
00:09:30.855 7360.197 - 7410.609: 88.6538% ( 45)
00:09:30.855 7410.609 - 7461.022: 88.8504% ( 38)
00:09:30.855 7461.022 - 7511.434: 89.0263% ( 34)
00:09:30.855 7511.434 - 7561.846: 89.2281% ( 39)
00:09:30.855 7561.846 - 7612.258: 89.4402% ( 41)
00:09:30.855 7612.258 - 7662.671: 89.6523% ( 41)
00:09:30.855 7662.671 - 7713.083: 89.8231% ( 33)
00:09:30.855 7713.083 - 7763.495: 89.9990% ( 34)
00:09:30.855 7763.495 - 7813.908: 90.1956% ( 38)
00:09:30.855 7813.908 - 7864.320: 90.3715% ( 34)
00:09:30.855 7864.320 - 7914.732: 90.5422% ( 33)
00:09:30.855 7914.732 - 7965.145: 90.7543% ( 41)
00:09:30.855 7965.145 - 8015.557: 90.9406% ( 36)
00:09:30.855 8015.557 - 8065.969: 91.1217% ( 35)
00:09:30.855 8065.969 - 8116.382: 91.3079% ( 36)
00:09:30.855 8116.382 - 8166.794: 91.4890% ( 35)
00:09:30.855 8166.794 - 8217.206: 91.6908% ( 39)
00:09:30.855 8217.206 - 8267.618: 91.8822% ( 37)
00:09:30.855 8267.618 - 8318.031: 92.0788% ( 38)
00:09:30.855 8318.031 - 8368.443: 92.2651% ( 36)
00:09:30.855 8368.443 - 8418.855: 92.4565% ( 37)
00:09:30.855 8418.855 - 8469.268: 92.6997% ( 47)
00:09:30.855 8469.268 - 8519.680: 92.8911% ( 37)
00:09:30.855 8519.680 - 8570.092: 93.0929% ( 39)
00:09:30.855 8570.092 - 8620.505: 93.3102% ( 42)
00:09:30.855 8620.505 - 8670.917: 93.4965% ( 36)
00:09:30.855 8670.917 - 8721.329: 93.6931% ( 38)
00:09:30.855 8721.329 - 8771.742: 93.8638% ( 33)
00:09:30.855 8771.742 - 8822.154: 94.0656% ( 39)
00:09:30.855 8822.154 - 8872.566: 94.2415% ( 34)
00:09:30.855 8872.566 - 8922.978: 94.4381% ( 38)
00:09:30.855 8922.978 - 8973.391: 94.6347% ( 38)
00:09:30.856 8973.391 - 9023.803: 94.8055% ( 33)
00:09:30.856 9023.803 - 9074.215: 94.9762% ( 33)
00:09:30.856 9074.215 - 9124.628: 95.1625% ( 36)
00:09:30.856 9124.628 - 9175.040: 95.3332% ( 33)
00:09:30.856 9175.040 - 9225.452: 95.4884% ( 30)
00:09:30.856 9225.452 - 9275.865: 95.6229% ( 26)
00:09:30.856 9275.865 - 9326.277: 95.7626% ( 27)
00:09:30.856 9326.277 - 9376.689: 95.8868% ( 24)
00:09:30.856 9376.689 - 9427.102: 96.0161% ( 25)
00:09:30.856 9427.102 - 9477.514: 96.1507% ( 26)
00:09:30.856 9477.514 - 9527.926: 96.2645% ( 22)
00:09:30.856 9527.926 - 9578.338: 96.3628% ( 19)
00:09:30.856 9578.338 - 9628.751: 96.4507% ( 17)
00:09:30.856 9628.751 - 9679.163: 96.5490% ( 19)
00:09:30.856 9679.163 - 9729.575: 96.6422% ( 18)
00:09:30.856 9729.575 - 9779.988: 96.7301% ( 17)
00:09:30.856 9779.988 - 9830.400: 96.8129% ( 16)
00:09:30.856 9830.400 - 9880.812: 96.8957% ( 16)
00:09:30.856 9880.812 - 9931.225: 96.9733% ( 15)
00:09:30.856 9931.225 - 9981.637: 97.0457% ( 14)
00:09:30.856 9981.637 - 10032.049: 97.1182% ( 14)
00:09:30.856 10032.049 - 10082.462: 97.1803% ( 12)
00:09:30.856 10082.462 - 10132.874: 97.2320% ( 10)
00:09:30.856 10132.874 - 10183.286: 97.2837% ( 10)
00:09:30.856 10183.286 - 10233.698: 97.3406% ( 11)
00:09:30.856 10233.698 - 10284.111: 97.3976% ( 11)
00:09:30.856 10284.111 - 10334.523: 97.4596% ( 12)
00:09:30.856 10334.523 - 10384.935: 97.5166% ( 11)
00:09:30.856 10384.935 - 10435.348: 97.5579% ( 8)
00:09:30.856 10435.348 - 10485.760: 97.6149% ( 11)
00:09:30.856 10485.760 - 10536.172: 97.6666% ( 10)
00:09:30.856 10536.172 - 10586.585: 97.7132% ( 9)
00:09:30.856 10586.585 - 10636.997: 97.7494% ( 7)
00:09:30.856 10636.997 - 10687.409: 97.7804% ( 6)
00:09:30.856 10687.409 - 10737.822: 97.8063% ( 5)
00:09:30.856 10737.822 - 10788.234: 97.8373% ( 6)
00:09:30.856 10788.234 - 10838.646: 97.8684% ( 6)
00:09:30.856 10838.646 - 10889.058: 97.8942% ( 5)
00:09:30.856 10889.058 - 10939.471: 97.9201% ( 5)
00:09:30.856 10939.471 - 10989.883: 97.9512% ( 6)
00:09:30.856 10989.883 - 11040.295: 97.9719% ( 4)
00:09:30.856 11040.295 - 11090.708: 98.0029% ( 6)
00:09:30.856 11090.708 - 11141.120: 98.0288% ( 5)
00:09:30.856 11141.120 - 11191.532: 98.0598% ( 6)
00:09:30.856 11191.532 - 11241.945: 98.0909% ( 6)
00:09:30.856 11241.945 - 11292.357: 98.1167% ( 5)
00:09:30.856 11292.357 - 11342.769: 98.1478% ( 6)
00:09:30.856 11342.769 - 11393.182: 98.1736% ( 5)
00:09:30.856 11393.182 - 11443.594: 98.2047% ( 6)
00:09:30.856 11443.594 - 11494.006: 98.2305% ( 5)
00:09:30.856 11494.006 - 11544.418: 98.2616% ( 6)
00:09:30.856 11544.418 - 11594.831: 98.2926% ( 6)
00:09:30.856 11594.831 - 11645.243: 98.3237% ( 6)
00:09:30.856 11645.243 - 11695.655: 98.3599% ( 7)
00:09:30.856 11695.655 - 11746.068: 98.4013% ( 8)
00:09:30.856 11746.068 - 11796.480: 98.4478% ( 9)
00:09:30.856 11796.480 - 11846.892: 98.4789% ( 6)
00:09:30.856 11846.892 - 11897.305: 98.5255% ( 9)
00:09:30.856 11897.305 - 11947.717: 98.5513% ( 5)
00:09:30.856 11947.717 - 11998.129: 98.5720% ( 4)
00:09:30.856 11998.129 - 12048.542: 98.5927% ( 4)
00:09:30.856 12048.542 - 12098.954: 98.6134% ( 4)
00:09:30.856 12098.954 - 12149.366: 98.6393% ( 5)
00:09:30.856 12149.366 - 12199.778: 98.6651% ( 5)
00:09:30.856 12199.778 - 12250.191: 98.6858% ( 4)
00:09:30.856 12250.191 - 12300.603: 98.7117% ( 5)
00:09:30.856 12300.603 - 12351.015: 98.7376% ( 5)
00:09:30.856 12351.015 - 12401.428: 98.7635% ( 5)
00:09:30.856 12401.428 - 12451.840: 98.7841% ( 4)
00:09:30.856 12451.840 - 12502.252: 98.8100% ( 5)
00:09:30.856 12502.252 - 12552.665: 98.8359% ( 5)
00:09:30.856 12552.665 - 12603.077: 98.8618% ( 5)
00:09:30.856 12603.077 - 12653.489: 98.8825% ( 4)
00:09:30.856 12653.489 - 12703.902: 98.8980% ( 3)
00:09:30.856 12703.902 - 12754.314: 98.9135% ( 3)
00:09:30.856 12754.314 - 12804.726: 98.9187% ( 1)
00:09:30.856 12804.726 - 12855.138: 98.9290% ( 2)
00:09:30.856 12855.138 - 12905.551: 98.9394% ( 2)
00:09:30.856 12905.551 - 13006.375: 98.9549% ( 3)
00:09:30.856 13006.375 - 13107.200: 98.9808% ( 5)
00:09:30.856 13107.200 - 13208.025: 99.0014% ( 4)
00:09:30.856 13208.025 - 13308.849: 99.0221% ( 4)
00:09:30.856 13308.849 - 13409.674: 99.0428% ( 4)
00:09:30.856 13409.674 - 13510.498: 99.0635% ( 4)
00:09:30.856 13510.498 - 13611.323: 99.0842% ( 4)
00:09:30.856 13611.323 - 13712.148: 99.1049% ( 4)
00:09:30.856 13712.148 - 13812.972: 99.1308% ( 5)
00:09:30.856 13812.972 - 13913.797: 99.1515% ( 4)
00:09:30.856 13913.797 - 14014.622: 99.1722% ( 4)
00:09:30.856 14014.622 - 14115.446: 99.1929% ( 4)
00:09:30.856 14115.446 - 14216.271: 99.2136% ( 4)
00:09:30.856 14216.271 - 14317.095: 99.2343% ( 4)
00:09:30.856 14317.095 - 14417.920: 99.2550% ( 4)
00:09:30.856 14417.920 - 14518.745: 99.2757% ( 4)
00:09:30.856 14518.745 - 14619.569: 99.3015% ( 5)
00:09:30.856 14619.569 - 14720.394: 99.3222% ( 4)
00:09:30.856 14720.394 - 14821.218: 99.3377% ( 3)
00:09:30.856 23794.609 - 23895.434: 99.3584% ( 4)
00:09:30.856 23895.434 - 23996.258: 99.3791% ( 4)
00:09:30.856 23996.258 - 24097.083: 99.3998% ( 4)
00:09:30.856 24097.083 - 24197.908: 99.4102% ( 2)
00:09:30.856 24197.908 - 24298.732: 99.4257% ( 3)
00:09:30.856 24298.732 - 24399.557: 99.4412% ( 3)
00:09:30.856 24399.557 - 24500.382: 99.4567% ( 3)
00:09:30.856 24500.382 - 24601.206: 99.4774% ( 4)
00:09:30.856 24601.206 - 24702.031: 99.4981% ( 4)
00:09:30.856 24702.031 - 24802.855: 99.5188% ( 4)
00:09:30.856 24802.855 - 24903.680: 99.5344% ( 3)
00:09:30.856 24903.680 - 25004.505: 99.5550% ( 4)
00:09:30.856 25004.505 - 25105.329: 99.5757% ( 4)
00:09:30.856 25105.329 - 25206.154: 99.5964% ( 4)
00:09:30.856 25206.154 - 25306.978: 99.6171% ( 4)
00:09:30.856 25306.978 - 25407.803: 99.6378% ( 4)
00:09:30.856 25407.803 - 25508.628: 99.6534% ( 3)
00:09:30.856 25508.628 - 25609.452: 99.6740% ( 4)
00:09:30.856 25609.452 - 25710.277: 99.6947% ( 4)
00:09:30.856 25710.277 - 25811.102: 99.7103% ( 3)
00:09:30.856 25811.102 - 26012.751: 99.7517% ( 8)
00:09:30.856 26012.751 - 26214.400: 99.7879% ( 7)
00:09:30.856 26214.400 - 26416.049: 99.8293% ( 8)
00:09:30.856 26416.049 - 26617.698: 99.8655% ( 7)
00:09:30.856 26617.698 - 26819.348: 99.9069% ( 8)
00:09:30.856 26819.348 - 27020.997: 99.9483% ( 8)
00:09:30.856 27020.997 - 27222.646: 99.9897% ( 8)
00:09:30.856 27222.646 - 27424.295: 100.0000% ( 2)
00:09:30.856
00:09:30.856 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0:
00:09:30.856 ==============================================================================
00:09:30.856 Range in us Cumulative IO count
00:09:30.856 5091.643 - 5116.849: 0.0207% ( 4)
00:09:30.856 5116.849 - 5142.055: 0.0569% ( 7)
00:09:30.856 5142.055 - 5167.262: 0.0880% ( 6)
00:09:30.856 5167.262 - 5192.468: 0.1242% ( 7)
00:09:30.856 5192.468 - 5217.674: 0.3622% ( 46)
00:09:30.856 5217.674 - 5242.880: 0.8433% ( 93)
00:09:30.856 5242.880 - 5268.086: 1.5004% ( 127)
00:09:30.856 5268.086 - 5293.292: 2.2144% ( 138)
00:09:30.856 5293.292 - 5318.498: 2.8818% ( 129)
00:09:30.856 5318.498 - 5343.705: 3.6683% ( 152)
00:09:30.856 5343.705 - 5368.911: 4.3719% ( 136)
00:09:30.856 5368.911 - 5394.117: 5.1376% ( 148)
00:09:30.856 5394.117 - 5419.323: 5.8568% ( 139)
00:09:30.856 5419.323 - 5444.529: 6.6225% ( 148)
00:09:30.856 5444.529 - 5469.735: 7.4089% ( 152)
00:09:30.856 5469.735 - 5494.942: 8.2937% ( 171)
00:09:30.856 5494.942 - 5520.148: 9.2560% ( 186)
00:09:30.856 5520.148 - 5545.354: 10.2908% ( 200)
00:09:30.856 5545.354 - 5570.560: 11.4031% ( 215)
00:09:30.856 5570.560 - 5595.766: 12.4586% ( 204)
00:09:30.856 5595.766 - 5620.972: 13.5296% ( 207)
00:09:30.857 5620.972 - 5646.178: 14.7041% ( 227)
00:09:30.857 5646.178 - 5671.385: 15.8682% ( 225)
00:09:30.857 5671.385 - 5696.591: 17.0530% ( 229)
00:09:30.857 5696.591 - 5721.797: 18.2585% ( 233)
00:09:30.857 5721.797 - 5747.003: 19.5054% ( 241)
00:09:30.857 5747.003 - 5772.209: 20.7575% ( 242)
00:09:30.857 5772.209 - 5797.415: 22.1078% ( 261)
00:09:30.857 5797.415 - 5822.622: 23.4116% ( 252)
00:09:30.857 5822.622 - 5847.828: 24.6482% ( 239)
00:09:30.857 5847.828 - 5873.034: 25.9209% ( 246)
00:09:30.857 5873.034 - 5898.240: 27.2041% ( 248)
00:09:30.857 5898.240 - 5923.446: 28.5596% ( 262)
00:09:30.857 5923.446 - 5948.652: 29.8479% ( 249)
00:09:30.857 5948.652 - 5973.858: 31.1362% ( 249)
00:09:30.857 5973.858 - 5999.065: 32.3831% ( 241)
00:09:30.857 5999.065 - 6024.271: 33.6455% ( 244)
00:09:30.857 6024.271 - 6049.477: 34.9234% ( 247)
00:09:30.857 6049.477 - 6074.683: 36.2117% ( 249)
00:09:30.857 6074.683 - 6099.889: 37.5000% ( 249)
00:09:30.857 6099.889 - 6125.095: 38.7986% ( 251)
00:09:30.857 6125.095 - 6150.302: 40.0973% ( 251)
00:09:30.857 6150.302 - 6175.508: 41.4580% ( 263)
00:09:30.857 6175.508 - 6200.714: 42.7618% ( 252)
00:09:30.857 6200.714 - 6225.920: 44.0760% ( 254)
00:09:30.857 6225.920 - 6251.126: 45.4160% ( 259)
00:09:30.857 6251.126 - 6276.332: 46.7353% ( 255)
00:09:30.857 6276.332 - 6301.538: 48.0702% ( 258)
00:09:30.857 6301.538 - 6326.745: 49.4050% ( 258)
00:09:30.857 6326.745 - 6351.951: 50.7657% ( 263)
00:09:30.857 6351.951 - 6377.157: 52.1161% ( 261)
00:09:30.857 6377.157 - 6402.363: 53.5130% ( 270)
00:09:30.857 6402.363 - 6427.569: 54.8582% ( 260)
00:09:30.857 6427.569 - 6452.775: 56.2293% ( 265)
00:09:30.857 6452.775 - 6503.188: 58.8473% ( 506)
00:09:30.857 6503.188 - 6553.600: 61.5273% ( 518)
00:09:30.857 6553.600 - 6604.012: 64.2384% ( 524)
00:09:30.857 6604.012 - 6654.425: 66.8978% ( 514)
00:09:30.857 6654.425 - 6704.837: 69.5054% ( 504)
00:09:30.857 6704.837 - 6755.249: 72.1026% ( 502)
00:09:30.857 6755.249 - 6805.662: 74.7258% ( 507)
00:09:30.857 6805.662 - 6856.074: 77.2868% ( 495)
00:09:30.857 6856.074 - 6906.486: 79.5271% ( 433)
00:09:30.857 6906.486 - 6956.898: 81.4983% ( 381)
00:09:30.857 6956.898 - 7007.311: 83.1954% ( 328)
00:09:30.857 7007.311 - 7057.723: 84.6544% ( 282)
00:09:30.857 7057.723 - 7108.135: 85.9220% ( 245)
00:09:30.857 7108.135 - 7158.548: 86.9878% ( 206)
00:09:30.857 7158.548 - 7208.960: 87.9036% ( 177)
00:09:30.857 7208.960 - 7259.372: 88.5399% ( 123)
00:09:30.857 7259.372 - 7309.785: 88.9694% ( 83)
00:09:30.857 7309.785 - 7360.197: 89.3264% ( 69)
00:09:30.857 7360.197 - 7410.609: 89.6161% ( 56)
00:09:30.857 7410.609 - 7461.022: 89.8024% ( 36)
00:09:30.857 7461.022 - 7511.434: 89.9834% ( 35)
00:09:30.857 7511.434 - 7561.846: 90.1542% ( 33)
00:09:30.857 7561.846 - 7612.258: 90.3197% ( 32)
00:09:30.857 7612.258 - 7662.671: 90.4646% ( 28)
00:09:30.857 7662.671 - 7713.083: 90.6095% ( 28)
00:09:30.857 7713.083 - 7763.495: 90.7647% ( 30)
00:09:30.857 7763.495 - 7813.908: 90.9096% ( 28)
00:09:30.857 7813.908 - 7864.320: 91.0648% ( 30)
00:09:30.857 7864.320 - 7914.732: 91.2303% ( 32)
00:09:30.857 7914.732 - 7965.145: 91.4166% ( 36)
00:09:30.857 7965.145 - 8015.557: 91.5822% ( 32)
00:09:30.857 8015.557 - 8065.969: 91.7477% ( 32)
00:09:30.857 8065.969 - 8116.382: 91.9081% ( 31)
00:09:30.857 8116.382 - 8166.794: 92.0633% ( 30)
00:09:30.857 8166.794 - 8217.206: 92.2030% ( 27)
00:09:30.857 8217.206 - 8267.618: 92.3375% ( 26)
00:09:30.857 8267.618 - 8318.031: 92.4876% ( 29)
00:09:30.857 8318.031 - 8368.443: 92.6428% ( 30)
00:09:30.857 8368.443 - 8418.855: 92.8291% ( 36)
00:09:30.857 8418.855 - 8469.268: 93.0257% ( 38)
00:09:30.857 8469.268 - 8519.680: 93.2067% ( 35)
00:09:30.857 8519.680 - 8570.092: 93.3568% ( 29)
00:09:30.857 8570.092 - 8620.505: 93.5224% ( 32)
00:09:30.857 8620.505 - 8670.917: 93.6931% ( 33)
00:09:30.857 8670.917 - 8721.329: 93.8638% ( 33)
00:09:30.857 8721.329 - 8771.742: 94.0346% ( 33)
00:09:30.857 8771.742 - 8822.154: 94.2001% ( 32)
00:09:30.857 8822.154 - 8872.566: 94.3553% ( 30)
00:09:30.857 8872.566 - 8922.978: 94.5054% ( 29)
00:09:30.857 8922.978 - 8973.391: 94.6502% ( 28)
00:09:30.857 8973.391 - 9023.803: 94.8003% ( 29)
00:09:30.857 9023.803 - 9074.215: 94.9452% ( 28)
00:09:30.857 9074.215 - 9124.628: 95.0849% ( 27)
00:09:30.857 9124.628 - 9175.040: 95.2401% ( 30)
00:09:30.857 9175.040 - 9225.452: 95.3591% ( 23)
00:09:30.857 9225.452 - 9275.865: 95.4832% ( 24)
00:09:30.857 9275.865 - 9326.277: 95.6074% ( 24)
00:09:30.857 9326.277 - 9376.689: 95.7419% ( 26)
00:09:30.857 9376.689 - 9427.102: 95.8764% ( 26)
00:09:30.857 9427.102 - 9477.514: 96.0110% ( 26)
00:09:30.857 9477.514 - 9527.926: 96.1248% ( 22)
00:09:30.857 9527.926 - 9578.338: 96.2283% ( 20)
00:09:30.857 9578.338 - 9628.751: 96.3369% ( 21)
00:09:30.857 9628.751 - 9679.163: 96.4404% ( 20)
00:09:30.857 9679.163 - 9729.575: 96.5284% ( 17)
00:09:30.857 9729.575 - 9779.988: 96.6267% ( 19)
00:09:30.857 9779.988 - 9830.400: 96.7146% ( 17)
00:09:30.857 9830.400 - 9880.812: 96.8233% ( 21)
00:09:30.857 9880.812 - 9931.225: 96.9578% ( 26)
00:09:30.857 9931.225 - 9981.637: 97.0457% ( 17)
00:09:30.857 9981.637 - 10032.049: 97.1130% ( 13)
00:09:30.857 10032.049 - 10082.462: 97.1699% ( 11)
00:09:30.857 10082.462 - 10132.874: 97.2165% ( 9)
00:09:30.857 10132.874 - 10183.286: 97.2630% ( 9)
00:09:30.857 10183.286 - 10233.698: 97.3096% ( 9)
00:09:30.857 10233.698 - 10284.111: 97.3562% ( 9)
00:09:30.857 10284.111 - 10334.523: 97.4079% ( 10)
00:09:30.857 10334.523 - 10384.935: 97.4545% ( 9)
00:09:30.857 10384.935 - 10435.348: 97.5062% ( 10)
00:09:30.857 10435.348 - 10485.760: 97.5528% ( 9)
00:09:30.857 10485.760 - 10536.172: 97.6045% ( 10)
00:09:30.857 10536.172 - 10586.585: 97.6511% ( 9)
00:09:30.857 10586.585 - 10636.997: 97.6873% ( 7)
00:09:30.857 10636.997 - 10687.409: 97.7390% ( 10)
00:09:30.857 10687.409 - 10737.822: 97.7908% ( 10)
00:09:30.857 10737.822 - 10788.234: 97.8322% ( 8)
00:09:30.857 10788.234 - 10838.646: 97.8839% ( 10)
00:09:30.857 10838.646 - 10889.058: 97.9356% ( 10)
00:09:30.857 10889.058 - 10939.471: 97.9822% ( 9)
00:09:30.857 10939.471 - 10989.883: 98.0288% ( 9)
00:09:30.857 10989.883 - 11040.295: 98.0805% ( 10)
00:09:30.857 11040.295 - 11090.708: 98.1271% ( 9)
00:09:30.857 11090.708 - 11141.120: 98.1788% ( 10)
00:09:30.857 11141.120 - 11191.532: 98.2254% ( 9)
00:09:30.857 11191.532 - 11241.945: 98.2719% ( 9)
00:09:30.857 11241.945 - 11292.357: 98.3185% ( 9)
00:09:30.857 11292.357 - 11342.769: 98.3495% ( 6)
00:09:30.857 11342.769 - 11393.182: 98.3806% ( 6)
00:09:30.857 11393.182 - 11443.594: 98.4168% ( 7)
00:09:30.857 11443.594 - 11494.006: 98.4530% ( 7)
00:09:30.857 11494.006 - 11544.418: 98.4841% ( 6)
00:09:30.857 11544.418 - 11594.831: 98.5099% ( 5)
00:09:30.857 11594.831 - 11645.243: 98.5255% ( 3)
00:09:30.857 11645.243 - 11695.655: 98.5410% ( 3)
00:09:30.857 11695.655 - 11746.068: 98.5565% ( 3)
00:09:30.857 11746.068 - 11796.480: 98.5668% ( 2)
00:09:30.857 11796.480 - 11846.892: 98.5824% ( 3)
00:09:30.857 11846.892 - 11897.305: 98.5979% ( 3)
00:09:30.857 11897.305 - 11947.717: 98.6082% ( 2)
00:09:30.857 11947.717 - 11998.129: 98.6238% ( 3)
00:09:30.857 11998.129 - 12048.542: 98.6393% ( 3)
00:09:30.857 12048.542 - 12098.954: 98.6548% ( 3)
00:09:30.857 12098.954 - 12149.366: 98.6651% ( 2)
00:09:30.857 12149.366 - 12199.778: 98.6755% ( 2)
00:09:30.857 12502.252 - 12552.665: 98.6962% ( 4)
00:09:30.857 12552.665 - 12603.077: 98.7014% ( 1)
00:09:30.857 12603.077 - 12653.489: 98.7169% ( 3)
00:09:30.857 12653.489 - 12703.902: 98.7221% ( 1)
00:09:30.857 12703.902 - 12754.314: 98.7324% ( 2)
00:09:30.857 12754.314 - 12804.726: 98.7428% ( 2)
00:09:30.857 12804.726 - 12855.138: 98.7531% ( 2)
00:09:30.857 12855.138 - 12905.551: 98.7583% ( 1)
00:09:30.857 12905.551 - 13006.375: 98.7790% ( 4)
00:09:30.857 13006.375 - 13107.200: 98.7997% ( 4)
00:09:30.857 13107.200 - 13208.025: 98.8204% ( 4)
00:09:30.857 13208.025 - 13308.849: 98.8411% ( 4)
00:09:30.857 13308.849 - 13409.674: 98.8618% ( 4)
00:09:30.857 13409.674 - 13510.498: 98.8980% ( 7)
00:09:30.857 13510.498 - 13611.323: 98.9342% ( 7)
00:09:30.857 13611.323 - 13712.148: 98.9704% ( 7)
00:09:30.857 13712.148 - 13812.972: 99.0118% ( 8)
00:09:30.857 13812.972 - 13913.797: 99.0532% ( 8)
00:09:30.857 13913.797 - 14014.622: 99.0946% ( 8)
00:09:30.857 14014.622 - 14115.446: 99.1308% ( 7)
00:09:30.857 14115.446 - 14216.271: 99.1722% ( 8)
00:09:30.857 14216.271 - 14317.095: 99.2084% ( 7)
00:09:30.857 14317.095 - 14417.920: 99.2498% ( 8)
00:09:30.857 14417.920 - 14518.745: 99.2912% ( 8)
00:09:30.857 14518.745 - 14619.569: 99.3326% ( 8)
00:09:30.857 14619.569 - 14720.394: 99.3377% ( 1)
00:09:30.858 24601.206 - 24702.031: 99.3584% ( 4)
00:09:30.858 24702.031 - 24802.855: 99.3740% ( 3)
00:09:30.858 24802.855 - 24903.680: 99.3947% ( 4)
00:09:30.858 24903.680 - 25004.505: 99.4154% ( 4)
00:09:30.858 25004.505 - 25105.329: 99.4361% ( 4)
00:09:30.858 25105.329 - 25206.154: 99.4516% ( 3)
00:09:30.858 25206.154 - 25306.978: 99.4723% ( 4)
00:09:30.858 25306.978 - 25407.803: 99.4930% ( 4)
00:09:30.858 25407.803 - 25508.628: 99.5085% ( 3)
00:09:30.858 25508.628 - 25609.452: 99.5240% ( 3)
00:09:30.858 25609.452 - 25710.277: 99.5395% ( 3)
00:09:30.858 25710.277 - 25811.102: 99.5602% ( 4)
00:09:30.858 25811.102 - 26012.751: 99.6016% ( 8)
00:09:30.858 26012.751 - 26214.400: 99.6378% ( 7)
00:09:30.858 26214.400 - 26416.049: 99.6740% ( 7)
00:09:30.858 26416.049 - 26617.698: 99.7103% ( 7)
00:09:30.858 26617.698 - 26819.348: 99.7465% ( 7)
00:09:30.858 26819.348 - 27020.997: 99.7879% ( 8)
00:09:30.858 27020.997 - 27222.646: 99.8241% ( 7)
00:09:30.858 27222.646 - 27424.295: 99.8655% ( 8)
00:09:30.858 27424.295 - 27625.945: 99.9017% ( 7)
00:09:30.858 27625.945 - 27827.594: 99.9431% ( 8)
00:09:30.858 27827.594 - 28029.243: 99.9845% ( 8)
00:09:30.858 28029.243 - 28230.892: 100.0000% ( 3)
00:09:30.858
00:09:30.858 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0:
00:09:30.858 ==============================================================================
00:09:30.858 Range in us Cumulative IO count
00:09:30.858 5091.643 - 5116.849: 0.0103% ( 2)
00:09:30.858 5116.849 - 5142.055: 0.0362% ( 5)
00:09:30.858 5142.055 - 5167.262: 0.0673% ( 6)
00:09:30.858 5167.262 - 5192.468: 0.1604% ( 18)
00:09:30.858 5192.468 - 5217.674: 0.3156% ( 30)
00:09:30.858 5217.674 - 5242.880: 0.7399% ( 82)
00:09:30.858 5242.880 - 5268.086: 1.3297% ( 114)
00:09:30.858 5268.086 - 5293.292: 1.9505% ( 120)
00:09:30.858 5293.292 - 5318.498: 2.7059% ( 146)
00:09:30.858 5318.498 - 5343.705: 3.4096% ( 136)
00:09:30.858 5343.705 - 5368.911: 4.0925% ( 132)
00:09:30.858 5368.911 - 5394.117: 4.7185% ( 121)
00:09:30.858 5394.117 - 5419.323: 5.3911% ( 130)
00:09:30.858 5419.323 - 5444.529: 6.1207% ( 141)
00:09:30.858 5444.529 - 5469.735: 6.9329% ( 157)
00:09:30.858 5469.735 - 5494.942: 7.8280% ( 173)
00:09:30.858 5494.942 - 5520.148: 8.8007% ( 188)
00:09:30.858 5520.148 - 5545.354: 9.8406% ( 201)
00:09:30.858 5545.354 - 5570.560: 10.9530% ( 215)
00:09:30.858 5570.560 - 5595.766: 12.0395% ( 210)
00:09:30.858 5595.766 - 5620.972: 13.1881% ( 222)
00:09:30.858 5620.972 - 5646.178: 14.3781% ( 230)
00:09:30.858 5646.178 - 5671.385: 15.6198% ( 240)
00:09:30.858 5671.385 - 5696.591: 16.7943% ( 227)
00:09:30.858 5696.591 - 5721.797: 17.9946% ( 232)
00:09:30.858 5721.797 - 5747.003: 19.2415% ( 241)
00:09:30.858 5747.003 - 5772.209: 20.4988% ( 243)
00:09:30.858 5772.209 - 5797.415: 21.7043% ( 233)
00:09:30.858 5797.415 - 5822.622: 22.9874% ( 248)
00:09:30.858 5822.622 - 5847.828: 24.2653% ( 247)
00:09:30.858 5847.828 - 5873.034: 25.5226% ( 243)
00:09:30.858 5873.034 - 5898.240: 26.7695% ( 241)
00:09:30.858 5898.240 - 5923.446: 28.0474% ( 247)
00:09:30.858 5923.446 - 5948.652: 29.2995% ( 242)
00:09:30.858 5948.652 - 5973.858: 30.5774% ( 247)
00:09:30.858 5973.858 - 5999.065: 31.8657% ( 249)
00:09:30.858 5999.065 - 6024.271: 33.1385% ( 246)
00:09:30.858 6024.271 - 6049.477: 34.3750% ( 239)
00:09:30.858 6049.477 - 6074.683: 35.6426% ( 245)
00:09:30.858 6074.683 - 6099.889: 36.9671% ( 256)
00:09:30.858 6099.889 - 6125.095: 38.2761% ( 253)
00:09:30.858 6125.095 - 6150.302: 39.5902% ( 254)
00:09:30.858 6150.302 - 6175.508: 40.8940% ( 252)
00:09:30.858 6175.508 - 6200.714: 42.2030% ( 253)
00:09:30.858 6200.714 - 6225.920: 43.5224% ( 255)
00:09:30.858 6225.920 - 6251.126: 44.8313% ( 253)
00:09:30.858 6251.126 - 6276.332: 46.1196% ( 249)
00:09:30.858 6276.332 - 6301.538: 47.4441% ( 256)
00:09:30.858 6301.538 - 6326.745: 48.7686% ( 256)
00:09:30.858 6326.745 - 6351.951: 50.0569% ( 249)
00:09:30.858 6351.951 - 6377.157: 51.4280% ( 265)
00:09:30.858 6377.157 - 6402.363: 52.7318% ( 252)
00:09:30.858 6402.363 - 6427.569: 54.0718% ( 259)
00:09:30.858 6427.569 - 6452.775: 55.3911% ( 255)
00:09:30.858 6452.775 - 6503.188: 58.0867% ( 521)
00:09:30.858 6503.188 - 6553.600: 60.7616% ( 517)
00:09:30.858 6553.600 - 6604.012: 63.4313% ( 516)
00:09:30.858 6604.012 - 6654.425: 66.1062% ( 517)
00:09:30.858 6654.425 - 6704.837: 68.7759% ( 516)
00:09:30.858 6704.837 - 6755.249: 71.5232% ( 531)
00:09:30.858 6755.249 - 6805.662: 74.1825% ( 514)
00:09:30.858 6805.662 - 6856.074: 76.8057% ( 507)
00:09:30.858 6856.074 - 6906.486: 79.0666% ( 437)
00:09:30.858 6906.486 - 6956.898: 80.9810% ( 370)
00:09:30.858 6956.898 - 7007.311: 82.6159% ( 316)
00:09:30.858 7007.311 - 7057.723: 84.1215% ( 291)
00:09:30.858 7057.723 - 7108.135: 85.4460% ( 256)
00:09:30.858 7108.135 - 7158.548: 86.5118% ( 206)
00:09:30.858 7158.548 - 7208.960: 87.3862% ( 169)
00:09:30.858 7208.960 - 7259.372: 87.9967% ( 118)
00:09:30.858 7259.372 - 7309.785: 88.4830% ( 94)
00:09:30.858 7309.785 - 7360.197: 88.8969% ( 80)
00:09:30.858 7360.197 - 7410.609: 89.2281% ( 64)
00:09:30.858 7410.609 - 7461.022: 89.5126% ( 55)
00:09:30.858 7461.022 - 7511.434: 89.7196% ( 40)
00:09:30.858 7511.434 - 7561.846: 89.9627% ( 47)
00:09:30.858 7561.846 - 7612.258: 90.1749% ( 41)
00:09:30.858 7612.258 - 7662.671: 90.4077% ( 45)
00:09:30.858 7662.671 - 7713.083: 90.6250% ( 42)
00:09:30.858 7713.083 - 7763.495: 90.8164% ( 37)
00:09:30.858 7763.495 - 7813.908: 91.0027% ( 36)
00:09:30.858 7813.908 - 7864.320: 91.2045% ( 39)
00:09:30.858 7864.320 - 7914.732: 91.4062% ( 39)
00:09:30.858 7914.732 - 7965.145: 91.6184% ( 41)
00:09:30.858 7965.145 - 8015.557: 91.8098% ( 37)
00:09:30.858 8015.557 - 8065.969: 91.9754% ( 32)
00:09:30.858 8065.969 - 8116.382: 92.1306% ( 30)
00:09:30.858 8116.382 - 8166.794: 92.3013% ( 33)
00:09:30.858 8166.794 - 8217.206: 92.4669% ( 32)
00:09:30.858 8217.206 - 8267.618: 92.6583% ( 37)
00:09:30.858 8267.618 - 8318.031: 92.8342% ( 34)
00:09:30.858 8318.031 - 8368.443: 93.0464% ( 41)
00:09:30.858 8368.443 - 8418.855: 93.2637% ( 42)
00:09:30.858 8418.855 - 8469.268: 93.4706% ( 40)
00:09:30.858 8469.268 - 8519.680: 93.6569% ( 36)
00:09:30.858 8519.680 - 8570.092: 93.8224% ( 32)
00:09:30.858 8570.092 - 8620.505: 93.9518% ( 25)
00:09:30.858 8620.505 - 8670.917: 94.0966% ( 28)
00:09:30.858 8670.917 - 8721.329: 94.2363% ( 27)
00:09:30.858 8721.329 - 8771.742: 94.3760% ( 27)
00:09:30.858 8771.742 - 8822.154: 94.5209% ( 28)
00:09:30.858 8822.154 - 8872.566: 94.6658% ( 28)
00:09:30.858 8872.566 - 8922.978: 94.8106% ( 28)
00:09:30.858 8922.978 - 8973.391: 94.9348% ( 24)
00:09:30.859 8973.391 - 9023.803: 95.0435% ( 21)
00:09:30.859 9023.803 - 9074.215: 95.1780% ( 26)
00:09:30.859 9074.215 - 9124.628: 95.3022% ( 24)
00:09:30.859 9124.628 - 9175.040: 95.4263% ( 24)
00:09:30.859 9175.040 - 9225.452: 95.5401% ( 22)
00:09:30.859 9225.452 - 9275.865: 95.6488% ( 21)
00:09:30.859 9275.865 - 9326.277: 95.7626% ( 22)
00:09:30.859 9326.277 - 9376.689: 95.8764% ( 22)
00:09:30.859 9376.689 - 9427.102: 95.9851% ( 21)
00:09:30.859 9427.102 - 9477.514: 96.0938% ( 21)
00:09:30.859 9477.514 - 9527.926: 96.1869% ( 18)
00:09:30.859 9527.926 - 9578.338: 96.3111% ( 24)
00:09:30.859 9578.338 - 9628.751: 96.4249% ( 22)
00:09:30.859 9628.751 - 9679.163: 96.5077% ( 16)
00:09:30.859 9679.163 - 9729.575: 96.5904% ( 16)
00:09:30.859 9729.575 - 9779.988: 96.6680% ( 15)
00:09:30.859 9779.988 - 9830.400: 96.7508% ( 16)
00:09:30.859 9830.400 - 9880.812: 96.8595% ( 21)
00:09:30.859 9880.812 - 9931.225: 96.9785% ( 23)
00:09:30.859 9931.225 - 9981.637: 97.0716% ( 18)
00:09:30.859 9981.637 - 10032.049: 97.1699% ( 19)
00:09:30.859 10032.049 - 10082.462: 97.2579% ( 17)
00:09:30.859 10082.462 - 10132.874: 97.3510% ( 18)
00:09:30.859 10132.874 - 10183.286: 97.4338% ( 16)
00:09:30.859 10183.286 - 10233.698: 97.4959% ( 12)
00:09:30.859 10233.698 - 10284.111: 97.5631% ( 13)
00:09:30.859 10284.111 - 10334.523: 97.6304% ( 13)
00:09:30.859 10334.523 - 10384.935: 97.6873% ( 11)
00:09:30.859 10384.935 - 10435.348: 97.7390% ( 10)
00:09:30.859 10435.348 - 10485.760: 97.7908% ( 10)
00:09:30.859 10485.760 - 10536.172: 97.8425% ( 10)
00:09:30.859 10536.172 - 10586.585: 97.8994% ( 11)
00:09:30.859 10586.585 - 10636.997: 97.9460% ( 9)
00:09:30.859 10636.997 - 10687.409: 97.9977% ( 10)
00:09:30.859 10687.409 - 10737.822: 98.0443% ( 9)
00:09:30.859 10737.822 - 10788.234: 98.0960% ( 10)
00:09:30.859 10788.234 - 10838.646: 98.1478% ( 10)
00:09:30.859 10838.646 - 10889.058: 98.1943% ( 9)
00:09:30.859 10889.058 - 10939.471: 98.2409% ( 9)
00:09:30.859 10939.471 - 10989.883: 98.2771% ( 7)
00:09:30.859 10989.883 - 11040.295: 98.3133% ( 7)
00:09:30.859 11040.295 - 11090.708: 98.3495% ( 7)
00:09:30.859 11090.708 - 11141.120: 98.3858% ( 7)
00:09:30.859 11141.120 - 11191.532: 98.4272% ( 8)
00:09:30.859 11191.532 - 11241.945: 98.4634% ( 7)
00:09:30.859 11241.945 - 11292.357: 98.4944% ( 6)
00:09:30.859 11292.357 - 11342.769: 98.5306% ( 7)
00:09:30.859 11342.769 - 11393.182: 98.5720% ( 8)
00:09:30.859 11393.182 - 11443.594: 98.6341% ( 12)
00:09:30.859 11443.594 - 11494.006: 98.6755% ( 8)
00:09:30.859 11494.006 - 11544.418: 98.7065% ( 6)
00:09:30.859 11544.418 - 11594.831: 98.7324% ( 5)
00:09:30.859 11594.831 - 11645.243: 98.7376% ( 1)
00:09:30.859 11645.243 - 11695.655: 98.7479% ( 2)
00:09:30.859 11695.655 - 11746.068: 98.7635% ( 3)
00:09:30.859 11746.068 - 11796.480: 98.7738% ( 2)
00:09:30.859 11796.480 - 11846.892: 98.7841% ( 2)
00:09:30.859 11846.892 - 11897.305: 98.7945% ( 2)
00:09:30.859 11897.305 - 11947.717: 98.8048% ( 2)
00:09:30.859 11947.717 - 11998.129: 98.8152% ( 2)
00:09:30.859 11998.129 - 12048.542: 98.8255% ( 2)
00:09:30.859 12048.542 - 12098.954: 98.8307% ( 1)
00:09:30.859 12098.954 - 12149.366: 98.8411% ( 2)
00:09:30.859 12149.366 - 12199.778: 98.8566% ( 3)
00:09:30.859 12199.778 - 12250.191: 98.8669% ( 2)
00:09:30.859 12250.191 - 12300.603: 98.8773% ( 2)
00:09:30.859 12300.603 - 12351.015: 98.8876% ( 2)
00:09:30.859 12351.015 - 12401.428: 98.8980% ( 2)
00:09:30.859 12401.428 - 12451.840: 98.9083% ( 2)
00:09:30.859 12451.840 - 12502.252: 98.9187% ( 2)
00:09:30.859 12502.252 - 12552.665: 98.9290% ( 2)
00:09:30.859 12552.665 - 12603.077: 98.9394% ( 2)
00:09:30.859 12603.077 - 12653.489: 98.9497% ( 2)
00:09:30.859 12653.489 - 12703.902: 98.9652% ( 3)
00:09:30.859 12703.902 - 12754.314: 98.9756% ( 2)
00:09:30.859 12754.314 - 12804.726: 98.9859% ( 2)
00:09:30.859 12804.726 - 12855.138: 98.9963% ( 2)
00:09:30.859 12855.138 - 12905.551: 99.0066% ( 2)
00:09:30.859 12905.551 - 13006.375: 99.0273% ( 4)
00:09:30.859 13006.375 - 13107.200: 99.0480% ( 4)
00:09:30.859 13107.200 - 13208.025: 99.0687% ( 4)
00:09:30.859 13208.025 - 13308.849: 99.0894% ( 4)
00:09:30.859 13308.849 - 13409.674: 99.1153% ( 5)
00:09:30.859 13409.674 - 13510.498: 99.1308% ( 3)
00:09:30.859 13510.498 - 13611.323: 99.1515% ( 4)
00:09:30.859 13611.323 - 13712.148: 99.1722% ( 4)
00:09:30.859 13712.148 - 13812.972: 99.1929% ( 4)
00:09:30.859 13812.972 - 13913.797: 99.2188% ( 5)
00:09:30.859 13913.797 - 14014.622: 99.2343% ( 3)
00:09:30.859 14014.622 - 14115.446: 99.2498% ( 3)
00:09:30.859 14115.446 - 14216.271: 99.2653% ( 3)
00:09:30.859
14216.271 - 14317.095: 99.2912% ( 5) 00:09:30.859 14317.095 - 14417.920: 99.3119% ( 4) 00:09:30.859 14417.920 - 14518.745: 99.3326% ( 4) 00:09:30.859 14518.745 - 14619.569: 99.3377% ( 1) 00:09:30.859 23996.258 - 24097.083: 99.3429% ( 1) 00:09:30.859 24097.083 - 24197.908: 99.3636% ( 4) 00:09:30.859 24197.908 - 24298.732: 99.3895% ( 5) 00:09:30.859 24298.732 - 24399.557: 99.4102% ( 4) 00:09:30.859 24399.557 - 24500.382: 99.4257% ( 3) 00:09:30.859 24500.382 - 24601.206: 99.4464% ( 4) 00:09:30.859 24601.206 - 24702.031: 99.4619% ( 3) 00:09:30.859 24702.031 - 24802.855: 99.4826% ( 4) 00:09:30.859 24802.855 - 24903.680: 99.5033% ( 4) 00:09:30.859 24903.680 - 25004.505: 99.5240% ( 4) 00:09:30.859 25004.505 - 25105.329: 99.5447% ( 4) 00:09:30.859 25105.329 - 25206.154: 99.5602% ( 3) 00:09:30.859 25206.154 - 25306.978: 99.5809% ( 4) 00:09:30.859 25306.978 - 25407.803: 99.5964% ( 3) 00:09:30.859 25407.803 - 25508.628: 99.6120% ( 3) 00:09:30.859 25508.628 - 25609.452: 99.6275% ( 3) 00:09:30.859 25609.452 - 25710.277: 99.6430% ( 3) 00:09:30.859 25710.277 - 25811.102: 99.6637% ( 4) 00:09:30.859 25811.102 - 26012.751: 99.7051% ( 8) 00:09:30.859 26012.751 - 26214.400: 99.7413% ( 7) 00:09:30.859 26214.400 - 26416.049: 99.7775% ( 7) 00:09:30.859 26416.049 - 26617.698: 99.8189% ( 8) 00:09:30.859 26617.698 - 26819.348: 99.8603% ( 8) 00:09:30.859 26819.348 - 27020.997: 99.8965% ( 7) 00:09:30.859 27020.997 - 27222.646: 99.9327% ( 7) 00:09:30.859 27222.646 - 27424.295: 99.9741% ( 8) 00:09:30.859 27424.295 - 27625.945: 100.0000% ( 5) 00:09:30.859 00:09:30.859 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:30.859 ============================================================================== 00:09:30.859 Range in us Cumulative IO count 00:09:30.859 5142.055 - 5167.262: 0.0414% ( 8) 00:09:30.859 5167.262 - 5192.468: 0.1811% ( 27) 00:09:30.859 5192.468 - 5217.674: 0.2949% ( 22) 00:09:30.859 5217.674 - 5242.880: 0.6933% ( 77) 00:09:30.859 5242.880 - 5268.086: 1.2986% ( 117) 00:09:30.859 5268.086 - 5293.292: 2.0488% ( 145) 00:09:30.859 5293.292 - 5318.498: 2.6490% ( 116) 00:09:30.859 5318.498 - 5343.705: 3.2078% ( 108) 00:09:30.859 5343.705 - 5368.911: 3.8597% ( 126) 00:09:30.859 5368.911 - 5394.117: 4.6720% ( 157) 00:09:30.859 5394.117 - 5419.323: 5.4532% ( 151) 00:09:30.859 5419.323 - 5444.529: 6.1620% ( 137) 00:09:30.859 5444.529 - 5469.735: 6.8812% ( 139) 00:09:30.859 5469.735 - 5494.942: 7.7608% ( 170) 00:09:30.859 5494.942 - 5520.148: 8.6507% ( 172) 00:09:30.859 5520.148 - 5545.354: 9.7010% ( 203) 00:09:30.859 5545.354 - 5570.560: 10.7978% ( 212) 00:09:30.859 5570.560 - 5595.766: 11.9102% ( 215) 00:09:30.859 5595.766 - 5620.972: 13.0226% ( 215) 00:09:30.859 5620.972 - 5646.178: 14.1349% ( 215) 00:09:30.859 5646.178 - 5671.385: 15.3611% ( 237) 00:09:30.859 5671.385 - 5696.591: 16.6132% ( 242) 00:09:30.859 5696.591 - 5721.797: 17.8498% ( 239) 00:09:30.859 5721.797 - 5747.003: 19.0760% ( 237) 00:09:30.859 5747.003 - 5772.209: 20.2659% ( 230) 00:09:30.859 5772.209 - 5797.415: 21.4714% ( 233) 00:09:30.859 5797.415 - 5822.622: 22.7235% ( 242) 00:09:30.859 5822.622 - 5847.828: 24.0014% ( 247) 00:09:30.859 5847.828 - 5873.034: 25.2949% ( 250) 00:09:30.859 5873.034 - 5898.240: 26.5728% ( 247) 00:09:30.859 5898.240 - 5923.446: 27.8301% ( 243) 00:09:30.859 5923.446 - 5948.652: 29.0718% ( 240) 00:09:30.859 5948.652 - 5973.858: 30.3498% ( 247) 00:09:30.859 5973.858 - 5999.065: 31.6277% ( 247) 00:09:30.859 5999.065 - 6024.271: 32.9160% ( 249) 00:09:30.859 6024.271 - 6049.477: 34.1887% ( 246) 00:09:30.859 
6049.477 - 6074.683: 35.4977% ( 253) 00:09:30.860 6074.683 - 6099.889: 36.8377% ( 259) 00:09:30.860 6099.889 - 6125.095: 38.1209% ( 248) 00:09:30.860 6125.095 - 6150.302: 39.4609% ( 259) 00:09:30.860 6150.302 - 6175.508: 40.7906% ( 257) 00:09:30.860 6175.508 - 6200.714: 42.1254% ( 258) 00:09:30.860 6200.714 - 6225.920: 43.4603% ( 258) 00:09:30.860 6225.920 - 6251.126: 44.7589% ( 251) 00:09:30.860 6251.126 - 6276.332: 46.0731% ( 254) 00:09:30.860 6276.332 - 6301.538: 47.3820% ( 253) 00:09:30.860 6301.538 - 6326.745: 48.7065% ( 256) 00:09:30.860 6326.745 - 6351.951: 50.0259% ( 255) 00:09:30.860 6351.951 - 6377.157: 51.3711% ( 260) 00:09:30.860 6377.157 - 6402.363: 52.6697% ( 251) 00:09:30.860 6402.363 - 6427.569: 53.9994% ( 257) 00:09:30.860 6427.569 - 6452.775: 55.3239% ( 256) 00:09:30.860 6452.775 - 6503.188: 57.9832% ( 514) 00:09:30.860 6503.188 - 6553.600: 60.6685% ( 519) 00:09:30.860 6553.600 - 6604.012: 63.2502% ( 499) 00:09:30.860 6604.012 - 6654.425: 65.9768% ( 527) 00:09:30.860 6654.425 - 6704.837: 68.5948% ( 506) 00:09:30.860 6704.837 - 6755.249: 71.3059% ( 524) 00:09:30.860 6755.249 - 6805.662: 73.9859% ( 518) 00:09:30.860 6805.662 - 6856.074: 76.5573% ( 497) 00:09:30.860 6856.074 - 6906.486: 78.8028% ( 434) 00:09:30.860 6906.486 - 6956.898: 80.6084% ( 349) 00:09:30.860 6956.898 - 7007.311: 82.2175% ( 311) 00:09:30.860 7007.311 - 7057.723: 83.6869% ( 284) 00:09:30.860 7057.723 - 7108.135: 84.9752% ( 249) 00:09:30.860 7108.135 - 7158.548: 86.1031% ( 218) 00:09:30.860 7158.548 - 7208.960: 87.0861% ( 190) 00:09:30.860 7208.960 - 7259.372: 87.7328% ( 125) 00:09:30.860 7259.372 - 7309.785: 88.2295% ( 96) 00:09:30.860 7309.785 - 7360.197: 88.6279% ( 77) 00:09:30.860 7360.197 - 7410.609: 88.9590% ( 64) 00:09:30.860 7410.609 - 7461.022: 89.2591% ( 58) 00:09:30.860 7461.022 - 7511.434: 89.4868% ( 44) 00:09:30.860 7511.434 - 7561.846: 89.6937% ( 40) 00:09:30.860 7561.846 - 7612.258: 89.9317% ( 46) 00:09:30.860 7612.258 - 7662.671: 90.1697% ( 46) 00:09:30.860 7662.671 - 7713.083: 90.4284% ( 50) 00:09:30.860 7713.083 - 7763.495: 90.6612% ( 45) 00:09:30.860 7763.495 - 7813.908: 90.9044% ( 47) 00:09:30.860 7813.908 - 7864.320: 91.1217% ( 42) 00:09:30.860 7864.320 - 7914.732: 91.3079% ( 36) 00:09:30.860 7914.732 - 7965.145: 91.4787% ( 33) 00:09:30.860 7965.145 - 8015.557: 91.6494% ( 33) 00:09:30.860 8015.557 - 8065.969: 91.8409% ( 37) 00:09:30.860 8065.969 - 8116.382: 92.0375% ( 38) 00:09:30.860 8116.382 - 8166.794: 92.1978% ( 31) 00:09:30.860 8166.794 - 8217.206: 92.3634% ( 32) 00:09:30.860 8217.206 - 8267.618: 92.5186% ( 30) 00:09:30.860 8267.618 - 8318.031: 92.6842% ( 32) 00:09:30.860 8318.031 - 8368.443: 92.8704% ( 36) 00:09:30.860 8368.443 - 8418.855: 93.0412% ( 33) 00:09:30.860 8418.855 - 8469.268: 93.2171% ( 34) 00:09:30.860 8469.268 - 8519.680: 93.3982% ( 35) 00:09:30.860 8519.680 - 8570.092: 93.5637% ( 32) 00:09:30.860 8570.092 - 8620.505: 93.7190% ( 30) 00:09:30.860 8620.505 - 8670.917: 93.8742% ( 30) 00:09:30.860 8670.917 - 8721.329: 94.0242% ( 29) 00:09:30.860 8721.329 - 8771.742: 94.1743% ( 29) 00:09:30.860 8771.742 - 8822.154: 94.3346% ( 31) 00:09:30.860 8822.154 - 8872.566: 94.4692% ( 26) 00:09:30.860 8872.566 - 8922.978: 94.5985% ( 25) 00:09:30.860 8922.978 - 8973.391: 94.7175% ( 23) 00:09:30.860 8973.391 - 9023.803: 94.8417% ( 24) 00:09:30.860 9023.803 - 9074.215: 94.9710% ( 25) 00:09:30.860 9074.215 - 9124.628: 95.1055% ( 26) 00:09:30.860 9124.628 - 9175.040: 95.2452% ( 27) 00:09:30.860 9175.040 - 9225.452: 95.3746% ( 25) 00:09:30.860 9225.452 - 9275.865: 95.5246% ( 29) 
00:09:30.860 9275.865 - 9326.277: 95.6643% ( 27) 00:09:30.860 9326.277 - 9376.689: 95.7678% ( 20) 00:09:30.860 9376.689 - 9427.102: 95.8764% ( 21) 00:09:30.860 9427.102 - 9477.514: 95.9851% ( 21) 00:09:30.860 9477.514 - 9527.926: 96.0938% ( 21) 00:09:30.860 9527.926 - 9578.338: 96.2076% ( 22) 00:09:30.860 9578.338 - 9628.751: 96.3162% ( 21) 00:09:30.860 9628.751 - 9679.163: 96.4197% ( 20) 00:09:30.860 9679.163 - 9729.575: 96.5387% ( 23) 00:09:30.860 9729.575 - 9779.988: 96.6370% ( 19) 00:09:30.860 9779.988 - 9830.400: 96.7508% ( 22) 00:09:30.860 9830.400 - 9880.812: 96.8491% ( 19) 00:09:30.860 9880.812 - 9931.225: 96.9681% ( 23) 00:09:30.860 9931.225 - 9981.637: 97.0716% ( 20) 00:09:30.860 9981.637 - 10032.049: 97.1854% ( 22) 00:09:30.860 10032.049 - 10082.462: 97.2941% ( 21) 00:09:30.860 10082.462 - 10132.874: 97.3976% ( 20) 00:09:30.860 10132.874 - 10183.286: 97.5217% ( 24) 00:09:30.860 10183.286 - 10233.698: 97.6304% ( 21) 00:09:30.860 10233.698 - 10284.111: 97.7235% ( 18) 00:09:30.860 10284.111 - 10334.523: 97.8166% ( 18) 00:09:30.860 10334.523 - 10384.935: 97.9046% ( 17) 00:09:30.860 10384.935 - 10435.348: 97.9977% ( 18) 00:09:30.860 10435.348 - 10485.760: 98.0857% ( 17) 00:09:30.860 10485.760 - 10536.172: 98.1581% ( 14) 00:09:30.860 10536.172 - 10586.585: 98.2357% ( 15) 00:09:30.860 10586.585 - 10636.997: 98.2875% ( 10) 00:09:30.860 10636.997 - 10687.409: 98.3340% ( 9) 00:09:30.860 10687.409 - 10737.822: 98.3806% ( 9) 00:09:30.860 10737.822 - 10788.234: 98.4323% ( 10) 00:09:30.860 10788.234 - 10838.646: 98.4737% ( 8) 00:09:30.860 10838.646 - 10889.058: 98.5203% ( 9) 00:09:30.860 10889.058 - 10939.471: 98.5668% ( 9) 00:09:30.860 10939.471 - 10989.883: 98.6186% ( 10) 00:09:30.860 10989.883 - 11040.295: 98.6651% ( 9) 00:09:30.860 11040.295 - 11090.708: 98.7117% ( 9) 00:09:30.860 11090.708 - 11141.120: 98.7583% ( 9) 00:09:30.860 11141.120 - 11191.532: 98.7997% ( 8) 00:09:30.860 11191.532 - 11241.945: 98.8462% ( 9) 00:09:30.860 11241.945 - 11292.357: 98.8928% ( 9) 00:09:30.860 11292.357 - 11342.769: 98.9238% ( 6) 00:09:30.860 11342.769 - 11393.182: 98.9497% ( 5) 00:09:30.860 11393.182 - 11443.594: 98.9652% ( 3) 00:09:30.860 11443.594 - 11494.006: 98.9756% ( 2) 00:09:30.860 11494.006 - 11544.418: 98.9859% ( 2) 00:09:30.860 11544.418 - 11594.831: 98.9963% ( 2) 00:09:30.860 11594.831 - 11645.243: 99.0118% ( 3) 00:09:30.860 11645.243 - 11695.655: 99.0221% ( 2) 00:09:30.860 11695.655 - 11746.068: 99.0325% ( 2) 00:09:30.860 11746.068 - 11796.480: 99.0428% ( 2) 00:09:30.860 11796.480 - 11846.892: 99.0480% ( 1) 00:09:30.860 11846.892 - 11897.305: 99.0584% ( 2) 00:09:30.860 11897.305 - 11947.717: 99.0687% ( 2) 00:09:30.860 11947.717 - 11998.129: 99.0791% ( 2) 00:09:30.860 11998.129 - 12048.542: 99.0894% ( 2) 00:09:30.860 12048.542 - 12098.954: 99.1049% ( 3) 00:09:30.860 12098.954 - 12149.366: 99.1153% ( 2) 00:09:30.860 12149.366 - 12199.778: 99.1256% ( 2) 00:09:30.860 12199.778 - 12250.191: 99.1360% ( 2) 00:09:30.860 12250.191 - 12300.603: 99.1463% ( 2) 00:09:30.860 12300.603 - 12351.015: 99.1567% ( 2) 00:09:30.860 12351.015 - 12401.428: 99.1670% ( 2) 00:09:30.860 12401.428 - 12451.840: 99.1774% ( 2) 00:09:30.860 12451.840 - 12502.252: 99.1929% ( 3) 00:09:30.860 12502.252 - 12552.665: 99.2032% ( 2) 00:09:30.860 12552.665 - 12603.077: 99.2084% ( 1) 00:09:30.860 12603.077 - 12653.489: 99.2188% ( 2) 00:09:30.860 12653.489 - 12703.902: 99.2239% ( 1) 00:09:30.860 12703.902 - 12754.314: 99.2343% ( 2) 00:09:30.860 12754.314 - 12804.726: 99.2498% ( 3) 00:09:30.860 12804.726 - 12855.138: 99.2601% ( 2) 
00:09:30.860 12855.138 - 12905.551: 99.2705% ( 2) 00:09:30.860 12905.551 - 13006.375: 99.2912% ( 4) 00:09:30.860 13006.375 - 13107.200: 99.3067% ( 3) 00:09:30.860 13107.200 - 13208.025: 99.3274% ( 4) 00:09:30.860 13208.025 - 13308.849: 99.3377% ( 2) 00:09:30.860 23492.135 - 23592.960: 99.3533% ( 3) 00:09:30.860 23592.960 - 23693.785: 99.3688% ( 3) 00:09:30.860 23693.785 - 23794.609: 99.3895% ( 4) 00:09:30.860 23794.609 - 23895.434: 99.4102% ( 4) 00:09:30.860 23895.434 - 23996.258: 99.4257% ( 3) 00:09:30.860 23996.258 - 24097.083: 99.4412% ( 3) 00:09:30.860 24097.083 - 24197.908: 99.4619% ( 4) 00:09:30.860 24197.908 - 24298.732: 99.4826% ( 4) 00:09:30.860 24298.732 - 24399.557: 99.5033% ( 4) 00:09:30.860 24399.557 - 24500.382: 99.5240% ( 4) 00:09:30.860 24500.382 - 24601.206: 99.5447% ( 4) 00:09:30.861 24601.206 - 24702.031: 99.5602% ( 3) 00:09:30.861 24702.031 - 24802.855: 99.5809% ( 4) 00:09:30.861 24802.855 - 24903.680: 99.6016% ( 4) 00:09:30.861 24903.680 - 25004.505: 99.6171% ( 3) 00:09:30.861 25004.505 - 25105.329: 99.6327% ( 3) 00:09:30.861 25105.329 - 25206.154: 99.6482% ( 3) 00:09:30.861 25206.154 - 25306.978: 99.6689% ( 4) 00:09:30.861 25306.978 - 25407.803: 99.6896% ( 4) 00:09:30.861 25407.803 - 25508.628: 99.7103% ( 4) 00:09:30.861 25508.628 - 25609.452: 99.7258% ( 3) 00:09:30.861 25609.452 - 25710.277: 99.7413% ( 3) 00:09:30.861 25710.277 - 25811.102: 99.7620% ( 4) 00:09:30.861 25811.102 - 26012.751: 99.8034% ( 8) 00:09:30.861 26012.751 - 26214.400: 99.8344% ( 6) 00:09:30.861 26214.400 - 26416.049: 99.8758% ( 8) 00:09:30.861 26416.049 - 26617.698: 99.9120% ( 7) 00:09:30.861 26617.698 - 26819.348: 99.9534% ( 8) 00:09:30.861 26819.348 - 27020.997: 99.9897% ( 7) 00:09:30.861 27020.997 - 27222.646: 100.0000% ( 2) 00:09:30.861 00:09:30.861 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:30.861 ============================================================================== 00:09:30.861 Range in us Cumulative IO count 00:09:30.861 5116.849 - 5142.055: 0.0103% ( 2) 00:09:30.861 5142.055 - 5167.262: 0.0310% ( 4) 00:09:30.861 5167.262 - 5192.468: 0.1345% ( 20) 00:09:30.861 5192.468 - 5217.674: 0.3622% ( 44) 00:09:30.861 5217.674 - 5242.880: 0.6985% ( 65) 00:09:30.861 5242.880 - 5268.086: 1.2521% ( 107) 00:09:30.861 5268.086 - 5293.292: 1.9764% ( 140) 00:09:30.861 5293.292 - 5318.498: 2.7318% ( 146) 00:09:30.861 5318.498 - 5343.705: 3.4147% ( 132) 00:09:30.861 5343.705 - 5368.911: 4.1391% ( 140) 00:09:30.861 5368.911 - 5394.117: 4.7185% ( 112) 00:09:30.861 5394.117 - 5419.323: 5.2618% ( 105) 00:09:30.861 5419.323 - 5444.529: 6.0482% ( 152) 00:09:30.861 5444.529 - 5469.735: 6.9278% ( 170) 00:09:30.861 5469.735 - 5494.942: 7.9470% ( 197) 00:09:30.861 5494.942 - 5520.148: 8.8887% ( 182) 00:09:30.861 5520.148 - 5545.354: 9.9131% ( 198) 00:09:30.861 5545.354 - 5570.560: 10.8702% ( 185) 00:09:30.861 5570.560 - 5595.766: 11.8843% ( 196) 00:09:30.861 5595.766 - 5620.972: 12.9656% ( 209) 00:09:30.861 5620.972 - 5646.178: 14.1660% ( 232) 00:09:30.861 5646.178 - 5671.385: 15.4180% ( 242) 00:09:30.861 5671.385 - 5696.591: 16.6701% ( 242) 00:09:30.861 5696.591 - 5721.797: 17.8860% ( 235) 00:09:30.861 5721.797 - 5747.003: 19.0760% ( 230) 00:09:30.861 5747.003 - 5772.209: 20.2815% ( 233) 00:09:30.861 5772.209 - 5797.415: 21.5387% ( 243) 00:09:30.861 5797.415 - 5822.622: 22.7959% ( 243) 00:09:30.861 5822.622 - 5847.828: 24.0221% ( 237) 00:09:30.861 5847.828 - 5873.034: 25.2949% ( 246) 00:09:30.861 5873.034 - 5898.240: 26.5625% ( 245) 00:09:30.861 5898.240 - 5923.446: 27.8818% ( 255) 
00:09:30.861 5923.446 - 5948.652: 29.2012% ( 255) 00:09:30.861 5948.652 - 5973.858: 30.4377% ( 239) 00:09:30.861 5973.858 - 5999.065: 31.7156% ( 247) 00:09:30.861 5999.065 - 6024.271: 32.9936% ( 247) 00:09:30.861 6024.271 - 6049.477: 34.2715% ( 247) 00:09:30.861 6049.477 - 6074.683: 35.5132% ( 240) 00:09:30.861 6074.683 - 6099.889: 36.7912% ( 247) 00:09:30.861 6099.889 - 6125.095: 38.1053% ( 254) 00:09:30.861 6125.095 - 6150.302: 39.3936% ( 249) 00:09:30.861 6150.302 - 6175.508: 40.7285% ( 258) 00:09:30.861 6175.508 - 6200.714: 42.0426% ( 254) 00:09:30.861 6200.714 - 6225.920: 43.3464% ( 252) 00:09:30.861 6225.920 - 6251.126: 44.6502% ( 252) 00:09:30.861 6251.126 - 6276.332: 45.9489% ( 251) 00:09:30.861 6276.332 - 6301.538: 47.2268% ( 247) 00:09:30.861 6301.538 - 6326.745: 48.5565% ( 257) 00:09:30.861 6326.745 - 6351.951: 49.8810% ( 256) 00:09:30.861 6351.951 - 6377.157: 51.2107% ( 257) 00:09:30.861 6377.157 - 6402.363: 52.5766% ( 264) 00:09:30.861 6402.363 - 6427.569: 53.9321% ( 262) 00:09:30.861 6427.569 - 6452.775: 55.2514% ( 255) 00:09:30.861 6452.775 - 6503.188: 57.9367% ( 519) 00:09:30.861 6503.188 - 6553.600: 60.6012% ( 515) 00:09:30.861 6553.600 - 6604.012: 63.2192% ( 506) 00:09:30.861 6604.012 - 6654.425: 65.8061% ( 500) 00:09:30.861 6654.425 - 6704.837: 68.4758% ( 516) 00:09:30.861 6704.837 - 6755.249: 71.1403% ( 515) 00:09:30.861 6755.249 - 6805.662: 73.8152% ( 517) 00:09:30.861 6805.662 - 6856.074: 76.3918% ( 498) 00:09:30.861 6856.074 - 6906.486: 78.7045% ( 447) 00:09:30.861 6906.486 - 6956.898: 80.5101% ( 349) 00:09:30.861 6956.898 - 7007.311: 82.0985% ( 307) 00:09:30.861 7007.311 - 7057.723: 83.5368% ( 278) 00:09:30.861 7057.723 - 7108.135: 84.8613% ( 256) 00:09:30.861 7108.135 - 7158.548: 85.9789% ( 216) 00:09:30.861 7158.548 - 7208.960: 86.8481% ( 168) 00:09:30.861 7208.960 - 7259.372: 87.5466% ( 135) 00:09:30.861 7259.372 - 7309.785: 88.0226% ( 92) 00:09:30.861 7309.785 - 7360.197: 88.3744% ( 68) 00:09:30.861 7360.197 - 7410.609: 88.6693% ( 57) 00:09:30.861 7410.609 - 7461.022: 88.9332% ( 51) 00:09:30.861 7461.022 - 7511.434: 89.1608% ( 44) 00:09:30.861 7511.434 - 7561.846: 89.3729% ( 41) 00:09:30.861 7561.846 - 7612.258: 89.5799% ( 40) 00:09:30.861 7612.258 - 7662.671: 89.8024% ( 43) 00:09:30.861 7662.671 - 7713.083: 90.0197% ( 42) 00:09:30.861 7713.083 - 7763.495: 90.2318% ( 41) 00:09:30.861 7763.495 - 7813.908: 90.4491% ( 42) 00:09:30.861 7813.908 - 7864.320: 90.6457% ( 38) 00:09:30.861 7864.320 - 7914.732: 90.8113% ( 32) 00:09:30.861 7914.732 - 7965.145: 90.9716% ( 31) 00:09:30.861 7965.145 - 8015.557: 91.1217% ( 29) 00:09:30.861 8015.557 - 8065.969: 91.2717% ( 29) 00:09:30.861 8065.969 - 8116.382: 91.3959% ( 24) 00:09:30.861 8116.382 - 8166.794: 91.5201% ( 24) 00:09:30.861 8166.794 - 8217.206: 91.6701% ( 29) 00:09:30.861 8217.206 - 8267.618: 91.7995% ( 25) 00:09:30.861 8267.618 - 8318.031: 92.0219% ( 43) 00:09:30.861 8318.031 - 8368.443: 92.2030% ( 35) 00:09:30.861 8368.443 - 8418.855: 92.3634% ( 31) 00:09:30.861 8418.855 - 8469.268: 92.5238% ( 31) 00:09:30.861 8469.268 - 8519.680: 92.7256% ( 39) 00:09:30.861 8519.680 - 8570.092: 92.9377% ( 41) 00:09:30.861 8570.092 - 8620.505: 93.1654% ( 44) 00:09:30.861 8620.505 - 8670.917: 93.3878% ( 43) 00:09:30.861 8670.917 - 8721.329: 93.6155% ( 44) 00:09:30.861 8721.329 - 8771.742: 93.8173% ( 39) 00:09:30.861 8771.742 - 8822.154: 93.9880% ( 33) 00:09:30.861 8822.154 - 8872.566: 94.1794% ( 37) 00:09:30.861 8872.566 - 8922.978: 94.3398% ( 31) 00:09:30.861 8922.978 - 8973.391: 94.5261% ( 36) 00:09:30.861 8973.391 - 9023.803: 
94.6968% ( 33) 00:09:30.861 9023.803 - 9074.215: 94.8675% ( 33) 00:09:30.861 9074.215 - 9124.628: 95.0279% ( 31) 00:09:30.861 9124.628 - 9175.040: 95.1832% ( 30) 00:09:30.861 9175.040 - 9225.452: 95.3384% ( 30) 00:09:30.861 9225.452 - 9275.865: 95.4936% ( 30) 00:09:30.861 9275.865 - 9326.277: 95.6281% ( 26) 00:09:30.861 9326.277 - 9376.689: 95.7678% ( 27) 00:09:30.861 9376.689 - 9427.102: 95.9230% ( 30) 00:09:30.861 9427.102 - 9477.514: 96.0834% ( 31) 00:09:30.861 9477.514 - 9527.926: 96.2283% ( 28) 00:09:30.861 9527.926 - 9578.338: 96.3731% ( 28) 00:09:30.861 9578.338 - 9628.751: 96.5232% ( 29) 00:09:30.861 9628.751 - 9679.163: 96.6887% ( 32) 00:09:30.861 9679.163 - 9729.575: 96.8388% ( 29) 00:09:30.861 9729.575 - 9779.988: 96.9940% ( 30) 00:09:30.861 9779.988 - 9830.400: 97.1544% ( 31) 00:09:30.861 9830.400 - 9880.812: 97.3044% ( 29) 00:09:30.861 9880.812 - 9931.225: 97.4596% ( 30) 00:09:30.861 9931.225 - 9981.637: 97.5890% ( 25) 00:09:30.861 9981.637 - 10032.049: 97.7235% ( 26) 00:09:30.861 10032.049 - 10082.462: 97.8425% ( 23) 00:09:30.861 10082.462 - 10132.874: 97.9719% ( 25) 00:09:30.861 10132.874 - 10183.286: 98.0960% ( 24) 00:09:30.861 10183.286 - 10233.698: 98.2047% ( 21) 00:09:30.861 10233.698 - 10284.111: 98.3030% ( 19) 00:09:30.861 10284.111 - 10334.523: 98.3599% ( 11) 00:09:30.861 10334.523 - 10384.935: 98.4168% ( 11) 00:09:30.861 10384.935 - 10435.348: 98.4737% ( 11) 00:09:30.861 10435.348 - 10485.760: 98.5255% ( 10) 00:09:30.861 10485.760 - 10536.172: 98.5772% ( 10) 00:09:30.861 10536.172 - 10586.585: 98.6341% ( 11) 00:09:30.861 10586.585 - 10636.997: 98.6858% ( 10) 00:09:30.861 10636.997 - 10687.409: 98.7221% ( 7) 00:09:30.861 10687.409 - 10737.822: 98.7635% ( 8) 00:09:30.862 10737.822 - 10788.234: 98.8048% ( 8) 00:09:30.862 10788.234 - 10838.646: 98.8411% ( 7) 00:09:30.862 10838.646 - 10889.058: 98.8773% ( 7) 00:09:30.862 10889.058 - 10939.471: 98.9187% ( 8) 00:09:30.862 10939.471 - 10989.883: 98.9549% ( 7) 00:09:30.862 10989.883 - 11040.295: 99.0014% ( 9) 00:09:30.862 11040.295 - 11090.708: 99.0428% ( 8) 00:09:30.862 11090.708 - 11141.120: 99.0791% ( 7) 00:09:30.862 11141.120 - 11191.532: 99.1153% ( 7) 00:09:30.862 11191.532 - 11241.945: 99.1463% ( 6) 00:09:30.862 11241.945 - 11292.357: 99.1670% ( 4) 00:09:30.862 11292.357 - 11342.769: 99.1877% ( 4) 00:09:30.862 11342.769 - 11393.182: 99.2084% ( 4) 00:09:30.862 11393.182 - 11443.594: 99.2291% ( 4) 00:09:30.862 11443.594 - 11494.006: 99.2498% ( 4) 00:09:30.862 11494.006 - 11544.418: 99.2705% ( 4) 00:09:30.862 11544.418 - 11594.831: 99.2808% ( 2) 00:09:30.862 11594.831 - 11645.243: 99.2912% ( 2) 00:09:30.862 11645.243 - 11695.655: 99.3015% ( 2) 00:09:30.862 11695.655 - 11746.068: 99.3119% ( 2) 00:09:30.862 11746.068 - 11796.480: 99.3222% ( 2) 00:09:30.862 11796.480 - 11846.892: 99.3274% ( 1) 00:09:30.862 11846.892 - 11897.305: 99.3377% ( 2) 00:09:30.862 22887.188 - 22988.012: 99.3429% ( 1) 00:09:30.862 22988.012 - 23088.837: 99.3584% ( 3) 00:09:30.862 23088.837 - 23189.662: 99.3791% ( 4) 00:09:30.862 23189.662 - 23290.486: 99.3998% ( 4) 00:09:30.862 23290.486 - 23391.311: 99.4205% ( 4) 00:09:30.862 23391.311 - 23492.135: 99.4361% ( 3) 00:09:30.862 23492.135 - 23592.960: 99.4567% ( 4) 00:09:30.862 23592.960 - 23693.785: 99.4774% ( 4) 00:09:30.862 23693.785 - 23794.609: 99.4981% ( 4) 00:09:30.862 23794.609 - 23895.434: 99.5137% ( 3) 00:09:30.862 23895.434 - 23996.258: 99.5292% ( 3) 00:09:30.862 23996.258 - 24097.083: 99.5499% ( 4) 00:09:30.862 24097.083 - 24197.908: 99.5706% ( 4) 00:09:30.862 24197.908 - 24298.732: 99.5913% ( 
4) 00:09:30.862 24298.732 - 24399.557: 99.6068% ( 3) 00:09:30.862 24399.557 - 24500.382: 99.6275% ( 4) 00:09:30.862 24500.382 - 24601.206: 99.6482% ( 4) 00:09:30.862 24601.206 - 24702.031: 99.6689% ( 4) 00:09:30.862 24702.031 - 24802.855: 99.6896% ( 4) 00:09:30.862 24802.855 - 24903.680: 99.7103% ( 4) 00:09:30.862 24903.680 - 25004.505: 99.7258% ( 3) 00:09:30.862 25004.505 - 25105.329: 99.7465% ( 4) 00:09:30.862 25105.329 - 25206.154: 99.7672% ( 4) 00:09:30.862 25206.154 - 25306.978: 99.7879% ( 4) 00:09:30.862 25306.978 - 25407.803: 99.8034% ( 3) 00:09:30.862 25407.803 - 25508.628: 99.8241% ( 4) 00:09:30.862 25508.628 - 25609.452: 99.8448% ( 4) 00:09:30.862 25609.452 - 25710.277: 99.8603% ( 3) 00:09:30.862 25710.277 - 25811.102: 99.8810% ( 4) 00:09:30.862 25811.102 - 26012.751: 99.9224% ( 8) 00:09:30.862 26012.751 - 26214.400: 99.9586% ( 7) 00:09:30.862 26214.400 - 26416.049: 100.0000% ( 8) 00:09:30.862 00:09:30.862 23:43:01 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:09:32.236 Initializing NVMe Controllers 00:09:32.236 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:32.236 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:32.236 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:32.236 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:32.236 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:32.236 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:32.236 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:32.236 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:32.236 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:32.236 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:32.236 Initialization complete. Launching workers. 
00:09:32.236 ========================================================
00:09:32.236 Latency(us)
00:09:32.237 Device Information : IOPS MiB/s Average min max
00:09:32.237 PCIE (0000:00:06.0) NSID 1 from core 0: 18883.93 221.30 6775.50 5106.34 29998.83
00:09:32.237 PCIE (0000:00:07.0) NSID 1 from core 0: 18883.93 221.30 6770.49 5308.30 28792.95
00:09:32.237 PCIE (0000:00:09.0) NSID 1 from core 0: 18883.93 221.30 6765.31 5160.55 27566.92
00:09:32.237 PCIE (0000:00:08.0) NSID 1 from core 0: 18883.93 221.30 6760.10 5229.23 26348.93
00:09:32.237 PCIE (0000:00:08.0) NSID 2 from core 0: 18883.93 221.30 6754.94 5214.25 25190.57
00:09:32.237 PCIE (0000:00:08.0) NSID 3 from core 0: 19011.52 222.79 6704.27 5270.49 17533.81
00:09:32.237 ========================================================
00:09:32.237 Total : 113431.18 1329.27 6755.04 5106.34 29998.83
00:09:32.237 
00:09:32.237 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0:
00:09:32.237 =================================================================================
00:09:32.237 1.00000% : 5444.529us
00:09:32.237 10.00000% : 5822.622us
00:09:32.237 25.00000% : 6074.683us
00:09:32.237 50.00000% : 6503.188us
00:09:32.237 75.00000% : 7007.311us
00:09:32.237 90.00000% : 7511.434us
00:09:32.237 95.00000% : 8267.618us
00:09:32.237 98.00000% : 9931.225us
00:09:32.237 99.00000% : 11292.357us
00:09:32.237 99.50000% : 27625.945us
00:09:32.237 99.90000% : 29642.437us
00:09:32.237 99.99000% : 30045.735us
00:09:32.237 99.99900% : 30045.735us
00:09:32.237 99.99990% : 30045.735us
00:09:32.237 99.99999% : 30045.735us
00:09:32.237 
00:09:32.237 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0:
00:09:32.237 =================================================================================
00:09:32.237 1.00000% : 5747.003us
00:09:32.237 10.00000% : 6074.683us
00:09:32.237 25.00000% : 6251.126us
00:09:32.237 50.00000% : 6503.188us
00:09:32.237 75.00000% : 6755.249us
00:09:32.237 90.00000% : 7158.548us
00:09:32.237 95.00000% : 8116.382us
00:09:32.237 98.00000% : 10485.760us
00:09:32.237 99.00000% : 11746.068us
00:09:32.237 99.50000% : 26416.049us
00:09:32.237 99.90000% : 28432.542us
00:09:32.237 99.99000% : 28835.840us
00:09:32.237 99.99900% : 28835.840us
00:09:32.237 99.99990% : 28835.840us
00:09:32.237 99.99999% : 28835.840us
00:09:32.237 
00:09:32.237 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0:
00:09:32.237 =================================================================================
00:09:32.237 1.00000% : 5646.178us
00:09:32.237 10.00000% : 6049.477us
00:09:32.237 25.00000% : 6251.126us
00:09:32.237 50.00000% : 6503.188us
00:09:32.237 75.00000% : 6805.662us
00:09:32.237 90.00000% : 7208.960us
00:09:32.237 95.00000% : 7763.495us
00:09:32.237 98.00000% : 10687.409us
00:09:32.237 99.00000% : 12653.489us
00:09:32.237 99.50000% : 25306.978us
00:09:32.237 99.90000% : 27222.646us
00:09:32.237 99.99000% : 27625.945us
00:09:32.237 99.99900% : 27625.945us
00:09:32.237 99.99990% : 27625.945us
00:09:32.237 99.99999% : 27625.945us
00:09:32.237 
00:09:32.237 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0:
00:09:32.237 =================================================================================
00:09:32.237 1.00000% : 5772.209us
00:09:32.237 10.00000% : 6074.683us
00:09:32.237 25.00000% : 6276.332us
00:09:32.237 50.00000% : 6503.188us
00:09:32.237 75.00000% : 6805.662us
00:09:32.237 90.00000% : 7158.548us
00:09:32.237 95.00000% : 7612.258us
00:09:32.237 98.00000% : 10838.646us
00:09:32.237 99.00000% :
14014.622us 00:09:32.237 99.50000% : 23996.258us 00:09:32.237 99.90000% : 26012.751us 00:09:32.237 99.99000% : 26416.049us 00:09:32.237 99.99900% : 26416.049us 00:09:32.237 99.99990% : 26416.049us 00:09:32.237 99.99999% : 26416.049us 00:09:32.237 00:09:32.237 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:32.237 ================================================================================= 00:09:32.237 1.00000% : 5747.003us 00:09:32.237 10.00000% : 6074.683us 00:09:32.237 25.00000% : 6276.332us 00:09:32.237 50.00000% : 6503.188us 00:09:32.237 75.00000% : 6805.662us 00:09:32.237 90.00000% : 7158.548us 00:09:32.237 95.00000% : 7763.495us 00:09:32.237 98.00000% : 10384.935us 00:09:32.237 99.00000% : 13208.025us 00:09:32.237 99.50000% : 22786.363us 00:09:32.237 99.90000% : 24802.855us 00:09:32.237 99.99000% : 25206.154us 00:09:32.237 99.99900% : 25206.154us 00:09:32.237 99.99990% : 25206.154us 00:09:32.237 99.99999% : 25206.154us 00:09:32.237 00:09:32.237 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:32.237 ================================================================================= 00:09:32.237 1.00000% : 5721.797us 00:09:32.237 10.00000% : 6074.683us 00:09:32.237 25.00000% : 6276.332us 00:09:32.237 50.00000% : 6503.188us 00:09:32.237 75.00000% : 6805.662us 00:09:32.237 90.00000% : 7208.960us 00:09:32.237 95.00000% : 8267.618us 00:09:32.237 98.00000% : 9527.926us 00:09:32.237 99.00000% : 12048.542us 00:09:32.237 99.50000% : 15224.517us 00:09:32.237 99.90000% : 17140.185us 00:09:32.237 99.99000% : 17543.483us 00:09:32.237 99.99900% : 17543.483us 00:09:32.237 99.99990% : 17543.483us 00:09:32.237 99.99999% : 17543.483us 00:09:32.237 00:09:32.237 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:32.237 ============================================================================== 00:09:32.237 Range in us Cumulative IO count 00:09:32.237 5091.643 - 5116.849: 0.0053% ( 1) 00:09:32.237 5116.849 - 5142.055: 0.0158% ( 2) 00:09:32.237 5142.055 - 5167.262: 0.0475% ( 6) 00:09:32.237 5167.262 - 5192.468: 0.0897% ( 8) 00:09:32.237 5192.468 - 5217.674: 0.1900% ( 19) 00:09:32.237 5217.674 - 5242.880: 0.2587% ( 13) 00:09:32.237 5242.880 - 5268.086: 0.3695% ( 21) 00:09:32.237 5268.086 - 5293.292: 0.4276% ( 11) 00:09:32.237 5293.292 - 5318.498: 0.5015% ( 14) 00:09:32.237 5318.498 - 5343.705: 0.5859% ( 16) 00:09:32.237 5343.705 - 5368.911: 0.6440% ( 11) 00:09:32.237 5368.911 - 5394.117: 0.7601% ( 22) 00:09:32.237 5394.117 - 5419.323: 0.8710% ( 21) 00:09:32.237 5419.323 - 5444.529: 1.0874% ( 41) 00:09:32.237 5444.529 - 5469.735: 1.2880% ( 38) 00:09:32.237 5469.735 - 5494.942: 1.6047% ( 60) 00:09:32.237 5494.942 - 5520.148: 1.9954% ( 74) 00:09:32.237 5520.148 - 5545.354: 2.3121% ( 60) 00:09:32.237 5545.354 - 5570.560: 2.6130% ( 57) 00:09:32.237 5570.560 - 5595.766: 2.9561% ( 65) 00:09:32.237 5595.766 - 5620.972: 3.3837% ( 81) 00:09:32.237 5620.972 - 5646.178: 3.9326% ( 104) 00:09:32.237 5646.178 - 5671.385: 4.5291% ( 113) 00:09:32.237 5671.385 - 5696.591: 5.0992% ( 108) 00:09:32.237 5696.591 - 5721.797: 6.1708% ( 203) 00:09:32.237 5721.797 - 5747.003: 7.1738% ( 190) 00:09:32.237 5747.003 - 5772.209: 8.2242% ( 199) 00:09:32.237 5772.209 - 5797.415: 9.2747% ( 199) 00:09:32.237 5797.415 - 5822.622: 10.5733% ( 246) 00:09:32.237 5822.622 - 5847.828: 11.9193% ( 255) 00:09:32.237 5847.828 - 5873.034: 13.4291% ( 286) 00:09:32.237 5873.034 - 5898.240: 14.9229% ( 283) 00:09:32.237 5898.240 - 5923.446: 16.2426% ( 250) 00:09:32.237 5923.446 - 5948.652: 
17.8843% ( 311) 00:09:32.237 5948.652 - 5973.858: 19.5365% ( 313) 00:09:32.237 5973.858 - 5999.065: 21.1993% ( 315) 00:09:32.237 5999.065 - 6024.271: 22.9571% ( 333) 00:09:32.237 6024.271 - 6049.477: 24.3507% ( 264) 00:09:32.237 6049.477 - 6074.683: 25.8763% ( 289) 00:09:32.237 6074.683 - 6099.889: 27.7449% ( 354) 00:09:32.237 6099.889 - 6125.095: 29.3127% ( 297) 00:09:32.237 6125.095 - 6150.302: 30.5954% ( 243) 00:09:32.237 6150.302 - 6175.508: 32.0788% ( 281) 00:09:32.237 6175.508 - 6200.714: 33.7099% ( 309) 00:09:32.237 6200.714 - 6225.920: 35.2038% ( 283) 00:09:32.237 6225.920 - 6251.126: 36.7135% ( 286) 00:09:32.237 6251.126 - 6276.332: 38.1968% ( 281) 00:09:32.237 6276.332 - 6301.538: 39.6220% ( 270) 00:09:32.237 6301.538 - 6326.745: 41.0579% ( 272) 00:09:32.237 6326.745 - 6351.951: 42.3986% ( 254) 00:09:32.237 6351.951 - 6377.157: 43.7764% ( 261) 00:09:32.237 6377.157 - 6402.363: 45.1541% ( 261) 00:09:32.237 6402.363 - 6427.569: 46.3735% ( 231) 00:09:32.238 6427.569 - 6452.775: 47.7407% ( 259) 00:09:32.238 6452.775 - 6503.188: 50.4856% ( 520) 00:09:32.238 6503.188 - 6553.600: 53.0617% ( 488) 00:09:32.238 6553.600 - 6604.012: 55.7855% ( 516) 00:09:32.238 6604.012 - 6654.425: 58.6307% ( 539) 00:09:32.238 6654.425 - 6704.837: 61.3123% ( 508) 00:09:32.238 6704.837 - 6755.249: 64.1153% ( 531) 00:09:32.238 6755.249 - 6805.662: 66.7019% ( 490) 00:09:32.238 6805.662 - 6856.074: 69.0509% ( 445) 00:09:32.238 6856.074 - 6906.486: 71.5741% ( 478) 00:09:32.238 6906.486 - 6956.898: 73.8862% ( 438) 00:09:32.238 6956.898 - 7007.311: 76.2088% ( 440) 00:09:32.238 7007.311 - 7057.723: 78.2570% ( 388) 00:09:32.238 7057.723 - 7108.135: 80.3473% ( 396) 00:09:32.238 7108.135 - 7158.548: 82.2002% ( 351) 00:09:32.238 7158.548 - 7208.960: 83.6993% ( 284) 00:09:32.238 7208.960 - 7259.372: 85.1035% ( 266) 00:09:32.238 7259.372 - 7309.785: 86.2965% ( 226) 00:09:32.238 7309.785 - 7360.197: 87.5317% ( 234) 00:09:32.238 7360.197 - 7410.609: 88.7458% ( 230) 00:09:32.238 7410.609 - 7461.022: 89.7646% ( 193) 00:09:32.238 7461.022 - 7511.434: 90.7728% ( 191) 00:09:32.238 7511.434 - 7561.846: 91.6596% ( 168) 00:09:32.238 7561.846 - 7612.258: 92.4356% ( 147) 00:09:32.238 7612.258 - 7662.671: 93.1746% ( 140) 00:09:32.238 7662.671 - 7713.083: 93.5177% ( 65) 00:09:32.238 7713.083 - 7763.495: 93.7394% ( 42) 00:09:32.238 7763.495 - 7813.908: 93.8714% ( 25) 00:09:32.238 7813.908 - 7864.320: 94.0034% ( 25) 00:09:32.238 7864.320 - 7914.732: 94.1195% ( 22) 00:09:32.238 7914.732 - 7965.145: 94.2040% ( 16) 00:09:32.238 7965.145 - 8015.557: 94.3359% ( 25) 00:09:32.238 8015.557 - 8065.969: 94.4468% ( 21) 00:09:32.238 8065.969 - 8116.382: 94.5788% ( 25) 00:09:32.238 8116.382 - 8166.794: 94.8110% ( 44) 00:09:32.238 8166.794 - 8217.206: 94.9430% ( 25) 00:09:32.238 8217.206 - 8267.618: 95.0222% ( 15) 00:09:32.238 8267.618 - 8318.031: 95.1119% ( 17) 00:09:32.238 8318.031 - 8368.443: 95.1911% ( 15) 00:09:32.238 8368.443 - 8418.855: 95.2597% ( 13) 00:09:32.238 8418.855 - 8469.268: 95.3442% ( 16) 00:09:32.238 8469.268 - 8519.680: 95.4022% ( 11) 00:09:32.238 8519.680 - 8570.092: 95.4867% ( 16) 00:09:32.238 8570.092 - 8620.505: 95.5236% ( 7) 00:09:32.238 8620.505 - 8670.917: 95.6187% ( 18) 00:09:32.238 8670.917 - 8721.329: 95.7031% ( 16) 00:09:32.238 8721.329 - 8771.742: 95.7929% ( 17) 00:09:32.238 8771.742 - 8822.154: 95.9037% ( 21) 00:09:32.238 8822.154 - 8872.566: 95.9829% ( 15) 00:09:32.238 8872.566 - 8922.978: 96.0990% ( 22) 00:09:32.238 8922.978 - 8973.391: 96.2363% ( 26) 00:09:32.238 8973.391 - 9023.803: 96.3366% ( 19) 00:09:32.238 
9023.803 - 9074.215: 96.4738% ( 26) 00:09:32.238 9074.215 - 9124.628: 96.5952% ( 23) 00:09:32.238 9124.628 - 9175.040: 96.6850% ( 17) 00:09:32.238 9175.040 - 9225.452: 96.7853% ( 19) 00:09:32.238 9225.452 - 9275.865: 96.8539% ( 13) 00:09:32.238 9275.865 - 9326.277: 96.9436% ( 17) 00:09:32.238 9326.277 - 9376.689: 97.0228% ( 15) 00:09:32.238 9376.689 - 9427.102: 97.0914% ( 13) 00:09:32.238 9427.102 - 9477.514: 97.1864% ( 18) 00:09:32.238 9477.514 - 9527.926: 97.2551% ( 13) 00:09:32.238 9527.926 - 9578.338: 97.3395% ( 16) 00:09:32.238 9578.338 - 9628.751: 97.4504% ( 21) 00:09:32.238 9628.751 - 9679.163: 97.5296% ( 15) 00:09:32.238 9679.163 - 9729.575: 97.6299% ( 19) 00:09:32.238 9729.575 - 9779.988: 97.7090% ( 15) 00:09:32.238 9779.988 - 9830.400: 97.7829% ( 14) 00:09:32.238 9830.400 - 9880.812: 97.8780% ( 18) 00:09:32.238 9880.812 - 9931.225: 98.0046% ( 24) 00:09:32.238 9931.225 - 9981.637: 98.0891% ( 16) 00:09:32.238 9981.637 - 10032.049: 98.1366% ( 9) 00:09:32.238 10032.049 - 10082.462: 98.2000% ( 12) 00:09:32.238 10082.462 - 10132.874: 98.2527% ( 10) 00:09:32.238 10132.874 - 10183.286: 98.3055% ( 10) 00:09:32.238 10183.286 - 10233.698: 98.3530% ( 9) 00:09:32.238 10233.698 - 10284.111: 98.4005% ( 9) 00:09:32.238 10284.111 - 10334.523: 98.4533% ( 10) 00:09:32.238 10334.523 - 10384.935: 98.4903% ( 7) 00:09:32.238 10384.935 - 10435.348: 98.5378% ( 9) 00:09:32.238 10435.348 - 10485.760: 98.5695% ( 6) 00:09:32.238 10485.760 - 10536.172: 98.6223% ( 10) 00:09:32.238 10536.172 - 10586.585: 98.6486% ( 5) 00:09:32.238 10586.585 - 10636.997: 98.6856% ( 7) 00:09:32.238 10636.997 - 10687.409: 98.7120% ( 5) 00:09:32.238 10687.409 - 10737.822: 98.7489% ( 7) 00:09:32.238 10737.822 - 10788.234: 98.8070% ( 11) 00:09:32.238 10788.234 - 10838.646: 98.8440% ( 7) 00:09:32.238 10838.646 - 10889.058: 98.8651% ( 4) 00:09:32.238 10889.058 - 10939.471: 98.8967% ( 6) 00:09:32.238 10939.471 - 10989.883: 98.9126% ( 3) 00:09:32.238 10989.883 - 11040.295: 98.9337% ( 4) 00:09:32.238 11040.295 - 11090.708: 98.9495% ( 3) 00:09:32.238 11090.708 - 11141.120: 98.9548% ( 1) 00:09:32.238 11141.120 - 11191.532: 98.9601% ( 1) 00:09:32.238 11191.532 - 11241.945: 98.9759% ( 3) 00:09:32.238 11241.945 - 11292.357: 99.0023% ( 5) 00:09:32.238 11292.357 - 11342.769: 99.0129% ( 2) 00:09:32.238 11342.769 - 11393.182: 99.0287% ( 3) 00:09:32.238 11393.182 - 11443.594: 99.0446% ( 3) 00:09:32.238 11443.594 - 11494.006: 99.0498% ( 1) 00:09:32.238 11494.006 - 11544.418: 99.0657% ( 3) 00:09:32.238 11544.418 - 11594.831: 99.0709% ( 1) 00:09:32.238 11594.831 - 11645.243: 99.0868% ( 3) 00:09:32.238 11645.243 - 11695.655: 99.1079% ( 4) 00:09:32.238 11695.655 - 11746.068: 99.1343% ( 5) 00:09:32.238 11746.068 - 11796.480: 99.1501% ( 3) 00:09:32.238 11796.480 - 11846.892: 99.1712% ( 4) 00:09:32.238 11846.892 - 11897.305: 99.1871% ( 3) 00:09:32.238 11897.305 - 11947.717: 99.2029% ( 3) 00:09:32.238 11947.717 - 11998.129: 99.2240% ( 4) 00:09:32.238 11998.129 - 12048.542: 99.2293% ( 1) 00:09:32.238 12048.542 - 12098.954: 99.2346% ( 1) 00:09:32.238 12098.954 - 12149.366: 99.2399% ( 1) 00:09:32.238 12149.366 - 12199.778: 99.2557% ( 3) 00:09:32.238 12199.778 - 12250.191: 99.2663% ( 2) 00:09:32.238 12250.191 - 12300.603: 99.2768% ( 2) 00:09:32.238 12300.603 - 12351.015: 99.2927% ( 3) 00:09:32.238 12351.015 - 12401.428: 99.2979% ( 1) 00:09:32.238 12401.428 - 12451.840: 99.3032% ( 1) 00:09:32.238 12451.840 - 12502.252: 99.3190% ( 3) 00:09:32.238 12804.726 - 12855.138: 99.3243% ( 1) 00:09:32.238 26617.698 - 26819.348: 99.3666% ( 8) 00:09:32.238 26819.348 - 
27020.997: 99.4088% ( 8) 00:09:32.238 27020.997 - 27222.646: 99.4510% ( 8) 00:09:32.238 27222.646 - 27424.295: 99.4932% ( 8) 00:09:32.238 27424.295 - 27625.945: 99.5355% ( 8) 00:09:32.238 27625.945 - 27827.594: 99.5777% ( 8) 00:09:32.238 27827.594 - 28029.243: 99.6094% ( 6) 00:09:32.238 28029.243 - 28230.892: 99.6516% ( 8) 00:09:32.238 28230.892 - 28432.542: 99.6938% ( 8) 00:09:32.238 28432.542 - 28634.191: 99.7308% ( 7) 00:09:32.238 28634.191 - 28835.840: 99.7730% ( 8) 00:09:32.238 28835.840 - 29037.489: 99.8152% ( 8) 00:09:32.238 29037.489 - 29239.138: 99.8575% ( 8) 00:09:32.238 29239.138 - 29440.788: 99.8944% ( 7) 00:09:32.238 29440.788 - 29642.437: 99.9314% ( 7) 00:09:32.238 29642.437 - 29844.086: 99.9736% ( 8) 00:09:32.238 29844.086 - 30045.735: 100.0000% ( 5) 00:09:32.238 00:09:32.238 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:32.238 ============================================================================== 00:09:32.238 Range in us Cumulative IO count 00:09:32.238 5293.292 - 5318.498: 0.0106% ( 2) 00:09:32.238 5318.498 - 5343.705: 0.0158% ( 1) 00:09:32.238 5343.705 - 5368.911: 0.0211% ( 1) 00:09:32.238 5394.117 - 5419.323: 0.0317% ( 2) 00:09:32.238 5419.323 - 5444.529: 0.0370% ( 1) 00:09:32.238 5444.529 - 5469.735: 0.0581% ( 4) 00:09:32.238 5469.735 - 5494.942: 0.0686% ( 2) 00:09:32.238 5494.942 - 5520.148: 0.0897% ( 4) 00:09:32.238 5520.148 - 5545.354: 0.1214% ( 6) 00:09:32.238 5545.354 - 5570.560: 0.1689% ( 9) 00:09:32.238 5570.560 - 5595.766: 0.2692% ( 19) 00:09:32.238 5595.766 - 5620.972: 0.3484% ( 15) 00:09:32.239 5620.972 - 5646.178: 0.4487% ( 19) 00:09:32.239 5646.178 - 5671.385: 0.5332% ( 16) 00:09:32.239 5671.385 - 5696.591: 0.6546% ( 23) 00:09:32.239 5696.591 - 5721.797: 0.8182% ( 31) 00:09:32.239 5721.797 - 5747.003: 1.0346% ( 41) 00:09:32.239 5747.003 - 5772.209: 1.2405% ( 39) 00:09:32.239 5772.209 - 5797.415: 1.4780% ( 45) 00:09:32.239 5797.415 - 5822.622: 1.8053% ( 62) 00:09:32.239 5822.622 - 5847.828: 2.6182% ( 154) 00:09:32.239 5847.828 - 5873.034: 2.9614% ( 65) 00:09:32.239 5873.034 - 5898.240: 3.3414% ( 72) 00:09:32.239 5898.240 - 5923.446: 3.9115% ( 108) 00:09:32.239 5923.446 - 5948.652: 4.7720% ( 163) 00:09:32.239 5948.652 - 5973.858: 5.7221% ( 180) 00:09:32.239 5973.858 - 5999.065: 6.7726% ( 199) 00:09:32.239 5999.065 - 6024.271: 7.9075% ( 215) 00:09:32.239 6024.271 - 6049.477: 9.1691% ( 239) 00:09:32.239 6049.477 - 6074.683: 10.7211% ( 294) 00:09:32.239 6074.683 - 6099.889: 12.1410% ( 269) 00:09:32.239 6099.889 - 6125.095: 13.9886% ( 350) 00:09:32.239 6125.095 - 6150.302: 16.0737% ( 395) 00:09:32.239 6150.302 - 6175.508: 18.7817% ( 513) 00:09:32.239 6175.508 - 6200.714: 21.0357% ( 427) 00:09:32.239 6200.714 - 6225.920: 23.3055% ( 430) 00:09:32.239 6225.920 - 6251.126: 25.4329% ( 403) 00:09:32.239 6251.126 - 6276.332: 27.7027% ( 430) 00:09:32.239 6276.332 - 6301.538: 30.1943% ( 472) 00:09:32.239 6301.538 - 6326.745: 32.9181% ( 516) 00:09:32.239 6326.745 - 6351.951: 35.8372% ( 553) 00:09:32.239 6351.951 - 6377.157: 39.0150% ( 602) 00:09:32.239 6377.157 - 6402.363: 42.6256% ( 684) 00:09:32.239 6402.363 - 6427.569: 45.5025% ( 545) 00:09:32.239 6427.569 - 6452.775: 48.1261% ( 497) 00:09:32.239 6452.775 - 6503.188: 54.9673% ( 1296) 00:09:32.239 6503.188 - 6553.600: 61.7240% ( 1280) 00:09:32.239 6553.600 - 6604.012: 67.3828% ( 1072) 00:09:32.239 6604.012 - 6654.425: 71.1307% ( 710) 00:09:32.239 6654.425 - 6704.837: 73.9759% ( 539) 00:09:32.239 6704.837 - 6755.249: 76.1983% ( 421) 00:09:32.239 6755.249 - 6805.662: 78.2095% ( 381) 
00:09:32.239 6805.662 - 6856.074: 80.2998% ( 396) 00:09:32.239 6856.074 - 6906.486: 82.5961% ( 435) 00:09:32.239 6906.486 - 6956.898: 84.4225% ( 346) 00:09:32.239 6956.898 - 7007.311: 86.0167% ( 302) 00:09:32.239 7007.311 - 7057.723: 87.4314% ( 268) 00:09:32.239 7057.723 - 7108.135: 88.8989% ( 278) 00:09:32.239 7108.135 - 7158.548: 90.1235% ( 232) 00:09:32.239 7158.548 - 7208.960: 91.0156% ( 169) 00:09:32.239 7208.960 - 7259.372: 91.7494% ( 139) 00:09:32.239 7259.372 - 7309.785: 92.2720% ( 99) 00:09:32.239 7309.785 - 7360.197: 92.6520% ( 72) 00:09:32.239 7360.197 - 7410.609: 92.9635% ( 59) 00:09:32.239 7410.609 - 7461.022: 93.3224% ( 68) 00:09:32.239 7461.022 - 7511.434: 93.5811% ( 49) 00:09:32.239 7511.434 - 7561.846: 93.6972% ( 22) 00:09:32.239 7561.846 - 7612.258: 93.7975% ( 19) 00:09:32.239 7612.258 - 7662.671: 93.8978% ( 19) 00:09:32.239 7662.671 - 7713.083: 94.0826% ( 35) 00:09:32.239 7713.083 - 7763.495: 94.2251% ( 27) 00:09:32.239 7763.495 - 7813.908: 94.3412% ( 22) 00:09:32.239 7813.908 - 7864.320: 94.4573% ( 22) 00:09:32.239 7864.320 - 7914.732: 94.5629% ( 20) 00:09:32.239 7914.732 - 7965.145: 94.6685% ( 20) 00:09:32.239 7965.145 - 8015.557: 94.7688% ( 19) 00:09:32.239 8015.557 - 8065.969: 94.8744% ( 20) 00:09:32.239 8065.969 - 8116.382: 95.0908% ( 41) 00:09:32.239 8116.382 - 8166.794: 95.1964% ( 20) 00:09:32.239 8166.794 - 8217.206: 95.2755% ( 15) 00:09:32.239 8217.206 - 8267.618: 95.3547% ( 15) 00:09:32.239 8267.618 - 8318.031: 95.4445% ( 17) 00:09:32.239 8318.031 - 8368.443: 95.5131% ( 13) 00:09:32.239 8368.443 - 8418.855: 95.5606% ( 9) 00:09:32.239 8418.855 - 8469.268: 95.5976% ( 7) 00:09:32.239 8469.268 - 8519.680: 95.6239% ( 5) 00:09:32.239 8519.680 - 8570.092: 95.6451% ( 4) 00:09:32.239 8570.092 - 8620.505: 95.6715% ( 5) 00:09:32.239 8620.505 - 8670.917: 95.7242% ( 10) 00:09:32.239 8670.917 - 8721.329: 95.7717% ( 9) 00:09:32.239 8721.329 - 8771.742: 95.8140% ( 8) 00:09:32.239 8771.742 - 8822.154: 95.8826% ( 13) 00:09:32.239 8822.154 - 8872.566: 96.0410% ( 30) 00:09:32.239 8872.566 - 8922.978: 96.1360% ( 18) 00:09:32.239 8922.978 - 8973.391: 96.1993% ( 12) 00:09:32.239 8973.391 - 9023.803: 96.2468% ( 9) 00:09:32.239 9023.803 - 9074.215: 96.2838% ( 7) 00:09:32.239 9074.215 - 9124.628: 96.3049% ( 4) 00:09:32.239 9124.628 - 9175.040: 96.3155% ( 2) 00:09:32.239 9175.040 - 9225.452: 96.3260% ( 2) 00:09:32.239 9225.452 - 9275.865: 96.3366% ( 2) 00:09:32.239 9275.865 - 9326.277: 96.3999% ( 12) 00:09:32.239 9326.277 - 9376.689: 96.4580% ( 11) 00:09:32.239 9376.689 - 9427.102: 96.5477% ( 17) 00:09:32.239 9427.102 - 9477.514: 96.6216% ( 14) 00:09:32.239 9477.514 - 9527.926: 96.7008% ( 15) 00:09:32.239 9527.926 - 9578.338: 96.7536% ( 10) 00:09:32.239 9578.338 - 9628.751: 96.7853% ( 6) 00:09:32.239 9628.751 - 9679.163: 96.8064% ( 4) 00:09:32.239 9679.163 - 9729.575: 96.8328% ( 5) 00:09:32.239 9729.575 - 9779.988: 96.8592% ( 5) 00:09:32.239 9779.988 - 9830.400: 96.8856% ( 5) 00:09:32.239 9830.400 - 9880.812: 96.9278% ( 8) 00:09:32.239 9880.812 - 9931.225: 96.9700% ( 8) 00:09:32.239 9931.225 - 9981.637: 97.0228% ( 10) 00:09:32.239 9981.637 - 10032.049: 97.0756% ( 10) 00:09:32.239 10032.049 - 10082.462: 97.3395% ( 50) 00:09:32.239 10082.462 - 10132.874: 97.6615% ( 61) 00:09:32.239 10132.874 - 10183.286: 97.7090% ( 9) 00:09:32.239 10183.286 - 10233.698: 97.7618% ( 10) 00:09:32.239 10233.698 - 10284.111: 97.8041% ( 8) 00:09:32.239 10284.111 - 10334.523: 97.8410% ( 7) 00:09:32.239 10334.523 - 10384.935: 97.8780% ( 7) 00:09:32.239 10384.935 - 10435.348: 97.9466% ( 13) 00:09:32.239 10435.348 - 
10485.760: 98.0046% ( 11) 00:09:32.239 10485.760 - 10536.172: 98.0680% ( 12) 00:09:32.239 10536.172 - 10586.585: 98.1313% ( 12) 00:09:32.239 10586.585 - 10636.997: 98.1894% ( 11) 00:09:32.239 10636.997 - 10687.409: 98.2475% ( 11) 00:09:32.239 10687.409 - 10737.822: 98.2950% ( 9) 00:09:32.239 10737.822 - 10788.234: 98.3478% ( 10) 00:09:32.239 10788.234 - 10838.646: 98.3794% ( 6) 00:09:32.239 10838.646 - 10889.058: 98.5589% ( 34) 00:09:32.239 10889.058 - 10939.471: 98.6223% ( 12) 00:09:32.239 10939.471 - 10989.883: 98.6486% ( 5) 00:09:32.239 10989.883 - 11040.295: 98.6750% ( 5) 00:09:32.239 11040.295 - 11090.708: 98.7067% ( 6) 00:09:32.239 11090.708 - 11141.120: 98.7278% ( 4) 00:09:32.239 11141.120 - 11191.532: 98.7489% ( 4) 00:09:32.239 11191.532 - 11241.945: 98.7753% ( 5) 00:09:32.239 11241.945 - 11292.357: 98.8017% ( 5) 00:09:32.239 11292.357 - 11342.769: 98.8334% ( 6) 00:09:32.239 11342.769 - 11393.182: 98.8492% ( 3) 00:09:32.239 11393.182 - 11443.594: 98.8704% ( 4) 00:09:32.239 11443.594 - 11494.006: 98.9020% ( 6) 00:09:32.239 11494.006 - 11544.418: 98.9284% ( 5) 00:09:32.239 11544.418 - 11594.831: 98.9601% ( 6) 00:09:32.239 11594.831 - 11645.243: 98.9812% ( 4) 00:09:32.239 11645.243 - 11695.655: 98.9865% ( 1) 00:09:32.239 11695.655 - 11746.068: 99.0023% ( 3) 00:09:32.239 11746.068 - 11796.480: 99.0182% ( 3) 00:09:32.239 11796.480 - 11846.892: 99.0446% ( 5) 00:09:32.239 11846.892 - 11897.305: 99.0604% ( 3) 00:09:32.239 11897.305 - 11947.717: 99.0921% ( 6) 00:09:32.239 11947.717 - 11998.129: 99.1237% ( 6) 00:09:32.239 11998.129 - 12048.542: 99.1660% ( 8) 00:09:32.239 12048.542 - 12098.954: 99.2029% ( 7) 00:09:32.239 12098.954 - 12149.366: 99.2399% ( 7) 00:09:32.239 12149.366 - 12199.778: 99.2663% ( 5) 00:09:32.239 12199.778 - 12250.191: 99.2715% ( 1) 00:09:32.239 12250.191 - 12300.603: 99.2768% ( 1) 00:09:32.239 12300.603 - 12351.015: 99.2874% ( 2) 00:09:32.239 12351.015 - 12401.428: 99.2927% ( 1) 00:09:32.239 12401.428 - 12451.840: 99.3032% ( 2) 00:09:32.239 12451.840 - 12502.252: 99.3085% ( 1) 00:09:32.239 12502.252 - 12552.665: 99.3190% ( 2) 00:09:32.239 12552.665 - 12603.077: 99.3243% ( 1) 00:09:32.239 25508.628 - 25609.452: 99.3402% ( 3) 00:09:32.239 25609.452 - 25710.277: 99.3613% ( 4) 00:09:32.239 25710.277 - 25811.102: 99.3824% ( 4) 00:09:32.239 25811.102 - 26012.751: 99.4193% ( 7) 00:09:32.240 26012.751 - 26214.400: 99.4616% ( 8) 00:09:32.240 26214.400 - 26416.049: 99.5038% ( 8) 00:09:32.240 26416.049 - 26617.698: 99.5460% ( 8) 00:09:32.240 26617.698 - 26819.348: 99.5883% ( 8) 00:09:32.240 26819.348 - 27020.997: 99.6305% ( 8) 00:09:32.240 27020.997 - 27222.646: 99.6727% ( 8) 00:09:32.240 27222.646 - 27424.295: 99.7097% ( 7) 00:09:32.240 27424.295 - 27625.945: 99.7519% ( 8) 00:09:32.240 27625.945 - 27827.594: 99.7941% ( 8) 00:09:32.240 27827.594 - 28029.243: 99.8364% ( 8) 00:09:32.240 28029.243 - 28230.892: 99.8733% ( 7) 00:09:32.240 28230.892 - 28432.542: 99.9208% ( 9) 00:09:32.240 28432.542 - 28634.191: 99.9630% ( 8) 00:09:32.240 28634.191 - 28835.840: 100.0000% ( 7) 00:09:32.240 00:09:32.240 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:32.240 ============================================================================== 00:09:32.240 Range in us Cumulative IO count 00:09:32.240 5142.055 - 5167.262: 0.0053% ( 1) 00:09:32.240 5217.674 - 5242.880: 0.0106% ( 1) 00:09:32.240 5242.880 - 5268.086: 0.0370% ( 5) 00:09:32.240 5268.086 - 5293.292: 0.0528% ( 3) 00:09:32.240 5293.292 - 5318.498: 0.0739% ( 4) 00:09:32.240 5318.498 - 5343.705: 0.0950% ( 4) 00:09:32.240 
5343.705 - 5368.911: 0.1109% ( 3) 00:09:32.240 5368.911 - 5394.117: 0.1267% ( 3) 00:09:32.240 5394.117 - 5419.323: 0.1795% ( 10) 00:09:32.240 5419.323 - 5444.529: 0.2428% ( 12) 00:09:32.240 5444.529 - 5469.735: 0.2798% ( 7) 00:09:32.240 5469.735 - 5494.942: 0.3167% ( 7) 00:09:32.240 5494.942 - 5520.148: 0.3695% ( 10) 00:09:32.240 5520.148 - 5545.354: 0.4434% ( 14) 00:09:32.240 5545.354 - 5570.560: 0.5173% ( 14) 00:09:32.240 5570.560 - 5595.766: 0.6018% ( 16) 00:09:32.240 5595.766 - 5620.972: 0.8129% ( 40) 00:09:32.240 5620.972 - 5646.178: 1.0241% ( 40) 00:09:32.240 5646.178 - 5671.385: 1.1560% ( 25) 00:09:32.240 5671.385 - 5696.591: 1.3197% ( 31) 00:09:32.240 5696.591 - 5721.797: 1.5467% ( 43) 00:09:32.240 5721.797 - 5747.003: 1.8053% ( 49) 00:09:32.240 5747.003 - 5772.209: 2.1484% ( 65) 00:09:32.240 5772.209 - 5797.415: 2.5232% ( 71) 00:09:32.240 5797.415 - 5822.622: 2.9350% ( 78) 00:09:32.240 5822.622 - 5847.828: 3.4153% ( 91) 00:09:32.240 5847.828 - 5873.034: 3.9960% ( 110) 00:09:32.240 5873.034 - 5898.240: 4.6136% ( 117) 00:09:32.240 5898.240 - 5923.446: 5.3685% ( 143) 00:09:32.240 5923.446 - 5948.652: 6.1286% ( 144) 00:09:32.240 5948.652 - 5973.858: 7.0735% ( 179) 00:09:32.240 5973.858 - 5999.065: 8.2348% ( 220) 00:09:32.240 5999.065 - 6024.271: 9.5386% ( 247) 00:09:32.240 6024.271 - 6049.477: 10.9745% ( 272) 00:09:32.240 6049.477 - 6074.683: 12.7270% ( 332) 00:09:32.240 6074.683 - 6099.889: 14.5481% ( 345) 00:09:32.240 6099.889 - 6125.095: 16.3060% ( 333) 00:09:32.240 6125.095 - 6150.302: 18.3858% ( 394) 00:09:32.240 6150.302 - 6175.508: 20.2808% ( 359) 00:09:32.240 6175.508 - 6200.714: 22.2656% ( 376) 00:09:32.240 6200.714 - 6225.920: 24.5144% ( 426) 00:09:32.240 6225.920 - 6251.126: 27.0006% ( 471) 00:09:32.240 6251.126 - 6276.332: 29.1068% ( 399) 00:09:32.240 6276.332 - 6301.538: 31.0283% ( 364) 00:09:32.240 6301.538 - 6326.745: 33.5251% ( 473) 00:09:32.240 6326.745 - 6351.951: 36.2437% ( 515) 00:09:32.240 6351.951 - 6377.157: 39.1100% ( 543) 00:09:32.240 6377.157 - 6402.363: 42.0978% ( 566) 00:09:32.240 6402.363 - 6427.569: 45.7717% ( 696) 00:09:32.240 6427.569 - 6452.775: 48.9126% ( 595) 00:09:32.240 6452.775 - 6503.188: 54.5766% ( 1073) 00:09:32.240 6503.188 - 6553.600: 59.4278% ( 919) 00:09:32.240 6553.600 - 6604.012: 63.7563% ( 820) 00:09:32.240 6604.012 - 6654.425: 66.7863% ( 574) 00:09:32.240 6654.425 - 6704.837: 70.0380% ( 616) 00:09:32.240 6704.837 - 6755.249: 72.9835% ( 558) 00:09:32.240 6755.249 - 6805.662: 75.8393% ( 541) 00:09:32.240 6805.662 - 6856.074: 78.4892% ( 502) 00:09:32.240 6856.074 - 6906.486: 81.3292% ( 538) 00:09:32.240 6906.486 - 6956.898: 83.5410% ( 419) 00:09:32.240 6956.898 - 7007.311: 85.3885% ( 350) 00:09:32.240 7007.311 - 7057.723: 87.0988% ( 324) 00:09:32.240 7057.723 - 7108.135: 88.5399% ( 273) 00:09:32.240 7108.135 - 7158.548: 89.7487% ( 229) 00:09:32.240 7158.548 - 7208.960: 90.8625% ( 211) 00:09:32.240 7208.960 - 7259.372: 91.7335% ( 165) 00:09:32.240 7259.372 - 7309.785: 92.4039% ( 127) 00:09:32.240 7309.785 - 7360.197: 93.1377% ( 139) 00:09:32.240 7360.197 - 7410.609: 93.6550% ( 98) 00:09:32.240 7410.609 - 7461.022: 94.0087% ( 67) 00:09:32.240 7461.022 - 7511.434: 94.2620% ( 48) 00:09:32.240 7511.434 - 7561.846: 94.4468% ( 35) 00:09:32.240 7561.846 - 7612.258: 94.6052% ( 30) 00:09:32.240 7612.258 - 7662.671: 94.7688% ( 31) 00:09:32.240 7662.671 - 7713.083: 94.9694% ( 38) 00:09:32.240 7713.083 - 7763.495: 95.0908% ( 23) 00:09:32.240 7763.495 - 7813.908: 95.1911% ( 19) 00:09:32.240 7813.908 - 7864.320: 95.3283% ( 26) 00:09:32.240 7864.320 - 
7914.732: 95.4761% ( 28) 00:09:32.240 7914.732 - 7965.145: 95.5659% ( 17) 00:09:32.240 7965.145 - 8015.557: 95.6609% ( 18) 00:09:32.240 8015.557 - 8065.969: 95.7242% ( 12) 00:09:32.240 8065.969 - 8116.382: 95.7770% ( 10) 00:09:32.240 8116.382 - 8166.794: 95.8404% ( 12) 00:09:32.240 8166.794 - 8217.206: 95.9090% ( 13) 00:09:32.240 8217.206 - 8267.618: 96.0410% ( 25) 00:09:32.240 8267.618 - 8318.031: 96.1571% ( 22) 00:09:32.240 8318.031 - 8368.443: 96.1888% ( 6) 00:09:32.240 8368.443 - 8418.855: 96.2152% ( 5) 00:09:32.240 8418.855 - 8469.268: 96.2416% ( 5) 00:09:32.240 8469.268 - 8519.680: 96.2627% ( 4) 00:09:32.240 8519.680 - 8570.092: 96.2891% ( 5) 00:09:32.240 8570.092 - 8620.505: 96.3102% ( 4) 00:09:32.240 8620.505 - 8670.917: 96.3260% ( 3) 00:09:32.240 8670.917 - 8721.329: 96.3471% ( 4) 00:09:32.240 8721.329 - 8771.742: 96.3682% ( 4) 00:09:32.240 8771.742 - 8822.154: 96.3894% ( 4) 00:09:32.240 8822.154 - 8872.566: 96.3999% ( 2) 00:09:32.240 8872.566 - 8922.978: 96.4105% ( 2) 00:09:32.240 8922.978 - 8973.391: 96.4158% ( 1) 00:09:32.240 8973.391 - 9023.803: 96.4263% ( 2) 00:09:32.240 9023.803 - 9074.215: 96.4369% ( 2) 00:09:32.240 9074.215 - 9124.628: 96.4474% ( 2) 00:09:32.240 9124.628 - 9175.040: 96.4580% ( 2) 00:09:32.240 9175.040 - 9225.452: 96.4685% ( 2) 00:09:32.240 9225.452 - 9275.865: 96.4791% ( 2) 00:09:32.240 9275.865 - 9326.277: 96.4949% ( 3) 00:09:32.240 9326.277 - 9376.689: 96.5055% ( 2) 00:09:32.240 9376.689 - 9427.102: 96.5108% ( 1) 00:09:32.240 9427.102 - 9477.514: 96.5213% ( 2) 00:09:32.240 9477.514 - 9527.926: 96.5319% ( 2) 00:09:32.240 9527.926 - 9578.338: 96.5424% ( 2) 00:09:32.240 9578.338 - 9628.751: 96.5530% ( 2) 00:09:32.240 9628.751 - 9679.163: 96.5636% ( 2) 00:09:32.240 9679.163 - 9729.575: 96.5847% ( 4) 00:09:32.240 9729.575 - 9779.988: 96.6269% ( 8) 00:09:32.240 9779.988 - 9830.400: 96.6691% ( 8) 00:09:32.240 9830.400 - 9880.812: 96.7061% ( 7) 00:09:32.240 9880.812 - 9931.225: 96.7430% ( 7) 00:09:32.240 9931.225 - 9981.637: 96.8011% ( 11) 00:09:32.240 9981.637 - 10032.049: 96.9120% ( 21) 00:09:32.240 10032.049 - 10082.462: 97.1495% ( 45) 00:09:32.240 10082.462 - 10132.874: 97.2076% ( 11) 00:09:32.240 10132.874 - 10183.286: 97.2551% ( 9) 00:09:32.240 10183.286 - 10233.698: 97.3237% ( 13) 00:09:32.240 10233.698 - 10284.111: 97.3765% ( 10) 00:09:32.240 10284.111 - 10334.523: 97.4451% ( 13) 00:09:32.240 10334.523 - 10384.935: 97.4979% ( 10) 00:09:32.240 10384.935 - 10435.348: 97.5560% ( 11) 00:09:32.240 10435.348 - 10485.760: 97.7565% ( 38) 00:09:32.240 10485.760 - 10536.172: 97.8674% ( 21) 00:09:32.240 10536.172 - 10586.585: 97.9149% ( 9) 00:09:32.240 10586.585 - 10636.997: 97.9571% ( 8) 00:09:32.240 10636.997 - 10687.409: 98.0046% ( 9) 00:09:32.240 10687.409 - 10737.822: 98.0522% ( 9) 00:09:32.240 10737.822 - 10788.234: 98.0944% ( 8) 00:09:32.240 10788.234 - 10838.646: 98.1261% ( 6) 00:09:32.240 10838.646 - 10889.058: 98.1630% ( 7) 00:09:32.240 10889.058 - 10939.471: 98.1947% ( 6) 00:09:32.240 10939.471 - 10989.883: 98.2264% ( 6) 00:09:32.241 10989.883 - 11040.295: 98.2633% ( 7) 00:09:32.241 11040.295 - 11090.708: 98.3003% ( 7) 00:09:32.241 11090.708 - 11141.120: 98.3214% ( 4) 00:09:32.241 11141.120 - 11191.532: 98.3372% ( 3) 00:09:32.241 11191.532 - 11241.945: 98.3530% ( 3) 00:09:32.241 11241.945 - 11292.357: 98.3689% ( 3) 00:09:32.241 11292.357 - 11342.769: 98.3900% ( 4) 00:09:32.241 11342.769 - 11393.182: 98.4164% ( 5) 00:09:32.241 11393.182 - 11443.594: 98.4428% ( 5) 00:09:32.241 11443.594 - 11494.006: 98.4745% ( 6) 00:09:32.241 11494.006 - 11544.418: 
98.5061% ( 6) 00:09:32.241 11544.418 - 11594.831: 98.5378% ( 6) 00:09:32.241 11594.831 - 11645.243: 98.5695% ( 6) 00:09:32.241 11645.243 - 11695.655: 98.5959% ( 5) 00:09:32.241 11695.655 - 11746.068: 98.6381% ( 8) 00:09:32.241 11746.068 - 11796.480: 98.6698% ( 6) 00:09:32.241 11796.480 - 11846.892: 98.7067% ( 7) 00:09:32.241 11846.892 - 11897.305: 98.7384% ( 6) 00:09:32.241 11897.305 - 11947.717: 98.7648% ( 5) 00:09:32.241 11947.717 - 11998.129: 98.8070% ( 8) 00:09:32.241 11998.129 - 12048.542: 98.8440% ( 7) 00:09:32.241 12048.542 - 12098.954: 98.8598% ( 3) 00:09:32.241 12098.954 - 12149.366: 98.8704% ( 2) 00:09:32.241 12149.366 - 12199.778: 98.8862% ( 3) 00:09:32.241 12199.778 - 12250.191: 98.9020% ( 3) 00:09:32.241 12250.191 - 12300.603: 98.9126% ( 2) 00:09:32.241 12300.603 - 12351.015: 98.9231% ( 2) 00:09:32.241 12351.015 - 12401.428: 98.9337% ( 2) 00:09:32.241 12401.428 - 12451.840: 98.9548% ( 4) 00:09:32.241 12451.840 - 12502.252: 98.9654% ( 2) 00:09:32.241 12502.252 - 12552.665: 98.9812% ( 3) 00:09:32.241 12552.665 - 12603.077: 98.9918% ( 2) 00:09:32.241 12603.077 - 12653.489: 99.0023% ( 2) 00:09:32.241 12653.489 - 12703.902: 99.0182% ( 3) 00:09:32.241 12703.902 - 12754.314: 99.0393% ( 4) 00:09:32.241 12754.314 - 12804.726: 99.0657% ( 5) 00:09:32.241 12804.726 - 12855.138: 99.0973% ( 6) 00:09:32.241 12855.138 - 12905.551: 99.1132% ( 3) 00:09:32.241 12905.551 - 13006.375: 99.1818% ( 13) 00:09:32.241 13006.375 - 13107.200: 99.2504% ( 13) 00:09:32.241 13107.200 - 13208.025: 99.2768% ( 5) 00:09:32.241 13208.025 - 13308.849: 99.2927% ( 3) 00:09:32.241 13308.849 - 13409.674: 99.3085% ( 3) 00:09:32.241 13409.674 - 13510.498: 99.3243% ( 3) 00:09:32.241 24298.732 - 24399.557: 99.3296% ( 1) 00:09:32.241 24399.557 - 24500.382: 99.3507% ( 4) 00:09:32.241 24500.382 - 24601.206: 99.3718% ( 4) 00:09:32.241 24601.206 - 24702.031: 99.3929% ( 4) 00:09:32.241 24702.031 - 24802.855: 99.4141% ( 4) 00:09:32.241 24802.855 - 24903.680: 99.4352% ( 4) 00:09:32.241 24903.680 - 25004.505: 99.4563% ( 4) 00:09:32.241 25004.505 - 25105.329: 99.4774% ( 4) 00:09:32.241 25105.329 - 25206.154: 99.4985% ( 4) 00:09:32.241 25206.154 - 25306.978: 99.5249% ( 5) 00:09:32.241 25306.978 - 25407.803: 99.5460% ( 4) 00:09:32.241 25407.803 - 25508.628: 99.5671% ( 4) 00:09:32.241 25508.628 - 25609.452: 99.5883% ( 4) 00:09:32.241 25609.452 - 25710.277: 99.6094% ( 4) 00:09:32.241 25710.277 - 25811.102: 99.6305% ( 4) 00:09:32.241 25811.102 - 26012.751: 99.6727% ( 8) 00:09:32.241 26012.751 - 26214.400: 99.7149% ( 8) 00:09:32.241 26214.400 - 26416.049: 99.7572% ( 8) 00:09:32.241 26416.049 - 26617.698: 99.7994% ( 8) 00:09:32.241 26617.698 - 26819.348: 99.8416% ( 8) 00:09:32.241 26819.348 - 27020.997: 99.8839% ( 8) 00:09:32.241 27020.997 - 27222.646: 99.9261% ( 8) 00:09:32.241 27222.646 - 27424.295: 99.9683% ( 8) 00:09:32.241 27424.295 - 27625.945: 100.0000% ( 6) 00:09:32.241 00:09:32.241 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:32.241 ============================================================================== 00:09:32.241 Range in us Cumulative IO count 00:09:32.241 5217.674 - 5242.880: 0.0106% ( 2) 00:09:32.241 5293.292 - 5318.498: 0.0158% ( 1) 00:09:32.241 5343.705 - 5368.911: 0.0264% ( 2) 00:09:32.241 5368.911 - 5394.117: 0.0317% ( 1) 00:09:32.241 5419.323 - 5444.529: 0.0370% ( 1) 00:09:32.241 5469.735 - 5494.942: 0.0581% ( 4) 00:09:32.241 5520.148 - 5545.354: 0.1003% ( 8) 00:09:32.241 5545.354 - 5570.560: 0.1478% ( 9) 00:09:32.241 5570.560 - 5595.766: 0.1848% ( 7) 00:09:32.241 5595.766 - 5620.972: 
0.2745% ( 17) 00:09:32.241 5620.972 - 5646.178: 0.3326% ( 11) 00:09:32.241 5646.178 - 5671.385: 0.4645% ( 25) 00:09:32.241 5671.385 - 5696.591: 0.6176% ( 29) 00:09:32.241 5696.591 - 5721.797: 0.7654% ( 28) 00:09:32.241 5721.797 - 5747.003: 0.9607% ( 37) 00:09:32.241 5747.003 - 5772.209: 1.2247% ( 50) 00:09:32.241 5772.209 - 5797.415: 1.5731% ( 66) 00:09:32.241 5797.415 - 5822.622: 1.9584% ( 73) 00:09:32.241 5822.622 - 5847.828: 2.3649% ( 77) 00:09:32.241 5847.828 - 5873.034: 2.8822% ( 98) 00:09:32.241 5873.034 - 5898.240: 3.6106% ( 138) 00:09:32.241 5898.240 - 5923.446: 4.3127% ( 133) 00:09:32.241 5923.446 - 5948.652: 5.0570% ( 141) 00:09:32.241 5948.652 - 5973.858: 5.9966% ( 178) 00:09:32.241 5973.858 - 5999.065: 6.9732% ( 185) 00:09:32.241 5999.065 - 6024.271: 8.2770% ( 247) 00:09:32.241 6024.271 - 6049.477: 9.6812% ( 266) 00:09:32.241 6049.477 - 6074.683: 11.4337% ( 332) 00:09:32.241 6074.683 - 6099.889: 12.9909% ( 295) 00:09:32.241 6099.889 - 6125.095: 14.8385% ( 350) 00:09:32.241 6125.095 - 6150.302: 16.7441% ( 361) 00:09:32.241 6150.302 - 6175.508: 18.6391% ( 359) 00:09:32.241 6175.508 - 6200.714: 20.8562% ( 420) 00:09:32.241 6200.714 - 6225.920: 22.6668% ( 343) 00:09:32.241 6225.920 - 6251.126: 24.4563% ( 339) 00:09:32.241 6251.126 - 6276.332: 26.6470% ( 415) 00:09:32.241 6276.332 - 6301.538: 29.1913% ( 482) 00:09:32.241 6301.538 - 6326.745: 31.9732% ( 527) 00:09:32.241 6326.745 - 6351.951: 34.8976% ( 554) 00:09:32.241 6351.951 - 6377.157: 37.7270% ( 536) 00:09:32.241 6377.157 - 6402.363: 41.4326% ( 702) 00:09:32.241 6402.363 - 6427.569: 45.1383% ( 702) 00:09:32.241 6427.569 - 6452.775: 48.1947% ( 579) 00:09:32.241 6452.775 - 6503.188: 54.0857% ( 1116) 00:09:32.241 6503.188 - 6553.600: 59.8237% ( 1087) 00:09:32.241 6553.600 - 6604.012: 64.4637% ( 879) 00:09:32.241 6604.012 - 6654.425: 68.4913% ( 763) 00:09:32.241 6654.425 - 6704.837: 71.9331% ( 652) 00:09:32.241 6704.837 - 6755.249: 74.9894% ( 579) 00:09:32.241 6755.249 - 6805.662: 77.7027% ( 514) 00:09:32.241 6805.662 - 6856.074: 80.0834% ( 451) 00:09:32.241 6856.074 - 6906.486: 82.3374% ( 427) 00:09:32.241 6906.486 - 6956.898: 84.4489% ( 400) 00:09:32.241 6956.898 - 7007.311: 86.3281% ( 356) 00:09:32.241 7007.311 - 7057.723: 88.2654% ( 367) 00:09:32.241 7057.723 - 7108.135: 89.8543% ( 301) 00:09:32.241 7108.135 - 7158.548: 91.2954% ( 273) 00:09:32.241 7158.548 - 7208.960: 92.3670% ( 203) 00:09:32.241 7208.960 - 7259.372: 93.1957% ( 157) 00:09:32.241 7259.372 - 7309.785: 93.7342% ( 102) 00:09:32.241 7309.785 - 7360.197: 94.1090% ( 71) 00:09:32.241 7360.197 - 7410.609: 94.4257% ( 60) 00:09:32.241 7410.609 - 7461.022: 94.6685% ( 46) 00:09:32.241 7461.022 - 7511.434: 94.8321% ( 31) 00:09:32.241 7511.434 - 7561.846: 94.9588% ( 24) 00:09:32.241 7561.846 - 7612.258: 95.0697% ( 21) 00:09:32.241 7612.258 - 7662.671: 95.1594% ( 17) 00:09:32.241 7662.671 - 7713.083: 95.2492% ( 17) 00:09:32.241 7713.083 - 7763.495: 95.3283% ( 15) 00:09:32.241 7763.495 - 7813.908: 95.3864% ( 11) 00:09:32.241 7813.908 - 7864.320: 95.4497% ( 12) 00:09:32.241 7864.320 - 7914.732: 95.5131% ( 12) 00:09:32.241 7914.732 - 7965.145: 95.5817% ( 13) 00:09:32.241 7965.145 - 8015.557: 95.6398% ( 11) 00:09:32.241 8015.557 - 8065.969: 95.7190% ( 15) 00:09:32.241 8065.969 - 8116.382: 95.8193% ( 19) 00:09:32.241 8116.382 - 8166.794: 95.9407% ( 23) 00:09:32.241 8166.794 - 8217.206: 96.1201% ( 34) 00:09:32.241 8217.206 - 8267.618: 96.2099% ( 17) 00:09:32.241 8267.618 - 8318.031: 96.2468% ( 7) 00:09:32.241 8318.031 - 8368.443: 96.2785% ( 6) 00:09:32.241 8368.443 - 8418.855: 
96.3102% ( 6) 00:09:32.241 8418.855 - 8469.268: 96.3471% ( 7) 00:09:32.241 8469.268 - 8519.680: 96.3841% ( 7) 00:09:32.241 8519.680 - 8570.092: 96.4158% ( 6) 00:09:32.241 8570.092 - 8620.505: 96.4527% ( 7) 00:09:32.241 8620.505 - 8670.917: 96.4844% ( 6) 00:09:32.242 8670.917 - 8721.329: 96.5160% ( 6) 00:09:32.242 8721.329 - 8771.742: 96.5530% ( 7) 00:09:32.242 8771.742 - 8822.154: 96.5847% ( 6) 00:09:32.242 8822.154 - 8872.566: 96.6005% ( 3) 00:09:32.242 8872.566 - 8922.978: 96.6163% ( 3) 00:09:32.242 8922.978 - 8973.391: 96.6216% ( 1) 00:09:32.242 9124.628 - 9175.040: 96.6269% ( 1) 00:09:32.242 9175.040 - 9225.452: 96.6533% ( 5) 00:09:32.242 9225.452 - 9275.865: 96.6850% ( 6) 00:09:32.242 9275.865 - 9326.277: 96.7166% ( 6) 00:09:32.242 9326.277 - 9376.689: 96.7430% ( 5) 00:09:32.242 9376.689 - 9427.102: 96.7694% ( 5) 00:09:32.242 9427.102 - 9477.514: 96.8064% ( 7) 00:09:32.242 9477.514 - 9527.926: 96.8486% ( 8) 00:09:32.242 9527.926 - 9578.338: 96.9014% ( 10) 00:09:32.242 9578.338 - 9628.751: 97.0967% ( 37) 00:09:32.242 9628.751 - 9679.163: 97.1125% ( 3) 00:09:32.242 9679.163 - 9729.575: 97.1178% ( 1) 00:09:32.242 9729.575 - 9779.988: 97.1284% ( 2) 00:09:32.242 9779.988 - 9830.400: 97.1337% ( 1) 00:09:32.242 9830.400 - 9880.812: 97.1389% ( 1) 00:09:32.242 9880.812 - 9931.225: 97.1495% ( 2) 00:09:32.242 9931.225 - 9981.637: 97.1548% ( 1) 00:09:32.242 9981.637 - 10032.049: 97.1653% ( 2) 00:09:32.242 10032.049 - 10082.462: 97.1917% ( 5) 00:09:32.242 10082.462 - 10132.874: 97.2287% ( 7) 00:09:32.242 10132.874 - 10183.286: 97.2762% ( 9) 00:09:32.242 10183.286 - 10233.698: 97.3131% ( 7) 00:09:32.242 10233.698 - 10284.111: 97.3606% ( 9) 00:09:32.242 10284.111 - 10334.523: 97.4134% ( 10) 00:09:32.242 10334.523 - 10384.935: 97.4768% ( 12) 00:09:32.242 10384.935 - 10435.348: 97.5348% ( 11) 00:09:32.242 10435.348 - 10485.760: 97.6035% ( 13) 00:09:32.242 10485.760 - 10536.172: 97.6774% ( 14) 00:09:32.242 10536.172 - 10586.585: 97.8410% ( 31) 00:09:32.242 10586.585 - 10636.997: 97.8991% ( 11) 00:09:32.242 10636.997 - 10687.409: 97.9360% ( 7) 00:09:32.242 10687.409 - 10737.822: 97.9677% ( 6) 00:09:32.242 10737.822 - 10788.234: 97.9994% ( 6) 00:09:32.242 10788.234 - 10838.646: 98.0363% ( 7) 00:09:32.242 10838.646 - 10889.058: 98.0680% ( 6) 00:09:32.242 10889.058 - 10939.471: 98.0997% ( 6) 00:09:32.242 10939.471 - 10989.883: 98.1366% ( 7) 00:09:32.242 10989.883 - 11040.295: 98.1683% ( 6) 00:09:32.242 11040.295 - 11090.708: 98.2000% ( 6) 00:09:32.242 11090.708 - 11141.120: 98.2316% ( 6) 00:09:32.242 11141.120 - 11191.532: 98.2686% ( 7) 00:09:32.242 11191.532 - 11241.945: 98.2950% ( 5) 00:09:32.242 11241.945 - 11292.357: 98.3372% ( 8) 00:09:32.242 11292.357 - 11342.769: 98.3636% ( 5) 00:09:32.242 11342.769 - 11393.182: 98.3794% ( 3) 00:09:32.242 11393.182 - 11443.594: 98.3953% ( 3) 00:09:32.242 11443.594 - 11494.006: 98.4111% ( 3) 00:09:32.242 11494.006 - 11544.418: 98.4322% ( 4) 00:09:32.242 11544.418 - 11594.831: 98.4481% ( 3) 00:09:32.242 11594.831 - 11645.243: 98.4692% ( 4) 00:09:32.242 11645.243 - 11695.655: 98.4850% ( 3) 00:09:32.242 11695.655 - 11746.068: 98.5061% ( 4) 00:09:32.242 11746.068 - 11796.480: 98.5220% ( 3) 00:09:32.242 11796.480 - 11846.892: 98.5431% ( 4) 00:09:32.242 11846.892 - 11897.305: 98.5642% ( 4) 00:09:32.242 11897.305 - 11947.717: 98.5800% ( 3) 00:09:32.242 11947.717 - 11998.129: 98.6011% ( 4) 00:09:32.242 11998.129 - 12048.542: 98.6223% ( 4) 00:09:32.242 12048.542 - 12098.954: 98.6434% ( 4) 00:09:32.242 12098.954 - 12149.366: 98.6486% ( 1) 00:09:32.242 12653.489 - 12703.902: 
98.6592% ( 2) 00:09:32.242 12703.902 - 12754.314: 98.6750% ( 3) 00:09:32.242 12754.314 - 12804.726: 98.6803% ( 1) 00:09:32.242 12804.726 - 12855.138: 98.7067% ( 5) 00:09:32.242 12855.138 - 12905.551: 98.7120% ( 1) 00:09:32.242 12905.551 - 13006.375: 98.7437% ( 6) 00:09:32.242 13006.375 - 13107.200: 98.7701% ( 5) 00:09:32.242 13107.200 - 13208.025: 98.7965% ( 5) 00:09:32.242 13208.025 - 13308.849: 98.8281% ( 6) 00:09:32.242 13308.849 - 13409.674: 98.8492% ( 4) 00:09:32.242 13409.674 - 13510.498: 98.8809% ( 6) 00:09:32.242 13510.498 - 13611.323: 98.9073% ( 5) 00:09:32.242 13611.323 - 13712.148: 98.9337% ( 5) 00:09:32.242 13712.148 - 13812.972: 98.9654% ( 6) 00:09:32.242 13812.972 - 13913.797: 98.9812% ( 3) 00:09:32.242 13913.797 - 14014.622: 99.0076% ( 5) 00:09:32.242 14014.622 - 14115.446: 99.0340% ( 5) 00:09:32.242 14115.446 - 14216.271: 99.0551% ( 4) 00:09:32.242 14216.271 - 14317.095: 99.1026% ( 9) 00:09:32.242 14317.095 - 14417.920: 99.2451% ( 27) 00:09:32.242 14417.920 - 14518.745: 99.2663% ( 4) 00:09:32.242 14518.745 - 14619.569: 99.2821% ( 3) 00:09:32.242 14619.569 - 14720.394: 99.2979% ( 3) 00:09:32.242 14720.394 - 14821.218: 99.3190% ( 4) 00:09:32.242 14821.218 - 14922.043: 99.3243% ( 1) 00:09:32.242 23088.837 - 23189.662: 99.3454% ( 4) 00:09:32.242 23189.662 - 23290.486: 99.3666% ( 4) 00:09:32.242 23290.486 - 23391.311: 99.3877% ( 4) 00:09:32.242 23391.311 - 23492.135: 99.4088% ( 4) 00:09:32.242 23492.135 - 23592.960: 99.4299% ( 4) 00:09:32.242 23592.960 - 23693.785: 99.4510% ( 4) 00:09:32.242 23693.785 - 23794.609: 99.4668% ( 3) 00:09:32.242 23794.609 - 23895.434: 99.4932% ( 5) 00:09:32.242 23895.434 - 23996.258: 99.5144% ( 4) 00:09:32.242 23996.258 - 24097.083: 99.5302% ( 3) 00:09:32.242 24097.083 - 24197.908: 99.5513% ( 4) 00:09:32.242 24197.908 - 24298.732: 99.5724% ( 4) 00:09:32.242 24298.732 - 24399.557: 99.5935% ( 4) 00:09:32.242 24399.557 - 24500.382: 99.6147% ( 4) 00:09:32.242 24500.382 - 24601.206: 99.6358% ( 4) 00:09:32.242 24601.206 - 24702.031: 99.6569% ( 4) 00:09:32.242 24702.031 - 24802.855: 99.6780% ( 4) 00:09:32.242 24802.855 - 24903.680: 99.6991% ( 4) 00:09:32.242 24903.680 - 25004.505: 99.7202% ( 4) 00:09:32.242 25004.505 - 25105.329: 99.7413% ( 4) 00:09:32.242 25105.329 - 25206.154: 99.7625% ( 4) 00:09:32.242 25206.154 - 25306.978: 99.7836% ( 4) 00:09:32.242 25306.978 - 25407.803: 99.8047% ( 4) 00:09:32.242 25407.803 - 25508.628: 99.8258% ( 4) 00:09:32.242 25508.628 - 25609.452: 99.8469% ( 4) 00:09:32.242 25609.452 - 25710.277: 99.8680% ( 4) 00:09:32.242 25710.277 - 25811.102: 99.8891% ( 4) 00:09:32.242 25811.102 - 26012.751: 99.9261% ( 7) 00:09:32.242 26012.751 - 26214.400: 99.9683% ( 8) 00:09:32.242 26214.400 - 26416.049: 100.0000% ( 6) 00:09:32.242 00:09:32.242 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:32.242 ============================================================================== 00:09:32.242 Range in us Cumulative IO count 00:09:32.242 5192.468 - 5217.674: 0.0053% ( 1) 00:09:32.242 5217.674 - 5242.880: 0.0106% ( 1) 00:09:32.242 5242.880 - 5268.086: 0.0158% ( 1) 00:09:32.242 5293.292 - 5318.498: 0.0211% ( 1) 00:09:32.242 5343.705 - 5368.911: 0.0264% ( 1) 00:09:32.242 5368.911 - 5394.117: 0.0475% ( 4) 00:09:32.242 5394.117 - 5419.323: 0.0739% ( 5) 00:09:32.242 5419.323 - 5444.529: 0.1003% ( 5) 00:09:32.243 5444.529 - 5469.735: 0.1109% ( 2) 00:09:32.243 5469.735 - 5494.942: 0.1425% ( 6) 00:09:32.243 5494.942 - 5520.148: 0.1742% ( 6) 00:09:32.243 5520.148 - 5545.354: 0.2164% ( 8) 00:09:32.243 5545.354 - 5570.560: 0.2534% ( 7) 
00:09:32.243 5570.560 - 5595.766: 0.3378% ( 16) 00:09:32.243 5595.766 - 5620.972: 0.4170% ( 15) 00:09:32.243 5620.972 - 5646.178: 0.5120% ( 18) 00:09:32.243 5646.178 - 5671.385: 0.6071% ( 18) 00:09:32.243 5671.385 - 5696.591: 0.7285% ( 23) 00:09:32.243 5696.591 - 5721.797: 0.9396% ( 40) 00:09:32.243 5721.797 - 5747.003: 1.1296% ( 36) 00:09:32.243 5747.003 - 5772.209: 1.3250% ( 37) 00:09:32.243 5772.209 - 5797.415: 1.7103% ( 73) 00:09:32.243 5797.415 - 5822.622: 2.0165% ( 58) 00:09:32.243 5822.622 - 5847.828: 2.3174% ( 57) 00:09:32.243 5847.828 - 5873.034: 2.6974% ( 72) 00:09:32.243 5873.034 - 5898.240: 3.1514% ( 86) 00:09:32.243 5898.240 - 5923.446: 3.7743% ( 118) 00:09:32.243 5923.446 - 5948.652: 4.4605% ( 130) 00:09:32.243 5948.652 - 5973.858: 5.3526% ( 169) 00:09:32.243 5973.858 - 5999.065: 6.3450% ( 188) 00:09:32.243 5999.065 - 6024.271: 7.5591% ( 230) 00:09:32.243 6024.271 - 6049.477: 8.8788% ( 250) 00:09:32.243 6049.477 - 6074.683: 10.4307% ( 294) 00:09:32.243 6074.683 - 6099.889: 12.2044% ( 336) 00:09:32.243 6099.889 - 6125.095: 13.9569% ( 332) 00:09:32.243 6125.095 - 6150.302: 15.8995% ( 368) 00:09:32.243 6150.302 - 6175.508: 17.9476% ( 388) 00:09:32.243 6175.508 - 6200.714: 19.8955% ( 369) 00:09:32.243 6200.714 - 6225.920: 21.8222% ( 365) 00:09:32.243 6225.920 - 6251.126: 23.8070% ( 376) 00:09:32.243 6251.126 - 6276.332: 26.5519% ( 520) 00:09:32.243 6276.332 - 6301.538: 29.3391% ( 528) 00:09:32.243 6301.538 - 6326.745: 31.7462% ( 456) 00:09:32.243 6326.745 - 6351.951: 34.0688% ( 440) 00:09:32.243 6351.951 - 6377.157: 37.0935% ( 573) 00:09:32.243 6377.157 - 6402.363: 40.1235% ( 574) 00:09:32.243 6402.363 - 6427.569: 43.4808% ( 636) 00:09:32.243 6427.569 - 6452.775: 47.1495% ( 695) 00:09:32.243 6452.775 - 6503.188: 53.4364% ( 1191) 00:09:32.243 6503.188 - 6553.600: 59.0160% ( 1057) 00:09:32.243 6553.600 - 6604.012: 63.6613% ( 880) 00:09:32.243 6604.012 - 6654.425: 67.9846% ( 819) 00:09:32.243 6654.425 - 6704.837: 71.2257% ( 614) 00:09:32.243 6704.837 - 6755.249: 74.1501% ( 554) 00:09:32.243 6755.249 - 6805.662: 76.8739% ( 516) 00:09:32.243 6805.662 - 6856.074: 79.3444% ( 468) 00:09:32.243 6856.074 - 6906.486: 81.5614% ( 420) 00:09:32.243 6906.486 - 6956.898: 83.6043% ( 387) 00:09:32.243 6956.898 - 7007.311: 85.9111% ( 437) 00:09:32.243 7007.311 - 7057.723: 87.8378% ( 365) 00:09:32.243 7057.723 - 7108.135: 89.2420% ( 266) 00:09:32.243 7108.135 - 7158.548: 90.4033% ( 220) 00:09:32.243 7158.548 - 7208.960: 91.4379% ( 196) 00:09:32.243 7208.960 - 7259.372: 92.4778% ( 197) 00:09:32.243 7259.372 - 7309.785: 93.1007% ( 118) 00:09:32.243 7309.785 - 7360.197: 93.5125% ( 78) 00:09:32.243 7360.197 - 7410.609: 93.8239% ( 59) 00:09:32.243 7410.609 - 7461.022: 94.1142% ( 55) 00:09:32.243 7461.022 - 7511.434: 94.3676% ( 48) 00:09:32.243 7511.434 - 7561.846: 94.5788% ( 40) 00:09:32.243 7561.846 - 7612.258: 94.7107% ( 25) 00:09:32.243 7612.258 - 7662.671: 94.8269% ( 22) 00:09:32.243 7662.671 - 7713.083: 94.9588% ( 25) 00:09:32.243 7713.083 - 7763.495: 95.0855% ( 24) 00:09:32.243 7763.495 - 7813.908: 95.2122% ( 24) 00:09:32.243 7813.908 - 7864.320: 95.3389% ( 24) 00:09:32.243 7864.320 - 7914.732: 95.4234% ( 16) 00:09:32.243 7914.732 - 7965.145: 95.4761% ( 10) 00:09:32.243 7965.145 - 8015.557: 95.5500% ( 14) 00:09:32.243 8015.557 - 8065.969: 95.5923% ( 8) 00:09:32.243 8065.969 - 8116.382: 95.6292% ( 7) 00:09:32.243 8116.382 - 8166.794: 95.6715% ( 8) 00:09:32.243 8166.794 - 8217.206: 95.7137% ( 8) 00:09:32.243 8217.206 - 8267.618: 95.7559% ( 8) 00:09:32.243 8267.618 - 8318.031: 95.7929% ( 7) 
00:09:32.243 8318.031 - 8368.443: 95.8351% ( 8) 00:09:32.243 8368.443 - 8418.855: 95.8826% ( 9) 00:09:32.243 8418.855 - 8469.268: 95.9407% ( 11) 00:09:32.243 8469.268 - 8519.680: 96.0462% ( 20) 00:09:32.243 8519.680 - 8570.092: 96.1835% ( 26) 00:09:32.243 8570.092 - 8620.505: 96.2679% ( 16) 00:09:32.243 8620.505 - 8670.917: 96.3894% ( 23) 00:09:32.243 8670.917 - 8721.329: 96.4738% ( 16) 00:09:32.243 8721.329 - 8771.742: 96.5477% ( 14) 00:09:32.243 8771.742 - 8822.154: 96.6111% ( 12) 00:09:32.243 8822.154 - 8872.566: 96.6797% ( 13) 00:09:32.243 8872.566 - 8922.978: 96.7800% ( 19) 00:09:32.243 8922.978 - 8973.391: 96.8539% ( 14) 00:09:32.243 8973.391 - 9023.803: 96.9172% ( 12) 00:09:32.243 9023.803 - 9074.215: 96.9595% ( 8) 00:09:32.243 9074.215 - 9124.628: 97.0017% ( 8) 00:09:32.243 9124.628 - 9175.040: 97.0386% ( 7) 00:09:32.243 9175.040 - 9225.452: 97.0756% ( 7) 00:09:32.243 9225.452 - 9275.865: 97.1178% ( 8) 00:09:32.243 9275.865 - 9326.277: 97.1495% ( 6) 00:09:32.243 9326.277 - 9376.689: 97.1706% ( 4) 00:09:32.243 9376.689 - 9427.102: 97.1812% ( 2) 00:09:32.243 9427.102 - 9477.514: 97.1970% ( 3) 00:09:32.243 9477.514 - 9527.926: 97.2076% ( 2) 00:09:32.243 9527.926 - 9578.338: 97.2128% ( 1) 00:09:32.243 9578.338 - 9628.751: 97.2287% ( 3) 00:09:32.243 9628.751 - 9679.163: 97.2392% ( 2) 00:09:32.243 9679.163 - 9729.575: 97.2656% ( 5) 00:09:32.243 9729.575 - 9779.988: 97.3026% ( 7) 00:09:32.243 9779.988 - 9830.400: 97.3448% ( 8) 00:09:32.243 9830.400 - 9880.812: 97.3818% ( 7) 00:09:32.243 9880.812 - 9931.225: 97.4345% ( 10) 00:09:32.243 9931.225 - 9981.637: 97.4821% ( 9) 00:09:32.243 9981.637 - 10032.049: 97.5560% ( 14) 00:09:32.243 10032.049 - 10082.462: 97.6351% ( 15) 00:09:32.243 10082.462 - 10132.874: 97.7302% ( 18) 00:09:32.243 10132.874 - 10183.286: 97.7882% ( 11) 00:09:32.243 10183.286 - 10233.698: 97.8463% ( 11) 00:09:32.243 10233.698 - 10284.111: 97.9413% ( 18) 00:09:32.243 10284.111 - 10334.523: 97.9835% ( 8) 00:09:32.243 10334.523 - 10384.935: 98.0099% ( 5) 00:09:32.243 10384.935 - 10435.348: 98.0469% ( 7) 00:09:32.243 10435.348 - 10485.760: 98.0733% ( 5) 00:09:32.243 10485.760 - 10536.172: 98.1102% ( 7) 00:09:32.243 10536.172 - 10586.585: 98.1419% ( 6) 00:09:32.243 10586.585 - 10636.997: 98.1788% ( 7) 00:09:32.243 10636.997 - 10687.409: 98.2052% ( 5) 00:09:32.243 10687.409 - 10737.822: 98.2422% ( 7) 00:09:32.243 10737.822 - 10788.234: 98.2686% ( 5) 00:09:32.243 10788.234 - 10838.646: 98.3055% ( 7) 00:09:32.243 10838.646 - 10889.058: 98.3319% ( 5) 00:09:32.243 10889.058 - 10939.471: 98.3636% ( 6) 00:09:32.243 10939.471 - 10989.883: 98.3900% ( 5) 00:09:32.243 10989.883 - 11040.295: 98.4164% ( 5) 00:09:32.243 11040.295 - 11090.708: 98.4481% ( 6) 00:09:32.243 11090.708 - 11141.120: 98.4745% ( 5) 00:09:32.243 11141.120 - 11191.532: 98.5008% ( 5) 00:09:32.243 11191.532 - 11241.945: 98.5220% ( 4) 00:09:32.243 11241.945 - 11292.357: 98.5431% ( 4) 00:09:32.243 11292.357 - 11342.769: 98.5642% ( 4) 00:09:32.243 11342.769 - 11393.182: 98.5853% ( 4) 00:09:32.243 11393.182 - 11443.594: 98.6011% ( 3) 00:09:32.243 11443.594 - 11494.006: 98.6223% ( 4) 00:09:32.243 11494.006 - 11544.418: 98.6434% ( 4) 00:09:32.243 11544.418 - 11594.831: 98.6486% ( 1) 00:09:32.243 12703.902 - 12754.314: 98.6592% ( 2) 00:09:32.243 12754.314 - 12804.726: 98.6856% ( 5) 00:09:32.243 12804.726 - 12855.138: 98.7067% ( 4) 00:09:32.243 12855.138 - 12905.551: 98.7331% ( 5) 00:09:32.243 12905.551 - 13006.375: 98.7806% ( 9) 00:09:32.243 13006.375 - 13107.200: 98.9918% ( 40) 00:09:32.243 13107.200 - 13208.025: 99.0129% ( 4) 
00:09:32.243 13208.025 - 13308.849: 99.0340% ( 4) 00:09:32.243 13308.849 - 13409.674: 99.0551% ( 4) 00:09:32.243 13409.674 - 13510.498: 99.0709% ( 3) 00:09:32.243 13510.498 - 13611.323: 99.0868% ( 3) 00:09:32.243 13611.323 - 13712.148: 99.1079% ( 4) 00:09:32.243 13712.148 - 13812.972: 99.1290% ( 4) 00:09:32.243 13812.972 - 13913.797: 99.1448% ( 3) 00:09:32.243 13913.797 - 14014.622: 99.1660% ( 4) 00:09:32.243 14014.622 - 14115.446: 99.1871% ( 4) 00:09:32.243 14115.446 - 14216.271: 99.2029% ( 3) 00:09:32.244 14216.271 - 14317.095: 99.2240% ( 4) 00:09:32.244 14317.095 - 14417.920: 99.2451% ( 4) 00:09:32.244 14417.920 - 14518.745: 99.2610% ( 3) 00:09:32.244 14518.745 - 14619.569: 99.2821% ( 4) 00:09:32.244 14619.569 - 14720.394: 99.2979% ( 3) 00:09:32.244 14720.394 - 14821.218: 99.3190% ( 4) 00:09:32.244 14821.218 - 14922.043: 99.3243% ( 1) 00:09:32.244 21878.942 - 21979.766: 99.3454% ( 4) 00:09:32.244 21979.766 - 22080.591: 99.3666% ( 4) 00:09:32.244 22080.591 - 22181.415: 99.3877% ( 4) 00:09:32.244 22181.415 - 22282.240: 99.4035% ( 3) 00:09:32.244 22282.240 - 22383.065: 99.4246% ( 4) 00:09:32.244 22383.065 - 22483.889: 99.4457% ( 4) 00:09:32.244 22483.889 - 22584.714: 99.4616% ( 3) 00:09:32.244 22584.714 - 22685.538: 99.4827% ( 4) 00:09:32.244 22685.538 - 22786.363: 99.5038% ( 4) 00:09:32.244 22786.363 - 22887.188: 99.5249% ( 4) 00:09:32.244 22887.188 - 22988.012: 99.5460% ( 4) 00:09:32.244 22988.012 - 23088.837: 99.5671% ( 4) 00:09:32.244 23088.837 - 23189.662: 99.5883% ( 4) 00:09:32.244 23189.662 - 23290.486: 99.6094% ( 4) 00:09:32.244 23290.486 - 23391.311: 99.6305% ( 4) 00:09:32.244 23391.311 - 23492.135: 99.6516% ( 4) 00:09:32.244 23492.135 - 23592.960: 99.6727% ( 4) 00:09:32.244 23592.960 - 23693.785: 99.6886% ( 3) 00:09:32.244 23693.785 - 23794.609: 99.7097% ( 4) 00:09:32.244 23794.609 - 23895.434: 99.7308% ( 4) 00:09:32.244 23895.434 - 23996.258: 99.7519% ( 4) 00:09:32.244 23996.258 - 24097.083: 99.7730% ( 4) 00:09:32.244 24097.083 - 24197.908: 99.7941% ( 4) 00:09:32.244 24197.908 - 24298.732: 99.8152% ( 4) 00:09:32.244 24298.732 - 24399.557: 99.8364% ( 4) 00:09:32.244 24399.557 - 24500.382: 99.8575% ( 4) 00:09:32.244 24500.382 - 24601.206: 99.8786% ( 4) 00:09:32.244 24601.206 - 24702.031: 99.8997% ( 4) 00:09:32.244 24702.031 - 24802.855: 99.9208% ( 4) 00:09:32.244 24802.855 - 24903.680: 99.9367% ( 3) 00:09:32.244 24903.680 - 25004.505: 99.9578% ( 4) 00:09:32.244 25004.505 - 25105.329: 99.9789% ( 4) 00:09:32.244 25105.329 - 25206.154: 100.0000% ( 4) 00:09:32.244 00:09:32.244 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:32.244 ============================================================================== 00:09:32.244 Range in us Cumulative IO count 00:09:32.244 5268.086 - 5293.292: 0.0052% ( 1) 00:09:32.244 5343.705 - 5368.911: 0.0105% ( 1) 00:09:32.244 5368.911 - 5394.117: 0.0157% ( 1) 00:09:32.244 5394.117 - 5419.323: 0.0210% ( 1) 00:09:32.244 5444.529 - 5469.735: 0.0734% ( 10) 00:09:32.244 5469.735 - 5494.942: 0.1258% ( 10) 00:09:32.244 5494.942 - 5520.148: 0.1835% ( 11) 00:09:32.244 5520.148 - 5545.354: 0.2255% ( 8) 00:09:32.244 5545.354 - 5570.560: 0.2989% ( 14) 00:09:32.244 5570.560 - 5595.766: 0.3880% ( 17) 00:09:32.244 5595.766 - 5620.972: 0.4929% ( 20) 00:09:32.244 5620.972 - 5646.178: 0.6187% ( 24) 00:09:32.244 5646.178 - 5671.385: 0.7341% ( 22) 00:09:32.244 5671.385 - 5696.591: 0.8651% ( 25) 00:09:32.244 5696.591 - 5721.797: 1.0591% ( 37) 00:09:32.244 5721.797 - 5747.003: 1.3423% ( 54) 00:09:32.244 5747.003 - 5772.209: 1.6149% ( 52) 00:09:32.244 
5772.209 - 5797.415: 1.9295% ( 60) 00:09:32.244 5797.415 - 5822.622: 2.2127% ( 54) 00:09:32.244 5822.622 - 5847.828: 2.5482% ( 64) 00:09:32.244 5847.828 - 5873.034: 3.0254% ( 91) 00:09:32.244 5873.034 - 5898.240: 3.4291% ( 77) 00:09:32.244 5898.240 - 5923.446: 4.0216% ( 113) 00:09:32.244 5923.446 - 5948.652: 4.6718% ( 124) 00:09:32.244 5948.652 - 5973.858: 5.5684% ( 171) 00:09:32.244 5973.858 - 5999.065: 6.6013% ( 197) 00:09:32.244 5999.065 - 6024.271: 7.8911% ( 246) 00:09:32.244 6024.271 - 6049.477: 9.3016% ( 269) 00:09:32.244 6049.477 - 6074.683: 10.5967% ( 247) 00:09:32.244 6074.683 - 6099.889: 12.1120% ( 289) 00:09:32.244 6099.889 - 6125.095: 14.0048% ( 361) 00:09:32.244 6125.095 - 6150.302: 16.1021% ( 400) 00:09:32.244 6150.302 - 6175.508: 18.1575% ( 392) 00:09:32.244 6175.508 - 6200.714: 19.9927% ( 350) 00:09:32.244 6200.714 - 6225.920: 21.9851% ( 380) 00:09:32.244 6225.920 - 6251.126: 24.2030% ( 423) 00:09:32.244 6251.126 - 6276.332: 26.2741% ( 395) 00:09:32.244 6276.332 - 6301.538: 28.3085% ( 388) 00:09:32.244 6301.538 - 6326.745: 31.3129% ( 573) 00:09:32.244 6326.745 - 6351.951: 34.7158% ( 649) 00:09:32.244 6351.951 - 6377.157: 38.3180% ( 687) 00:09:32.244 6377.157 - 6402.363: 41.2594% ( 561) 00:09:32.244 6402.363 - 6427.569: 44.1747% ( 556) 00:09:32.244 6427.569 - 6452.775: 46.5814% ( 459) 00:09:32.244 6452.775 - 6503.188: 52.5115% ( 1131) 00:09:32.244 6503.188 - 6553.600: 58.6357% ( 1168) 00:09:32.244 6553.600 - 6604.012: 63.3232% ( 894) 00:09:32.244 6604.012 - 6654.425: 67.6804% ( 831) 00:09:32.244 6654.425 - 6704.837: 71.0151% ( 636) 00:09:32.244 6704.837 - 6755.249: 74.0405% ( 577) 00:09:32.244 6755.249 - 6805.662: 76.5520% ( 479) 00:09:32.244 6805.662 - 6856.074: 78.6703% ( 404) 00:09:32.244 6856.074 - 6906.486: 81.0822% ( 460) 00:09:32.244 6906.486 - 6956.898: 82.9331% ( 353) 00:09:32.244 6956.898 - 7007.311: 84.8259% ( 361) 00:09:32.244 7007.311 - 7057.723: 86.6768% ( 353) 00:09:32.244 7057.723 - 7108.135: 88.1397% ( 279) 00:09:32.244 7108.135 - 7158.548: 89.4348% ( 247) 00:09:32.244 7158.548 - 7208.960: 90.4205% ( 188) 00:09:32.244 7208.960 - 7259.372: 91.0917% ( 128) 00:09:32.244 7259.372 - 7309.785: 91.5898% ( 95) 00:09:32.244 7309.785 - 7360.197: 91.9253% ( 64) 00:09:32.244 7360.197 - 7410.609: 92.1770% ( 48) 00:09:32.244 7410.609 - 7461.022: 92.3710% ( 37) 00:09:32.244 7461.022 - 7511.434: 92.4916% ( 23) 00:09:32.244 7511.434 - 7561.846: 92.6017% ( 21) 00:09:32.244 7561.846 - 7612.258: 92.7066% ( 20) 00:09:32.244 7612.258 - 7662.671: 92.8324% ( 24) 00:09:32.244 7662.671 - 7713.083: 92.9478% ( 22) 00:09:32.244 7713.083 - 7763.495: 93.0579% ( 21) 00:09:32.244 7763.495 - 7813.908: 93.1575% ( 19) 00:09:32.244 7813.908 - 7864.320: 93.2833% ( 24) 00:09:32.244 7864.320 - 7914.732: 93.4144% ( 25) 00:09:32.244 7914.732 - 7965.145: 93.6032% ( 36) 00:09:32.244 7965.145 - 8015.557: 93.7972% ( 37) 00:09:32.244 8015.557 - 8065.969: 94.0017% ( 39) 00:09:32.244 8065.969 - 8116.382: 94.3005% ( 57) 00:09:32.244 8116.382 - 8166.794: 94.6309% ( 63) 00:09:32.244 8166.794 - 8217.206: 94.8196% ( 36) 00:09:32.244 8217.206 - 8267.618: 95.0556% ( 45) 00:09:32.244 8267.618 - 8318.031: 95.2129% ( 30) 00:09:32.244 8318.031 - 8368.443: 95.3335% ( 23) 00:09:32.244 8368.443 - 8418.855: 95.5013% ( 32) 00:09:32.244 8418.855 - 8469.268: 95.6061% ( 20) 00:09:32.244 8469.268 - 8519.680: 95.7424% ( 26) 00:09:32.244 8519.680 - 8570.092: 95.8788% ( 26) 00:09:32.244 8570.092 - 8620.505: 96.1252% ( 47) 00:09:32.244 8620.505 - 8670.917: 96.3245% ( 38) 00:09:32.244 8670.917 - 8721.329: 96.4398% ( 22) 
00:09:32.244 8721.329 - 8771.742: 96.5709% ( 25) 00:09:32.244 8771.742 - 8822.154: 96.6810% ( 21) 00:09:32.244 8822.154 - 8872.566: 96.8068% ( 24) 00:09:32.244 8872.566 - 8922.978: 96.9536% ( 28) 00:09:32.244 8922.978 - 8973.391: 97.0847% ( 25) 00:09:32.244 8973.391 - 9023.803: 97.2053% ( 23) 00:09:32.244 9023.803 - 9074.215: 97.3626% ( 30) 00:09:32.244 9074.215 - 9124.628: 97.4990% ( 26) 00:09:32.244 9124.628 - 9175.040: 97.5986% ( 19) 00:09:32.244 9175.040 - 9225.452: 97.7034% ( 20) 00:09:32.244 9225.452 - 9275.865: 97.7873% ( 16) 00:09:32.244 9275.865 - 9326.277: 97.8503% ( 12) 00:09:32.244 9326.277 - 9376.689: 97.9027% ( 10) 00:09:32.244 9376.689 - 9427.102: 97.9604% ( 11) 00:09:32.244 9427.102 - 9477.514: 97.9971% ( 7) 00:09:32.244 9477.514 - 9527.926: 98.0495% ( 10) 00:09:32.244 9527.926 - 9578.338: 98.0967% ( 9) 00:09:32.244 9578.338 - 9628.751: 98.1386% ( 8) 00:09:32.244 9628.751 - 9679.163: 98.1753% ( 7) 00:09:32.244 9679.163 - 9729.575: 98.2120% ( 7) 00:09:32.244 9729.575 - 9779.988: 98.2383% ( 5) 00:09:32.244 9779.988 - 9830.400: 98.2697% ( 6) 00:09:32.244 9830.400 - 9880.812: 98.2907% ( 4) 00:09:32.244 9880.812 - 9931.225: 98.3221% ( 6) 00:09:32.244 9931.225 - 9981.637: 98.3431% ( 4) 00:09:32.245 9981.637 - 10032.049: 98.3693% ( 5) 00:09:32.245 10032.049 - 10082.462: 98.3903% ( 4) 00:09:32.245 10082.462 - 10132.874: 98.4165% ( 5) 00:09:32.245 10132.874 - 10183.286: 98.4427% ( 5) 00:09:32.245 10183.286 - 10233.698: 98.4690% ( 5) 00:09:32.245 10233.698 - 10284.111: 98.4952% ( 5) 00:09:32.245 10284.111 - 10334.523: 98.5161% ( 4) 00:09:32.245 10334.523 - 10384.935: 98.5476% ( 6) 00:09:32.245 10384.935 - 10435.348: 98.5686% ( 4) 00:09:32.245 10435.348 - 10485.760: 98.6000% ( 6) 00:09:32.245 10485.760 - 10536.172: 98.6158% ( 3) 00:09:32.245 10536.172 - 10586.585: 98.6367% ( 4) 00:09:32.245 10586.585 - 10636.997: 98.6577% ( 4) 00:09:32.245 11191.532 - 11241.945: 98.6630% ( 1) 00:09:32.245 11443.594 - 11494.006: 98.6682% ( 1) 00:09:32.245 11494.006 - 11544.418: 98.6997% ( 6) 00:09:32.245 11544.418 - 11594.831: 98.7102% ( 2) 00:09:32.245 11594.831 - 11645.243: 98.7259% ( 3) 00:09:32.245 11645.243 - 11695.655: 98.7364% ( 2) 00:09:32.245 11695.655 - 11746.068: 98.7521% ( 3) 00:09:32.245 11746.068 - 11796.480: 98.8412% ( 17) 00:09:32.245 11796.480 - 11846.892: 98.9356% ( 18) 00:09:32.245 11846.892 - 11897.305: 98.9461% ( 2) 00:09:32.245 11897.305 - 11947.717: 98.9618% ( 3) 00:09:32.245 11947.717 - 11998.129: 98.9933% ( 6) 00:09:32.245 11998.129 - 12048.542: 99.0143% ( 4) 00:09:32.245 12048.542 - 12098.954: 99.0247% ( 2) 00:09:32.245 12098.954 - 12149.366: 99.0352% ( 2) 00:09:32.245 12149.366 - 12199.778: 99.0457% ( 2) 00:09:32.245 12199.778 - 12250.191: 99.0562% ( 2) 00:09:32.245 12250.191 - 12300.603: 99.0615% ( 1) 00:09:32.245 12300.603 - 12351.015: 99.0719% ( 2) 00:09:32.245 12351.015 - 12401.428: 99.0824% ( 2) 00:09:32.245 12401.428 - 12451.840: 99.0929% ( 2) 00:09:32.245 12451.840 - 12502.252: 99.1034% ( 2) 00:09:32.245 12502.252 - 12552.665: 99.1139% ( 2) 00:09:32.245 12552.665 - 12603.077: 99.1244% ( 2) 00:09:32.245 12603.077 - 12653.489: 99.1296% ( 1) 00:09:32.245 12653.489 - 12703.902: 99.1401% ( 2) 00:09:32.245 12703.902 - 12754.314: 99.1506% ( 2) 00:09:32.245 12754.314 - 12804.726: 99.1611% ( 2) 00:09:32.245 12804.726 - 12855.138: 99.1663% ( 1) 00:09:32.245 12855.138 - 12905.551: 99.1768% ( 2) 00:09:32.245 12905.551 - 13006.375: 99.1978% ( 4) 00:09:32.245 13006.375 - 13107.200: 99.2135% ( 3) 00:09:32.245 13107.200 - 13208.025: 99.2345% ( 4) 00:09:32.245 13208.025 - 13308.849: 
99.2502% ( 3) 00:09:32.245 13308.849 - 13409.674: 99.2712% ( 4) 00:09:32.245 13409.674 - 13510.498: 99.2922% ( 4) 00:09:32.245 13510.498 - 13611.323: 99.3131% ( 4) 00:09:32.245 13611.323 - 13712.148: 99.3289% ( 3) 00:09:32.245 14417.920 - 14518.745: 99.3498% ( 4) 00:09:32.245 14518.745 - 14619.569: 99.3708% ( 4) 00:09:32.245 14619.569 - 14720.394: 99.3918% ( 4) 00:09:32.245 14720.394 - 14821.218: 99.4128% ( 4) 00:09:32.245 14821.218 - 14922.043: 99.4337% ( 4) 00:09:32.245 14922.043 - 15022.868: 99.4599% ( 5) 00:09:32.245 15022.868 - 15123.692: 99.4809% ( 4) 00:09:32.245 15123.692 - 15224.517: 99.5019% ( 4) 00:09:32.245 15224.517 - 15325.342: 99.5176% ( 3) 00:09:32.245 15325.342 - 15426.166: 99.5386% ( 4) 00:09:32.245 15426.166 - 15526.991: 99.5596% ( 4) 00:09:32.245 15526.991 - 15627.815: 99.5858% ( 5) 00:09:32.245 15627.815 - 15728.640: 99.6068% ( 4) 00:09:32.245 15728.640 - 15829.465: 99.6277% ( 4) 00:09:32.245 15829.465 - 15930.289: 99.6487% ( 4) 00:09:32.245 15930.289 - 16031.114: 99.6697% ( 4) 00:09:32.245 16031.114 - 16131.938: 99.6959% ( 5) 00:09:32.245 16131.938 - 16232.763: 99.7116% ( 3) 00:09:32.245 16232.763 - 16333.588: 99.7378% ( 5) 00:09:32.245 16333.588 - 16434.412: 99.7588% ( 4) 00:09:32.245 16434.412 - 16535.237: 99.7798% ( 4) 00:09:32.245 16535.237 - 16636.062: 99.8008% ( 4) 00:09:32.245 16636.062 - 16736.886: 99.8270% ( 5) 00:09:32.245 16736.886 - 16837.711: 99.8479% ( 4) 00:09:32.245 16837.711 - 16938.535: 99.8689% ( 4) 00:09:32.245 16938.535 - 17039.360: 99.8899% ( 4) 00:09:32.245 17039.360 - 17140.185: 99.9109% ( 4) 00:09:32.245 17140.185 - 17241.009: 99.9371% ( 5) 00:09:32.245 17241.009 - 17341.834: 99.9581% ( 4) 00:09:32.245 17341.834 - 17442.658: 99.9790% ( 4) 00:09:32.245 17442.658 - 17543.483: 100.0000% ( 4) 00:09:32.245 00:09:32.245 23:43:02 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:09:32.245 00:09:32.245 real 0m2.617s 00:09:32.245 user 0m2.315s 00:09:32.245 sys 0m0.202s 00:09:32.245 23:43:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:32.245 23:43:02 -- common/autotest_common.sh@10 -- # set +x 00:09:32.245 ************************************ 00:09:32.245 END TEST nvme_perf 00:09:32.245 ************************************ 00:09:32.245 23:43:02 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:32.245 23:43:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:32.245 23:43:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:32.245 23:43:02 -- common/autotest_common.sh@10 -- # set +x 00:09:32.245 ************************************ 00:09:32.245 START TEST nvme_hello_world 00:09:32.245 ************************************ 00:09:32.245 23:43:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:32.245 Initializing NVMe Controllers 00:09:32.245 Attached to 0000:00:06.0 00:09:32.245 Namespace ID: 1 size: 6GB 00:09:32.245 Attached to 0000:00:07.0 00:09:32.245 Namespace ID: 1 size: 5GB 00:09:32.245 Attached to 0000:00:09.0 00:09:32.245 Namespace ID: 1 size: 1GB 00:09:32.245 Attached to 0000:00:08.0 00:09:32.245 Namespace ID: 1 size: 4GB 00:09:32.245 Namespace ID: 2 size: 4GB 00:09:32.245 Namespace ID: 3 size: 4GB 00:09:32.245 Initialization complete. 00:09:32.245 INFO: using host memory buffer for IO 00:09:32.245 Hello world! 00:09:32.245 INFO: using host memory buffer for IO 00:09:32.245 Hello world! 00:09:32.245 INFO: using host memory buffer for IO 00:09:32.245 Hello world! 
00:09:32.245 INFO: using host memory buffer for IO 00:09:32.245 Hello world! 00:09:32.245 INFO: using host memory buffer for IO 00:09:32.245 Hello world! 00:09:32.245 INFO: using host memory buffer for IO 00:09:32.245 Hello world! 00:09:32.245 00:09:32.245 real 0m0.265s 00:09:32.245 user 0m0.116s 00:09:32.245 sys 0m0.100s 00:09:32.245 23:43:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:32.245 23:43:02 -- common/autotest_common.sh@10 -- # set +x 00:09:32.245 ************************************ 00:09:32.245 END TEST nvme_hello_world 00:09:32.245 ************************************ 00:09:32.245 23:43:02 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:32.245 23:43:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:32.245 23:43:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:32.245 23:43:02 -- common/autotest_common.sh@10 -- # set +x 00:09:32.245 ************************************ 00:09:32.245 START TEST nvme_sgl 00:09:32.245 ************************************ 00:09:32.245 23:43:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:32.504 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:09:32.504 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:09:32.504 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:09:32.504 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:09:32.504 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:09:32.504 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:09:32.504 0000:00:07.0: build_io_request_0 Invalid IO length parameter 00:09:32.504 0000:00:07.0: build_io_request_1 Invalid IO length parameter 00:09:32.504 0000:00:07.0: build_io_request_3 Invalid IO length parameter 00:09:32.762 0000:00:07.0: build_io_request_8 Invalid IO length parameter 00:09:32.762 0000:00:07.0: build_io_request_9 Invalid IO length parameter 00:09:32.762 0000:00:07.0: build_io_request_11 Invalid IO length parameter 00:09:32.762 0000:00:09.0: build_io_request_0 Invalid IO length parameter 00:09:32.762 0000:00:09.0: build_io_request_1 Invalid IO length parameter 00:09:32.762 0000:00:09.0: build_io_request_2 Invalid IO length parameter 00:09:32.762 0000:00:09.0: build_io_request_3 Invalid IO length parameter 00:09:32.762 0000:00:09.0: build_io_request_4 Invalid IO length parameter 00:09:32.762 0000:00:09.0: build_io_request_5 Invalid IO length parameter 00:09:32.762 0000:00:09.0: build_io_request_6 Invalid IO length parameter 00:09:32.762 0000:00:09.0: build_io_request_7 Invalid IO length parameter 00:09:32.762 0000:00:09.0: build_io_request_8 Invalid IO length parameter 00:09:32.762 0000:00:09.0: build_io_request_9 Invalid IO length parameter 00:09:32.762 0000:00:09.0: build_io_request_10 Invalid IO length parameter 00:09:32.762 0000:00:09.0: build_io_request_11 Invalid IO length parameter 00:09:32.762 0000:00:08.0: build_io_request_0 Invalid IO length parameter 00:09:32.762 0000:00:08.0: build_io_request_1 Invalid IO length parameter 00:09:32.762 0000:00:08.0: build_io_request_2 Invalid IO length parameter 00:09:32.762 0000:00:08.0: build_io_request_3 Invalid IO length parameter 00:09:32.762 0000:00:08.0: build_io_request_4 Invalid IO length parameter 00:09:32.762 0000:00:08.0: build_io_request_5 Invalid IO length parameter 00:09:32.762 0000:00:08.0: build_io_request_6 Invalid IO length parameter 00:09:32.762 0000:00:08.0: build_io_request_7 Invalid IO length parameter 
00:09:32.762 0000:00:08.0: build_io_request_8 Invalid IO length parameter 00:09:32.763 0000:00:08.0: build_io_request_9 Invalid IO length parameter 00:09:32.763 0000:00:08.0: build_io_request_10 Invalid IO length parameter 00:09:32.763 0000:00:08.0: build_io_request_11 Invalid IO length parameter 00:09:32.763 NVMe Readv/Writev Request test 00:09:32.763 Attached to 0000:00:06.0 00:09:32.763 Attached to 0000:00:07.0 00:09:32.763 Attached to 0000:00:09.0 00:09:32.763 Attached to 0000:00:08.0 00:09:32.763 0000:00:06.0: build_io_request_2 test passed 00:09:32.763 0000:00:06.0: build_io_request_4 test passed 00:09:32.763 0000:00:06.0: build_io_request_5 test passed 00:09:32.763 0000:00:06.0: build_io_request_6 test passed 00:09:32.763 0000:00:06.0: build_io_request_7 test passed 00:09:32.763 0000:00:06.0: build_io_request_10 test passed 00:09:32.763 0000:00:07.0: build_io_request_2 test passed 00:09:32.763 0000:00:07.0: build_io_request_4 test passed 00:09:32.763 0000:00:07.0: build_io_request_5 test passed 00:09:32.763 0000:00:07.0: build_io_request_6 test passed 00:09:32.763 0000:00:07.0: build_io_request_7 test passed 00:09:32.763 0000:00:07.0: build_io_request_10 test passed 00:09:32.763 Cleaning up... 00:09:32.763 00:09:32.763 real 0m0.367s 00:09:32.763 user 0m0.224s 00:09:32.763 sys 0m0.100s 00:09:32.763 23:43:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:32.763 23:43:03 -- common/autotest_common.sh@10 -- # set +x 00:09:32.763 ************************************ 00:09:32.763 END TEST nvme_sgl 00:09:32.763 ************************************ 00:09:32.763 23:43:03 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:32.763 23:43:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:32.763 23:43:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:32.763 23:43:03 -- common/autotest_common.sh@10 -- # set +x 00:09:32.763 ************************************ 00:09:32.763 START TEST nvme_e2edp 00:09:32.763 ************************************ 00:09:32.763 23:43:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:33.021 NVMe Write/Read with End-to-End data protection test 00:09:33.021 Attached to 0000:00:06.0 00:09:33.021 Attached to 0000:00:07.0 00:09:33.021 Attached to 0000:00:09.0 00:09:33.021 Attached to 0000:00:08.0 00:09:33.021 Cleaning up... 
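The e2edp run above attaches to all four controllers and goes straight to "Cleaning up...": a write/read-with-protection pass typically only has work to do when a namespace is formatted with protection information, and this log does not show any PI-formatted namespace in use. Whether PI is enabled is advertised in the DPS byte of the NVMe Identify Namespace data. The following plain-C sketch decodes that byte; the bit layout comes from the NVMe base specification, while decode_dps() and the sample values are illustrative only, not part of the test output.

    /* Sketch: decode the Data Protection Settings (DPS) byte of an NVMe
     * Identify Namespace structure. Bit layout follows the NVMe base spec;
     * decode_dps() and the sample values are illustrative only. */
    #include <stdint.h>
    #include <stdio.h>

    static void decode_dps(uint8_t dps)
    {
        uint8_t pi_type  = dps & 0x7;        /* bits 2:0: PI type enabled, 0 = off */
        int     pi_first = (dps >> 3) & 0x1; /* bit 3: PI in first 8 bytes of metadata */

        if (pi_type == 0) {
            printf("end-to-end data protection disabled\n");
        } else {
            printf("PI Type %u, carried in the %s eight bytes of metadata\n",
                   pi_type, pi_first ? "first" : "last");
        }
    }

    int main(void)
    {
        decode_dps(0x00); /* PI disabled: nothing for a data-protection test to do */
        decode_dps(0x09); /* Type 1 PI in the first eight bytes of metadata */
        return 0;
    }
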
00:09:33.021 00:09:33.021 real 0m0.197s 00:09:33.021 user 0m0.055s 00:09:33.021 sys 0m0.098s 00:09:33.021 23:43:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:33.021 ************************************ 00:09:33.021 23:43:03 -- common/autotest_common.sh@10 -- # set +x 00:09:33.021 END TEST nvme_e2edp 00:09:33.021 ************************************ 00:09:33.021 23:43:03 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:33.021 23:43:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:33.021 23:43:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:33.021 23:43:03 -- common/autotest_common.sh@10 -- # set +x 00:09:33.021 ************************************ 00:09:33.021 START TEST nvme_reserve 00:09:33.021 ************************************ 00:09:33.021 23:43:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:33.280 ===================================================== 00:09:33.280 NVMe Controller at PCI bus 0, device 6, function 0 00:09:33.280 ===================================================== 00:09:33.280 Reservations: Not Supported 00:09:33.280 ===================================================== 00:09:33.280 NVMe Controller at PCI bus 0, device 7, function 0 00:09:33.280 ===================================================== 00:09:33.280 Reservations: Not Supported 00:09:33.280 ===================================================== 00:09:33.280 NVMe Controller at PCI bus 0, device 9, function 0 00:09:33.280 ===================================================== 00:09:33.280 Reservations: Not Supported 00:09:33.280 ===================================================== 00:09:33.280 NVMe Controller at PCI bus 0, device 8, function 0 00:09:33.280 ===================================================== 00:09:33.280 Reservations: Not Supported 00:09:33.280 Reservation test passed 00:09:33.280 00:09:33.280 real 0m0.197s 00:09:33.280 user 0m0.058s 00:09:33.280 sys 0m0.098s 00:09:33.280 23:43:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:33.280 23:43:03 -- common/autotest_common.sh@10 -- # set +x 00:09:33.280 ************************************ 00:09:33.280 END TEST nvme_reserve 00:09:33.280 ************************************ 00:09:33.280 23:43:03 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:33.280 23:43:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:33.280 23:43:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:33.280 23:43:03 -- common/autotest_common.sh@10 -- # set +x 00:09:33.280 ************************************ 00:09:33.280 START TEST nvme_err_injection 00:09:33.280 ************************************ 00:09:33.280 23:43:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:33.539 NVMe Error Injection test 00:09:33.539 Attached to 0000:00:06.0 00:09:33.539 Attached to 0000:00:07.0 00:09:33.539 Attached to 0000:00:09.0 00:09:33.539 Attached to 0000:00:08.0 00:09:33.539 0000:00:08.0: get features failed as expected 00:09:33.539 0000:00:06.0: get features failed as expected 00:09:33.539 0000:00:07.0: get features failed as expected 00:09:33.539 0000:00:09.0: get features failed as expected 00:09:33.539 0000:00:06.0: get features successfully as expected 00:09:33.539 0000:00:07.0: get features successfully as expected 00:09:33.539 0000:00:09.0: get features 
successfully as expected 00:09:33.539 0000:00:08.0: get features successfully as expected 00:09:33.539 0000:00:06.0: read failed as expected 00:09:33.539 0000:00:08.0: read failed as expected 00:09:33.539 0000:00:07.0: read failed as expected 00:09:33.539 0000:00:09.0: read failed as expected 00:09:33.539 0000:00:07.0: read successfully as expected 00:09:33.539 0000:00:09.0: read successfully as expected 00:09:33.539 0000:00:08.0: read successfully as expected 00:09:33.539 0000:00:06.0: read successfully as expected 00:09:33.539 Cleaning up... 00:09:33.539 00:09:33.539 real 0m0.251s 00:09:33.539 user 0m0.115s 00:09:33.539 sys 0m0.093s 00:09:33.539 23:43:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:33.539 ************************************ 00:09:33.539 END TEST nvme_err_injection 00:09:33.539 23:43:04 -- common/autotest_common.sh@10 -- # set +x 00:09:33.539 ************************************ 00:09:33.539 23:43:04 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:33.539 23:43:04 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:33.539 23:43:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:33.539 23:43:04 -- common/autotest_common.sh@10 -- # set +x 00:09:33.539 ************************************ 00:09:33.539 START TEST nvme_overhead 00:09:33.539 ************************************ 00:09:33.539 23:43:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:34.915 Initializing NVMe Controllers 00:09:34.915 Attached to 0000:00:06.0 00:09:34.915 Attached to 0000:00:07.0 00:09:34.915 Attached to 0000:00:09.0 00:09:34.915 Attached to 0000:00:08.0 00:09:34.915 Initialization complete. Launching workers. 
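Before the statistics print, note what the overhead tool is measuring: it times each individual submission call and each completion-processing call, then reports per-call averages, extremes, and the two cumulative histograms that follow. A minimal sketch of that measurement pattern is below; it is not the SPDK tool's actual source, and the commented-out submit_io() is a hypothetical stand-in for the real driver call.

    /* Sketch of per-call overhead measurement: wrap each submit (and,
     * analogously, each completion poll) in a monotonic-clock pair and
     * keep a running count/sum/min/max. submit_io() is hypothetical. */
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    static inline uint64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
    }

    struct lat_stats { uint64_t n, sum, min, max; };

    static void record(struct lat_stats *s, uint64_t ns)
    {
        if (s->n == 0 || ns < s->min) s->min = ns;
        if (ns > s->max) s->max = ns;
        s->sum += ns;
        s->n++;
    }

    int main(void)
    {
        struct lat_stats submit = {0};

        for (int i = 0; i < 100000; i++) {
            uint64_t t0 = now_ns();
            /* submit_io(...); -- the timed region is just the driver call */
            record(&submit, now_ns() - t0);
        }
        printf("submit (in ns) avg, min, max = %.1f, %llu, %llu\n",
               (double)submit.sum / submit.n,
               (unsigned long long)submit.min, (unsigned long long)submit.max);
        return 0;
    }

The "complete (in ns)" line is produced the same way around the completion poll, and bucketing each sample yields the cumulative histograms printed next.
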
00:09:34.915 submit (in ns) avg, min, max = 11455.6, 9976.2, 254899.2 00:09:34.915 complete (in ns) avg, min, max = 7575.2, 7193.8, 282915.4 00:09:34.915 00:09:34.915 Submit histogram 00:09:34.915 ================ 00:09:34.915 Range in us Cumulative Count 00:09:34.915 9.945 - 9.994: 0.0058% ( 1) 00:09:34.915 10.142 - 10.191: 0.0116% ( 1) 00:09:34.915 10.191 - 10.240: 0.0175% ( 1) 00:09:34.915 10.240 - 10.289: 0.0233% ( 1) 00:09:34.915 10.289 - 10.338: 0.0349% ( 2) 00:09:34.915 10.437 - 10.486: 0.0408% ( 1) 00:09:34.915 10.535 - 10.585: 0.0466% ( 1) 00:09:34.915 10.782 - 10.831: 0.0524% ( 1) 00:09:34.915 10.831 - 10.880: 0.1456% ( 16) 00:09:34.915 10.880 - 10.929: 1.1704% ( 176) 00:09:34.915 10.929 - 10.978: 5.7998% ( 795) 00:09:34.915 10.978 - 11.028: 16.3221% ( 1807) 00:09:34.915 11.028 - 11.077: 31.7184% ( 2644) 00:09:34.915 11.077 - 11.126: 46.9749% ( 2620) 00:09:34.915 11.126 - 11.175: 59.6634% ( 2179) 00:09:34.915 11.175 - 11.225: 67.5304% ( 1351) 00:09:34.915 11.225 - 11.274: 72.5325% ( 859) 00:09:34.915 11.274 - 11.323: 75.2169% ( 461) 00:09:34.915 11.323 - 11.372: 77.0570% ( 316) 00:09:34.915 11.372 - 11.422: 78.2158% ( 199) 00:09:34.915 11.422 - 11.471: 79.1883% ( 167) 00:09:34.915 11.471 - 11.520: 79.8463% ( 113) 00:09:34.915 11.520 - 11.569: 80.7139% ( 149) 00:09:34.915 11.569 - 11.618: 81.5291% ( 140) 00:09:34.915 11.618 - 11.668: 82.3502% ( 141) 00:09:34.915 11.668 - 11.717: 83.0723% ( 124) 00:09:34.915 11.717 - 11.766: 83.7012% ( 108) 00:09:34.915 11.766 - 11.815: 84.2602% ( 96) 00:09:34.915 11.815 - 11.865: 84.8308% ( 98) 00:09:34.915 11.865 - 11.914: 85.2734% ( 76) 00:09:34.915 11.914 - 11.963: 85.9722% ( 120) 00:09:34.915 11.963 - 12.012: 86.8107% ( 144) 00:09:34.915 12.012 - 12.062: 88.2490% ( 247) 00:09:34.915 12.062 - 12.111: 89.9843% ( 298) 00:09:34.915 12.111 - 12.160: 91.5856% ( 275) 00:09:34.915 12.160 - 12.209: 93.0822% ( 257) 00:09:34.915 12.209 - 12.258: 94.2060% ( 193) 00:09:34.915 12.258 - 12.308: 95.0620% ( 147) 00:09:34.915 12.308 - 12.357: 95.6851% ( 107) 00:09:34.915 12.357 - 12.406: 96.1451% ( 79) 00:09:34.915 12.406 - 12.455: 96.3897% ( 42) 00:09:34.915 12.455 - 12.505: 96.5877% ( 34) 00:09:34.915 12.505 - 12.554: 96.6808% ( 16) 00:09:34.915 12.554 - 12.603: 96.7973% ( 20) 00:09:34.915 12.603 - 12.702: 96.8846% ( 15) 00:09:34.915 12.702 - 12.800: 96.9487% ( 11) 00:09:34.915 12.800 - 12.898: 96.9953% ( 8) 00:09:34.915 12.898 - 12.997: 97.0826% ( 15) 00:09:34.915 12.997 - 13.095: 97.1700% ( 15) 00:09:34.915 13.095 - 13.194: 97.3097% ( 24) 00:09:34.915 13.194 - 13.292: 97.4087% ( 17) 00:09:34.915 13.292 - 13.391: 97.5135% ( 18) 00:09:34.915 13.391 - 13.489: 97.6009% ( 15) 00:09:34.915 13.489 - 13.588: 97.6941% ( 16) 00:09:34.915 13.588 - 13.686: 97.7406% ( 8) 00:09:34.915 13.686 - 13.785: 97.7756% ( 6) 00:09:34.915 13.785 - 13.883: 97.8047% ( 5) 00:09:34.915 13.883 - 13.982: 97.8455% ( 7) 00:09:34.915 13.982 - 14.080: 97.8804% ( 6) 00:09:34.915 14.080 - 14.178: 97.9037% ( 4) 00:09:34.915 14.178 - 14.277: 97.9444% ( 7) 00:09:34.915 14.277 - 14.375: 97.9503% ( 1) 00:09:34.915 14.375 - 14.474: 98.0027% ( 9) 00:09:34.915 14.474 - 14.572: 98.0434% ( 7) 00:09:34.915 14.572 - 14.671: 98.0667% ( 4) 00:09:34.915 14.671 - 14.769: 98.1075% ( 7) 00:09:34.915 14.769 - 14.868: 98.1541% ( 8) 00:09:34.915 14.868 - 14.966: 98.2007% ( 8) 00:09:34.915 14.966 - 15.065: 98.2414% ( 7) 00:09:34.915 15.065 - 15.163: 98.2705% ( 5) 00:09:34.915 15.163 - 15.262: 98.2938% ( 4) 00:09:34.915 15.262 - 15.360: 98.3171% ( 4) 00:09:34.915 15.360 - 15.458: 98.3462% ( 5) 00:09:34.916 15.458 - 
15.557: 98.3579% ( 2) 00:09:34.916 15.557 - 15.655: 98.3812% ( 4) 00:09:34.916 15.655 - 15.754: 98.3986% ( 3) 00:09:34.916 15.754 - 15.852: 98.4045% ( 1) 00:09:34.916 15.852 - 15.951: 98.4278% ( 4) 00:09:34.916 15.951 - 16.049: 98.4452% ( 3) 00:09:34.916 16.049 - 16.148: 98.4511% ( 1) 00:09:34.916 16.148 - 16.246: 98.4627% ( 2) 00:09:34.916 16.246 - 16.345: 98.4743% ( 2) 00:09:34.916 16.345 - 16.443: 98.4976% ( 4) 00:09:34.916 16.443 - 16.542: 98.5384% ( 7) 00:09:34.916 16.542 - 16.640: 98.6199% ( 14) 00:09:34.916 16.640 - 16.738: 98.6782% ( 10) 00:09:34.916 16.738 - 16.837: 98.7597% ( 14) 00:09:34.916 16.837 - 16.935: 98.8470% ( 15) 00:09:34.916 16.935 - 17.034: 98.9111% ( 11) 00:09:34.916 17.034 - 17.132: 98.9751% ( 11) 00:09:34.916 17.132 - 17.231: 99.0450% ( 12) 00:09:34.916 17.231 - 17.329: 99.1324% ( 15) 00:09:34.916 17.329 - 17.428: 99.1789% ( 8) 00:09:34.916 17.428 - 17.526: 99.2139% ( 6) 00:09:34.916 17.526 - 17.625: 99.2721% ( 10) 00:09:34.916 17.625 - 17.723: 99.3012% ( 5) 00:09:34.916 17.723 - 17.822: 99.3595% ( 10) 00:09:34.916 17.822 - 17.920: 99.4177% ( 10) 00:09:34.916 17.920 - 18.018: 99.4701% ( 9) 00:09:34.916 18.018 - 18.117: 99.4876% ( 3) 00:09:34.916 18.117 - 18.215: 99.5458% ( 10) 00:09:34.916 18.215 - 18.314: 99.5691% ( 4) 00:09:34.916 18.314 - 18.412: 99.5749% ( 1) 00:09:34.916 18.412 - 18.511: 99.6215% ( 8) 00:09:34.916 18.511 - 18.609: 99.6448% ( 4) 00:09:34.916 18.609 - 18.708: 99.6623% ( 3) 00:09:34.916 18.708 - 18.806: 99.6681% ( 1) 00:09:34.916 18.806 - 18.905: 99.6739% ( 1) 00:09:34.916 18.905 - 19.003: 99.6914% ( 3) 00:09:34.916 19.003 - 19.102: 99.6972% ( 1) 00:09:34.916 19.102 - 19.200: 99.7030% ( 1) 00:09:34.916 19.200 - 19.298: 99.7088% ( 1) 00:09:34.916 19.298 - 19.397: 99.7205% ( 2) 00:09:34.916 19.397 - 19.495: 99.7263% ( 1) 00:09:34.916 19.495 - 19.594: 99.7613% ( 6) 00:09:34.916 19.791 - 19.889: 99.7671% ( 1) 00:09:34.916 19.889 - 19.988: 99.7729% ( 1) 00:09:34.916 19.988 - 20.086: 99.7845% ( 2) 00:09:34.916 20.086 - 20.185: 99.7904% ( 1) 00:09:34.916 20.185 - 20.283: 99.7962% ( 1) 00:09:34.916 20.283 - 20.382: 99.8020% ( 1) 00:09:34.916 20.677 - 20.775: 99.8137% ( 2) 00:09:34.916 20.775 - 20.874: 99.8195% ( 1) 00:09:34.916 20.874 - 20.972: 99.8311% ( 2) 00:09:34.916 20.972 - 21.071: 99.8370% ( 1) 00:09:34.916 21.071 - 21.169: 99.8544% ( 3) 00:09:34.916 21.662 - 21.760: 99.8602% ( 1) 00:09:34.916 23.040 - 23.138: 99.8661% ( 1) 00:09:34.916 23.237 - 23.335: 99.8719% ( 1) 00:09:34.916 24.714 - 24.812: 99.8777% ( 1) 00:09:34.916 24.911 - 25.009: 99.8835% ( 1) 00:09:34.916 25.009 - 25.108: 99.8894% ( 1) 00:09:34.916 26.388 - 26.585: 99.8952% ( 1) 00:09:34.916 26.978 - 27.175: 99.9010% ( 1) 00:09:34.916 29.538 - 29.735: 99.9068% ( 1) 00:09:34.916 31.311 - 31.508: 99.9127% ( 1) 00:09:34.916 31.705 - 31.902: 99.9185% ( 1) 00:09:34.916 32.098 - 32.295: 99.9243% ( 1) 00:09:34.916 33.280 - 33.477: 99.9301% ( 1) 00:09:34.916 36.234 - 36.431: 99.9359% ( 1) 00:09:34.916 36.628 - 36.825: 99.9418% ( 1) 00:09:34.916 36.825 - 37.022: 99.9476% ( 1) 00:09:34.916 40.763 - 40.960: 99.9534% ( 1) 00:09:34.916 42.732 - 42.929: 99.9592% ( 1) 00:09:34.916 45.095 - 45.292: 99.9651% ( 1) 00:09:34.916 46.277 - 46.474: 99.9709% ( 1) 00:09:34.916 47.065 - 47.262: 99.9767% ( 1) 00:09:34.916 49.428 - 49.625: 99.9825% ( 1) 00:09:34.916 53.957 - 54.351: 99.9884% ( 1) 00:09:34.916 86.252 - 86.646: 99.9942% ( 1) 00:09:34.916 253.637 - 255.212: 100.0000% ( 1) 00:09:34.916 00:09:34.916 Complete histogram 00:09:34.916 ================== 00:09:34.916 Range in us Cumulative Count 
00:09:34.916 7.188 - 7.237: 0.1689% ( 29) 00:09:34.916 7.237 - 7.286: 3.6394% ( 596) 00:09:34.916 7.286 - 7.335: 17.9002% ( 2449) 00:09:34.916 7.335 - 7.385: 41.8389% ( 4111) 00:09:34.916 7.385 - 7.434: 63.5474% ( 3728) 00:09:34.916 7.434 - 7.483: 77.5753% ( 2409) 00:09:34.916 7.483 - 7.532: 86.3798% ( 1512) 00:09:34.916 7.532 - 7.582: 91.1780% ( 824) 00:09:34.916 7.582 - 7.631: 93.8858% ( 465) 00:09:34.916 7.631 - 7.680: 95.1435% ( 216) 00:09:34.916 7.680 - 7.729: 95.8132% ( 115) 00:09:34.916 7.729 - 7.778: 96.1451% ( 57) 00:09:34.916 7.778 - 7.828: 96.2790% ( 23) 00:09:34.916 7.828 - 7.877: 96.3489% ( 12) 00:09:34.916 7.877 - 7.926: 96.3955% ( 8) 00:09:34.916 7.926 - 7.975: 96.4596% ( 11) 00:09:34.916 7.975 - 8.025: 96.5061% ( 8) 00:09:34.916 8.025 - 8.074: 96.5586% ( 9) 00:09:34.916 8.074 - 8.123: 96.6867% ( 22) 00:09:34.916 8.123 - 8.172: 96.9079% ( 38) 00:09:34.916 8.172 - 8.222: 97.1409% ( 40) 00:09:34.916 8.222 - 8.271: 97.4145% ( 47) 00:09:34.916 8.271 - 8.320: 97.6591% ( 42) 00:09:34.916 8.320 - 8.369: 97.7930% ( 23) 00:09:34.916 8.369 - 8.418: 97.8862% ( 16) 00:09:34.916 8.418 - 8.468: 97.9444% ( 10) 00:09:34.916 8.468 - 8.517: 97.9677% ( 4) 00:09:34.916 8.517 - 8.566: 97.9910% ( 4) 00:09:34.916 8.566 - 8.615: 98.0201% ( 5) 00:09:34.916 8.615 - 8.665: 98.0376% ( 3) 00:09:34.916 8.665 - 8.714: 98.0551% ( 3) 00:09:34.916 8.812 - 8.862: 98.0667% ( 2) 00:09:34.916 9.058 - 9.108: 98.0726% ( 1) 00:09:34.916 9.108 - 9.157: 98.0784% ( 1) 00:09:34.916 9.157 - 9.206: 98.0842% ( 1) 00:09:34.916 9.206 - 9.255: 98.0900% ( 1) 00:09:34.916 9.305 - 9.354: 98.1017% ( 2) 00:09:34.916 9.403 - 9.452: 98.1075% ( 1) 00:09:34.916 9.551 - 9.600: 98.1191% ( 2) 00:09:34.916 9.600 - 9.649: 98.1308% ( 2) 00:09:34.916 9.698 - 9.748: 98.1424% ( 2) 00:09:34.916 9.748 - 9.797: 98.1483% ( 1) 00:09:34.916 9.797 - 9.846: 98.1599% ( 2) 00:09:34.916 9.846 - 9.895: 98.1715% ( 2) 00:09:34.916 9.895 - 9.945: 98.1890% ( 3) 00:09:34.916 9.994 - 10.043: 98.2065% ( 3) 00:09:34.916 10.043 - 10.092: 98.2240% ( 3) 00:09:34.916 10.092 - 10.142: 98.2414% ( 3) 00:09:34.916 10.142 - 10.191: 98.2531% ( 2) 00:09:34.916 10.289 - 10.338: 98.2589% ( 1) 00:09:34.916 10.388 - 10.437: 98.2647% ( 1) 00:09:34.916 10.486 - 10.535: 98.2705% ( 1) 00:09:34.916 10.535 - 10.585: 98.2822% ( 2) 00:09:34.916 10.634 - 10.683: 98.2880% ( 1) 00:09:34.916 10.683 - 10.732: 98.3055% ( 3) 00:09:34.916 10.782 - 10.831: 98.3113% ( 1) 00:09:34.916 10.880 - 10.929: 98.3229% ( 2) 00:09:34.916 10.978 - 11.028: 98.3288% ( 1) 00:09:34.916 11.077 - 11.126: 98.3404% ( 2) 00:09:34.916 11.126 - 11.175: 98.3462% ( 1) 00:09:34.916 11.323 - 11.372: 98.3521% ( 1) 00:09:34.916 11.668 - 11.717: 98.3579% ( 1) 00:09:34.916 11.717 - 11.766: 98.3637% ( 1) 00:09:34.916 11.766 - 11.815: 98.3695% ( 1) 00:09:34.916 11.815 - 11.865: 98.3870% ( 3) 00:09:34.916 11.914 - 11.963: 98.3928% ( 1) 00:09:34.916 12.111 - 12.160: 98.3986% ( 1) 00:09:34.916 12.308 - 12.357: 98.4045% ( 1) 00:09:34.916 12.603 - 12.702: 98.4103% ( 1) 00:09:34.916 12.702 - 12.800: 98.4161% ( 1) 00:09:34.916 12.800 - 12.898: 98.4336% ( 3) 00:09:34.916 12.898 - 12.997: 98.5500% ( 20) 00:09:34.916 12.997 - 13.095: 98.6141% ( 11) 00:09:34.916 13.095 - 13.194: 98.7014% ( 15) 00:09:34.916 13.194 - 13.292: 98.8004% ( 17) 00:09:34.916 13.292 - 13.391: 98.8703% ( 12) 00:09:34.916 13.391 - 13.489: 98.9926% ( 21) 00:09:34.917 13.489 - 13.588: 99.0858% ( 16) 00:09:34.917 13.588 - 13.686: 99.1848% ( 17) 00:09:34.917 13.686 - 13.785: 99.2314% ( 8) 00:09:34.917 13.785 - 13.883: 99.3012% ( 12) 00:09:34.917 13.883 - 13.982: 
99.3886% ( 15) 00:09:34.917 13.982 - 14.080: 99.4526% ( 11) 00:09:34.917 14.080 - 14.178: 99.4934% ( 7) 00:09:34.917 14.178 - 14.277: 99.5400% ( 8) 00:09:34.917 14.277 - 14.375: 99.5982% ( 10) 00:09:34.917 14.375 - 14.474: 99.6623% ( 11) 00:09:34.917 14.474 - 14.572: 99.6972% ( 6) 00:09:34.917 14.572 - 14.671: 99.7205% ( 4) 00:09:34.917 14.769 - 14.868: 99.7496% ( 5) 00:09:34.917 14.868 - 14.966: 99.7613% ( 2) 00:09:34.917 14.966 - 15.065: 99.7787% ( 3) 00:09:34.917 15.065 - 15.163: 99.8078% ( 5) 00:09:34.917 15.163 - 15.262: 99.8137% ( 1) 00:09:34.917 15.262 - 15.360: 99.8195% ( 1) 00:09:34.917 15.360 - 15.458: 99.8253% ( 1) 00:09:34.917 15.655 - 15.754: 99.8370% ( 2) 00:09:34.917 15.754 - 15.852: 99.8486% ( 2) 00:09:34.917 15.852 - 15.951: 99.8544% ( 1) 00:09:34.917 16.148 - 16.246: 99.8602% ( 1) 00:09:34.917 16.246 - 16.345: 99.8719% ( 2) 00:09:34.917 16.443 - 16.542: 99.8777% ( 1) 00:09:34.917 16.542 - 16.640: 99.8835% ( 1) 00:09:34.917 16.837 - 16.935: 99.8894% ( 1) 00:09:34.917 16.935 - 17.034: 99.9010% ( 2) 00:09:34.917 17.132 - 17.231: 99.9068% ( 1) 00:09:34.917 18.412 - 18.511: 99.9127% ( 1) 00:09:34.917 18.511 - 18.609: 99.9243% ( 2) 00:09:34.917 18.905 - 19.003: 99.9301% ( 1) 00:09:34.917 19.889 - 19.988: 99.9359% ( 1) 00:09:34.917 19.988 - 20.086: 99.9418% ( 1) 00:09:34.917 20.086 - 20.185: 99.9476% ( 1) 00:09:34.917 22.351 - 22.449: 99.9534% ( 1) 00:09:34.917 22.646 - 22.745: 99.9592% ( 1) 00:09:34.917 23.631 - 23.729: 99.9709% ( 2) 00:09:34.917 25.403 - 25.600: 99.9767% ( 1) 00:09:34.917 26.585 - 26.782: 99.9825% ( 1) 00:09:34.917 48.246 - 48.443: 99.9884% ( 1) 00:09:34.917 212.677 - 214.252: 99.9942% ( 1) 00:09:34.917 281.994 - 283.569: 100.0000% ( 1) 00:09:34.917 00:09:34.917 00:09:34.917 real 0m1.218s 00:09:34.917 user 0m1.077s 00:09:34.917 sys 0m0.088s 00:09:34.917 23:43:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:34.917 23:43:05 -- common/autotest_common.sh@10 -- # set +x 00:09:34.917 ************************************ 00:09:34.917 END TEST nvme_overhead 00:09:34.917 ************************************ 00:09:34.917 23:43:05 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:34.917 23:43:05 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:34.917 23:43:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:34.917 23:43:05 -- common/autotest_common.sh@10 -- # set +x 00:09:34.917 ************************************ 00:09:34.917 START TEST nvme_arbitration 00:09:34.917 ************************************ 00:09:34.917 23:43:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:38.216 Initializing NVMe Controllers 00:09:38.216 Attached to 0000:00:06.0 00:09:38.216 Attached to 0000:00:07.0 00:09:38.216 Attached to 0000:00:09.0 00:09:38.216 Attached to 0000:00:08.0 00:09:38.216 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:09:38.216 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:09:38.216 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:09:38.216 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:38.216 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:38.216 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:38.216 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:38.216 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:38.216 Initialization complete. 
Launching workers. 00:09:38.216 Starting thread on core 1 with urgent priority queue 00:09:38.216 Starting thread on core 2 with urgent priority queue 00:09:38.216 Starting thread on core 3 with urgent priority queue 00:09:38.216 Starting thread on core 0 with urgent priority queue 00:09:38.216 QEMU NVMe Ctrl (12340 ) core 0: 960.00 IO/s 104.17 secs/100000 ios 00:09:38.216 QEMU NVMe Ctrl (12342 ) core 0: 960.00 IO/s 104.17 secs/100000 ios 00:09:38.216 QEMU NVMe Ctrl (12341 ) core 1: 917.33 IO/s 109.01 secs/100000 ios 00:09:38.216 QEMU NVMe Ctrl (12342 ) core 1: 917.33 IO/s 109.01 secs/100000 ios 00:09:38.216 QEMU NVMe Ctrl (12343 ) core 2: 960.00 IO/s 104.17 secs/100000 ios 00:09:38.216 QEMU NVMe Ctrl (12342 ) core 3: 960.00 IO/s 104.17 secs/100000 ios 00:09:38.216 ======================================================== 00:09:38.216 00:09:38.216 00:09:38.216 real 0m3.392s 00:09:38.216 user 0m9.508s 00:09:38.216 sys 0m0.104s 00:09:38.216 23:43:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:38.216 23:43:08 -- common/autotest_common.sh@10 -- # set +x 00:09:38.216 ************************************ 00:09:38.216 END TEST nvme_arbitration 00:09:38.216 ************************************ 00:09:38.216 23:43:08 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:38.216 23:43:08 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:38.216 23:43:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:38.216 23:43:08 -- common/autotest_common.sh@10 -- # set +x 00:09:38.216 ************************************ 00:09:38.216 START TEST nvme_single_aen 00:09:38.216 ************************************ 00:09:38.216 23:43:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:38.216 [2024-12-13 23:43:08.832314] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:38.216 [2024-12-13 23:43:08.832371] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.478 [2024-12-13 23:43:08.966006] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:38.478 [2024-12-13 23:43:08.967350] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:38.478 [2024-12-13 23:43:08.968028] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:38.478 [2024-12-13 23:43:08.968699] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:38.478 Asynchronous Event Request test 00:09:38.478 Attached to 0000:00:06.0 00:09:38.478 Attached to 0000:00:07.0 00:09:38.478 Attached to 0000:00:09.0 00:09:38.478 Attached to 0000:00:08.0 00:09:38.478 Reset controller to setup AER completions for this process 00:09:38.478 Registering asynchronous event callbacks... 
00:09:38.478 Getting orig temperature thresholds of all controllers 00:09:38.478 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:38.478 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:38.478 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:38.478 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:38.478 Setting all controllers temperature threshold low to trigger AER 00:09:38.478 Waiting for all controllers temperature threshold to be set lower 00:09:38.478 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:38.478 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:09:38.478 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:38.478 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:09:38.478 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:38.478 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:09:38.478 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:38.478 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:09:38.478 Waiting for all controllers to trigger AER and reset threshold 00:09:38.478 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:38.478 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:38.478 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:38.478 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:38.478 Cleaning up... 00:09:38.478 00:09:38.478 real 0m0.197s 00:09:38.478 user 0m0.052s 00:09:38.478 sys 0m0.097s 00:09:38.478 23:43:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:38.478 ************************************ 00:09:38.478 END TEST nvme_single_aen 00:09:38.478 23:43:08 -- common/autotest_common.sh@10 -- # set +x 00:09:38.478 ************************************ 00:09:38.478 23:43:09 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:38.478 23:43:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:38.478 23:43:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:38.478 23:43:09 -- common/autotest_common.sh@10 -- # set +x 00:09:38.478 ************************************ 00:09:38.478 START TEST nvme_doorbell_aers 00:09:38.478 ************************************ 00:09:38.478 23:43:09 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers 00:09:38.478 23:43:09 -- nvme/nvme.sh@70 -- # bdfs=() 00:09:38.478 23:43:09 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:38.478 23:43:09 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:38.478 23:43:09 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:38.478 23:43:09 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:38.478 23:43:09 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:38.478 23:43:09 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:38.478 23:43:09 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:38.478 23:43:09 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:38.478 23:43:09 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:38.478 23:43:09 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:38.478 23:43:09 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:38.478 23:43:09 -- nvme/nvme.sh@73 
-- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:09:38.738 [2024-12-13 23:43:09.288461] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63855) is not found. Dropping the request. 00:09:48.764 Executing: test_write_invalid_db 00:09:48.764 Waiting for AER completion... 00:09:48.764 Failure: test_write_invalid_db 00:09:48.764 00:09:48.764 Executing: test_invalid_db_write_overflow_sq 00:09:48.764 Waiting for AER completion... 00:09:48.764 Failure: test_invalid_db_write_overflow_sq 00:09:48.764 00:09:48.764 Executing: test_invalid_db_write_overflow_cq 00:09:48.764 Waiting for AER completion... 00:09:48.764 Failure: test_invalid_db_write_overflow_cq 00:09:48.764 00:09:48.764 23:43:19 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:48.764 23:43:19 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:09:48.764 [2024-12-13 23:43:19.332732] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63855) is not found. Dropping the request. 00:09:58.736 Executing: test_write_invalid_db 00:09:58.736 Waiting for AER completion... 00:09:58.736 Failure: test_write_invalid_db 00:09:58.736 00:09:58.736 Executing: test_invalid_db_write_overflow_sq 00:09:58.736 Waiting for AER completion... 00:09:58.736 Failure: test_invalid_db_write_overflow_sq 00:09:58.736 00:09:58.736 Executing: test_invalid_db_write_overflow_cq 00:09:58.736 Waiting for AER completion... 00:09:58.736 Failure: test_invalid_db_write_overflow_cq 00:09:58.736 00:09:58.736 23:43:29 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:58.736 23:43:29 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:09:58.736 [2024-12-13 23:43:29.346293] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63855) is not found. Dropping the request. 00:10:08.709 Executing: test_write_invalid_db 00:10:08.709 Waiting for AER completion... 00:10:08.709 Failure: test_write_invalid_db 00:10:08.709 00:10:08.709 Executing: test_invalid_db_write_overflow_sq 00:10:08.709 Waiting for AER completion... 00:10:08.709 Failure: test_invalid_db_write_overflow_sq 00:10:08.709 00:10:08.709 Executing: test_invalid_db_write_overflow_cq 00:10:08.709 Waiting for AER completion... 00:10:08.709 Failure: test_invalid_db_write_overflow_cq 00:10:08.709 00:10:08.709 23:43:39 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:08.709 23:43:39 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:08.709 [2024-12-13 23:43:39.395297] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63855) is not found. Dropping the request. 00:10:18.679 Executing: test_write_invalid_db 00:10:18.679 Waiting for AER completion... 00:10:18.679 Failure: test_write_invalid_db 00:10:18.679 00:10:18.679 Executing: test_invalid_db_write_overflow_sq 00:10:18.679 Waiting for AER completion... 00:10:18.679 Failure: test_invalid_db_write_overflow_sq 00:10:18.679 00:10:18.679 Executing: test_invalid_db_write_overflow_cq 00:10:18.679 Waiting for AER completion... 
00:10:18.679 Failure: test_invalid_db_write_overflow_cq 00:10:18.679 00:10:18.679 00:10:18.679 real 0m40.172s 00:10:18.679 user 0m34.178s 00:10:18.679 sys 0m5.633s 00:10:18.679 23:43:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:18.679 23:43:49 -- common/autotest_common.sh@10 -- # set +x 00:10:18.679 ************************************ 00:10:18.679 END TEST nvme_doorbell_aers 00:10:18.679 ************************************ 00:10:18.679 23:43:49 -- nvme/nvme.sh@97 -- # uname 00:10:18.679 23:43:49 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:18.679 23:43:49 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:18.679 23:43:49 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:18.679 23:43:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:18.679 23:43:49 -- common/autotest_common.sh@10 -- # set +x 00:10:18.679 ************************************ 00:10:18.679 START TEST nvme_multi_aen 00:10:18.679 ************************************ 00:10:18.679 23:43:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:18.679 [2024-12-13 23:43:49.304981] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:18.679 [2024-12-13 23:43:49.305044] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.938 [2024-12-13 23:43:49.438267] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:18.938 [2024-12-13 23:43:49.438306] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63855) is not found. Dropping the request. 00:10:18.938 [2024-12-13 23:43:49.438688] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63855) is not found. Dropping the request. 00:10:18.938 [2024-12-13 23:43:49.438744] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63855) is not found. Dropping the request. 00:10:18.938 [2024-12-13 23:43:49.439891] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:10:18.938 [2024-12-13 23:43:49.439914] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63855) is not found. Dropping the request. 00:10:18.938 [2024-12-13 23:43:49.439976] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63855) is not found. Dropping the request. 00:10:18.938 [2024-12-13 23:43:49.440008] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63855) is not found. Dropping the request. 00:10:18.938 [2024-12-13 23:43:49.440840] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:10:18.938 [2024-12-13 23:43:49.440859] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63855) is not found. Dropping the request. 00:10:18.938 [2024-12-13 23:43:49.440907] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63855) is not found. Dropping the request. 
00:10:18.938 [2024-12-13 23:43:49.440939] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63855) is not found. Dropping the request. 00:10:18.938 [2024-12-13 23:43:49.441754] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:18.938 [2024-12-13 23:43:49.441773] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63855) is not found. Dropping the request. 00:10:18.938 [2024-12-13 23:43:49.441821] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63855) is not found. Dropping the request. 00:10:18.938 [2024-12-13 23:43:49.441852] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63855) is not found. Dropping the request. 00:10:18.938 [2024-12-13 23:43:49.446628] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:18.938 Child process pid: 64375 00:10:18.938 [2024-12-13 23:43:49.446704] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.938 [Child] Asynchronous Event Request test 00:10:18.938 [Child] Attached to 0000:00:06.0 00:10:18.938 [Child] Attached to 0000:00:07.0 00:10:18.938 [Child] Attached to 0000:00:09.0 00:10:18.938 [Child] Attached to 0000:00:08.0 00:10:18.938 [Child] Registering asynchronous event callbacks... 00:10:18.938 [Child] Getting orig temperature thresholds of all controllers 00:10:18.938 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:18.938 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:18.938 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:18.938 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:18.938 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:18.938 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:18.938 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:18.938 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:18.938 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:18.938 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:18.938 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:18.938 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:18.938 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:18.938 [Child] Cleaning up... 00:10:19.196 Asynchronous Event Request test 00:10:19.196 Attached to 0000:00:06.0 00:10:19.196 Attached to 0000:00:07.0 00:10:19.196 Attached to 0000:00:09.0 00:10:19.196 Attached to 0000:00:08.0 00:10:19.196 Reset controller to setup AER completions for this process 00:10:19.196 Registering asynchronous event callbacks... 
00:10:19.196 Getting orig temperature thresholds of all controllers 00:10:19.196 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.196 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.196 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.196 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.196 Setting all controllers temperature threshold low to trigger AER 00:10:19.196 Waiting for all controllers temperature threshold to be set lower 00:10:19.196 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.196 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:10:19.196 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.196 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:10:19.196 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.196 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:10:19.196 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.196 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:10:19.196 Waiting for all controllers to trigger AER and reset threshold 00:10:19.196 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.196 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.196 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.196 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.196 Cleaning up... 00:10:19.196 00:10:19.196 real 0m0.412s 00:10:19.196 user 0m0.122s 00:10:19.196 sys 0m0.170s 00:10:19.196 23:43:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:19.196 23:43:49 -- common/autotest_common.sh@10 -- # set +x 00:10:19.196 ************************************ 00:10:19.196 END TEST nvme_multi_aen 00:10:19.196 ************************************ 00:10:19.196 23:43:49 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:19.196 23:43:49 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:19.196 23:43:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:19.196 23:43:49 -- common/autotest_common.sh@10 -- # set +x 00:10:19.196 ************************************ 00:10:19.196 START TEST nvme_startup 00:10:19.196 ************************************ 00:10:19.196 23:43:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:19.196 Initializing NVMe Controllers 00:10:19.196 Attached to 0000:00:06.0 00:10:19.196 Attached to 0000:00:07.0 00:10:19.196 Attached to 0000:00:09.0 00:10:19.196 Attached to 0000:00:08.0 00:10:19.196 Initialization complete. 00:10:19.196 Time used:137705.938 (us). 
00:10:19.196 00:10:19.196 real 0m0.196s 00:10:19.196 user 0m0.067s 00:10:19.196 sys 0m0.087s 00:10:19.196 23:43:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:19.196 23:43:49 -- common/autotest_common.sh@10 -- # set +x 00:10:19.196 ************************************ 00:10:19.196 END TEST nvme_startup 00:10:19.196 ************************************ 00:10:19.455 23:43:49 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:19.455 23:43:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:19.455 23:43:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:19.455 23:43:49 -- common/autotest_common.sh@10 -- # set +x 00:10:19.455 ************************************ 00:10:19.455 START TEST nvme_multi_secondary 00:10:19.455 ************************************ 00:10:19.455 23:43:49 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary 00:10:19.455 23:43:49 -- nvme/nvme.sh@52 -- # pid0=64427 00:10:19.455 23:43:49 -- nvme/nvme.sh@54 -- # pid1=64428 00:10:19.455 23:43:49 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:19.455 23:43:49 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:19.455 23:43:49 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:22.737 Initializing NVMe Controllers 00:10:22.737 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:22.737 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:22.737 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:22.737 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:22.737 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:22.737 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:22.737 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:22.737 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:22.737 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:22.737 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:22.737 Initialization complete. Launching workers. 
00:10:22.737 ======================================================== 00:10:22.737 Latency(us) 00:10:22.737 Device Information : IOPS MiB/s Average min max 00:10:22.737 PCIE (0000:00:06.0) NSID 1 from core 1: 7788.40 30.42 2053.03 844.14 7411.55 00:10:22.737 PCIE (0000:00:07.0) NSID 1 from core 1: 7788.40 30.42 2054.01 860.65 7019.33 00:10:22.737 PCIE (0000:00:09.0) NSID 1 from core 1: 7788.40 30.42 2053.98 840.55 7385.28 00:10:22.737 PCIE (0000:00:08.0) NSID 1 from core 1: 7788.40 30.42 2053.95 867.65 7388.57 00:10:22.737 PCIE (0000:00:08.0) NSID 2 from core 1: 7788.40 30.42 2053.93 869.29 7532.50 00:10:22.737 PCIE (0000:00:08.0) NSID 3 from core 1: 7788.40 30.42 2053.91 856.64 7354.23 00:10:22.737 ======================================================== 00:10:22.737 Total : 46730.41 182.54 2053.80 840.55 7532.50 00:10:22.737 00:10:22.996 Initializing NVMe Controllers 00:10:22.996 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:22.996 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:22.996 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:22.996 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:22.996 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:22.996 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:22.996 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:22.996 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:22.996 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:22.996 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:22.996 Initialization complete. Launching workers. 00:10:22.996 ======================================================== 00:10:22.996 Latency(us) 00:10:22.996 Device Information : IOPS MiB/s Average min max 00:10:22.996 PCIE (0000:00:06.0) NSID 1 from core 2: 3247.87 12.69 4924.26 962.22 12953.81 00:10:22.996 PCIE (0000:00:07.0) NSID 1 from core 2: 3247.87 12.69 4926.04 1068.32 17032.35 00:10:22.996 PCIE (0000:00:09.0) NSID 1 from core 2: 3247.87 12.69 4925.95 1063.80 17034.51 00:10:22.996 PCIE (0000:00:08.0) NSID 1 from core 2: 3247.87 12.69 4932.06 1060.66 12109.90 00:10:22.996 PCIE (0000:00:08.0) NSID 2 from core 2: 3247.87 12.69 4932.38 1052.55 13686.17 00:10:22.996 PCIE (0000:00:08.0) NSID 3 from core 2: 3247.87 12.69 4932.51 1044.54 13119.61 00:10:22.996 ======================================================== 00:10:22.996 Total : 19487.23 76.12 4928.86 962.22 17034.51 00:10:22.996 00:10:22.996 23:43:53 -- nvme/nvme.sh@56 -- # wait 64427 00:10:24.894 Initializing NVMe Controllers 00:10:24.894 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:24.894 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:24.894 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:24.894 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:24.894 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:24.894 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:24.894 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:24.894 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:24.894 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:24.894 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:24.894 Initialization complete. Launching workers. 
00:10:24.894 ======================================================== 00:10:24.895 Latency(us) 00:10:24.895 Device Information : IOPS MiB/s Average min max 00:10:24.895 PCIE (0000:00:06.0) NSID 1 from core 0: 11007.76 43.00 1452.31 712.87 5699.84 00:10:24.895 PCIE (0000:00:07.0) NSID 1 from core 0: 11007.76 43.00 1453.12 733.30 5909.61 00:10:24.895 PCIE (0000:00:09.0) NSID 1 from core 0: 11007.76 43.00 1453.10 664.87 6550.67 00:10:24.895 PCIE (0000:00:08.0) NSID 1 from core 0: 11007.76 43.00 1453.08 654.86 7346.63 00:10:24.895 PCIE (0000:00:08.0) NSID 2 from core 0: 11007.76 43.00 1453.06 647.28 6842.31 00:10:24.895 PCIE (0000:00:08.0) NSID 3 from core 0: 11007.76 43.00 1453.04 621.46 6322.02 00:10:24.895 ======================================================== 00:10:24.895 Total : 66046.59 257.99 1452.95 621.46 7346.63 00:10:24.895 00:10:24.895 23:43:55 -- nvme/nvme.sh@57 -- # wait 64428 00:10:24.895 23:43:55 -- nvme/nvme.sh@61 -- # pid0=64497 00:10:24.895 23:43:55 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:24.895 23:43:55 -- nvme/nvme.sh@63 -- # pid1=64498 00:10:24.895 23:43:55 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:24.895 23:43:55 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:28.176 Initializing NVMe Controllers 00:10:28.176 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:28.176 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:28.176 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:28.176 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:28.176 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:28.176 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:28.176 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:28.176 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:28.176 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:28.176 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:28.176 Initialization complete. Launching workers. 
00:10:28.176 ======================================================== 00:10:28.176 Latency(us) 00:10:28.176 Device Information : IOPS MiB/s Average min max 00:10:28.176 PCIE (0000:00:06.0) NSID 1 from core 0: 7982.65 31.18 2003.03 744.99 4990.56 00:10:28.176 PCIE (0000:00:07.0) NSID 1 from core 0: 7982.65 31.18 2004.05 749.22 5598.24 00:10:28.176 PCIE (0000:00:09.0) NSID 1 from core 0: 7982.65 31.18 2004.02 752.49 5874.31 00:10:28.176 PCIE (0000:00:08.0) NSID 1 from core 0: 7982.65 31.18 2004.02 756.11 6028.12 00:10:28.176 PCIE (0000:00:08.0) NSID 2 from core 0: 7982.65 31.18 2004.05 746.80 5719.73 00:10:28.176 PCIE (0000:00:08.0) NSID 3 from core 0: 7982.65 31.18 2004.07 758.27 5491.90 00:10:28.176 ======================================================== 00:10:28.176 Total : 47895.87 187.09 2003.87 744.99 6028.12 00:10:28.176 00:10:28.176 Initializing NVMe Controllers 00:10:28.176 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:28.176 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:28.176 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:28.176 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:28.176 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:28.176 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:28.176 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:28.176 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:28.176 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:28.176 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:28.176 Initialization complete. Launching workers. 00:10:28.176 ======================================================== 00:10:28.176 Latency(us) 00:10:28.176 Device Information : IOPS MiB/s Average min max 00:10:28.176 PCIE (0000:00:06.0) NSID 1 from core 1: 7934.87 31.00 2015.06 720.82 9188.23 00:10:28.176 PCIE (0000:00:07.0) NSID 1 from core 1: 7934.87 31.00 2016.04 731.49 9348.30 00:10:28.176 PCIE (0000:00:09.0) NSID 1 from core 1: 7934.87 31.00 2016.00 732.33 9793.47 00:10:28.176 PCIE (0000:00:08.0) NSID 1 from core 1: 7934.87 31.00 2015.94 750.35 11610.02 00:10:28.176 PCIE (0000:00:08.0) NSID 2 from core 1: 7934.87 31.00 2015.89 753.34 12002.20 00:10:28.176 PCIE (0000:00:08.0) NSID 3 from core 1: 7934.87 31.00 2015.85 737.46 12123.84 00:10:28.176 ======================================================== 00:10:28.176 Total : 47609.22 185.97 2015.80 720.82 12123.84 00:10:28.176 00:10:30.706 Initializing NVMe Controllers 00:10:30.706 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:30.706 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:30.706 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:30.706 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:30.706 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:30.706 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:30.706 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:30.706 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:30.706 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:30.706 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:30.706 Initialization complete. Launching workers. 
00:10:30.706 ======================================================== 00:10:30.706 Latency(us) 00:10:30.706 Device Information : IOPS MiB/s Average min max 00:10:30.706 PCIE (0000:00:06.0) NSID 1 from core 2: 4761.49 18.60 3358.99 851.63 12600.79 00:10:30.706 PCIE (0000:00:07.0) NSID 1 from core 2: 4761.49 18.60 3359.77 780.18 12448.44 00:10:30.706 PCIE (0000:00:09.0) NSID 1 from core 2: 4761.49 18.60 3359.55 865.03 13213.00 00:10:30.706 PCIE (0000:00:08.0) NSID 1 from core 2: 4761.49 18.60 3358.87 860.99 12594.94 00:10:30.706 PCIE (0000:00:08.0) NSID 2 from core 2: 4761.49 18.60 3357.08 857.67 12203.72 00:10:30.706 PCIE (0000:00:08.0) NSID 3 from core 2: 4761.49 18.60 3357.04 796.80 12488.57 00:10:30.706 ======================================================== 00:10:30.706 Total : 28568.91 111.60 3358.55 780.18 13213.00 00:10:30.706 00:10:30.706 23:44:00 -- nvme/nvme.sh@65 -- # wait 64497 00:10:30.706 23:44:00 -- nvme/nvme.sh@66 -- # wait 64498 00:10:30.706 00:10:30.706 real 0m10.942s 00:10:30.706 user 0m18.654s 00:10:30.706 sys 0m0.628s 00:10:30.706 23:44:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:30.706 ************************************ 00:10:30.706 23:44:00 -- common/autotest_common.sh@10 -- # set +x 00:10:30.706 END TEST nvme_multi_secondary 00:10:30.706 ************************************ 00:10:30.706 23:44:00 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:30.706 23:44:00 -- nvme/nvme.sh@102 -- # kill_stub 00:10:30.706 23:44:00 -- common/autotest_common.sh@1075 -- # [[ -e /proc/63443 ]] 00:10:30.706 23:44:00 -- common/autotest_common.sh@1076 -- # kill 63443 00:10:30.706 23:44:00 -- common/autotest_common.sh@1077 -- # wait 63443 00:10:30.968 [2024-12-13 23:44:01.649813] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64370) is not found. Dropping the request. 00:10:30.968 [2024-12-13 23:44:01.649862] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64370) is not found. Dropping the request. 00:10:30.968 [2024-12-13 23:44:01.649873] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64370) is not found. Dropping the request. 00:10:30.968 [2024-12-13 23:44:01.649884] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64370) is not found. Dropping the request. 00:10:31.540 [2024-12-13 23:44:02.197458] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64370) is not found. Dropping the request. 00:10:31.540 [2024-12-13 23:44:02.197515] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64370) is not found. Dropping the request. 00:10:31.540 [2024-12-13 23:44:02.197526] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64370) is not found. Dropping the request. 00:10:31.540 [2024-12-13 23:44:02.197537] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64370) is not found. Dropping the request. 00:10:32.483 [2024-12-13 23:44:03.168143] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64370) is not found. Dropping the request. 00:10:32.483 [2024-12-13 23:44:03.168200] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64370) is not found. 
Dropping the request. 00:10:32.483 [2024-12-13 23:44:03.168214] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64370) is not found. Dropping the request. 00:10:32.483 [2024-12-13 23:44:03.168226] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64370) is not found. Dropping the request. 00:10:34.399 [2024-12-13 23:44:04.675614] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64370) is not found. Dropping the request. 00:10:34.399 [2024-12-13 23:44:04.675684] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64370) is not found. Dropping the request. 00:10:34.399 [2024-12-13 23:44:04.675698] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64370) is not found. Dropping the request. 00:10:34.399 [2024-12-13 23:44:04.675716] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64370) is not found. Dropping the request. 00:10:34.399 23:44:04 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:10:34.399 23:44:04 -- common/autotest_common.sh@1083 -- # echo 2 00:10:34.399 23:44:04 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:34.399 23:44:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:34.399 23:44:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:34.399 23:44:04 -- common/autotest_common.sh@10 -- # set +x 00:10:34.399 ************************************ 00:10:34.399 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:34.399 ************************************ 00:10:34.399 23:44:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:34.399 * Looking for test storage... 00:10:34.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:34.399 23:44:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:34.399 23:44:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:34.399 23:44:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:34.399 23:44:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:34.399 23:44:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:34.399 23:44:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:34.399 23:44:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:34.399 23:44:05 -- scripts/common.sh@335 -- # IFS=.-: 00:10:34.399 23:44:05 -- scripts/common.sh@335 -- # read -ra ver1 00:10:34.399 23:44:05 -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.399 23:44:05 -- scripts/common.sh@336 -- # read -ra ver2 00:10:34.399 23:44:05 -- scripts/common.sh@337 -- # local 'op=<' 00:10:34.399 23:44:05 -- scripts/common.sh@339 -- # ver1_l=2 00:10:34.399 23:44:05 -- scripts/common.sh@340 -- # ver2_l=1 00:10:34.399 23:44:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:34.399 23:44:05 -- scripts/common.sh@343 -- # case "$op" in 00:10:34.399 23:44:05 -- scripts/common.sh@344 -- # : 1 00:10:34.399 23:44:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:34.399 23:44:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:34.399 23:44:05 -- scripts/common.sh@364 -- # decimal 1 00:10:34.399 23:44:05 -- scripts/common.sh@352 -- # local d=1 00:10:34.399 23:44:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.399 23:44:05 -- scripts/common.sh@354 -- # echo 1 00:10:34.399 23:44:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:34.399 23:44:05 -- scripts/common.sh@365 -- # decimal 2 00:10:34.399 23:44:05 -- scripts/common.sh@352 -- # local d=2 00:10:34.399 23:44:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.399 23:44:05 -- scripts/common.sh@354 -- # echo 2 00:10:34.399 23:44:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:34.399 23:44:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:34.399 23:44:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:34.399 23:44:05 -- scripts/common.sh@367 -- # return 0 00:10:34.399 23:44:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.399 23:44:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:34.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.399 --rc genhtml_branch_coverage=1 00:10:34.399 --rc genhtml_function_coverage=1 00:10:34.399 --rc genhtml_legend=1 00:10:34.399 --rc geninfo_all_blocks=1 00:10:34.399 --rc geninfo_unexecuted_blocks=1 00:10:34.399 00:10:34.399 ' 00:10:34.399 23:44:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:34.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.399 --rc genhtml_branch_coverage=1 00:10:34.399 --rc genhtml_function_coverage=1 00:10:34.399 --rc genhtml_legend=1 00:10:34.399 --rc geninfo_all_blocks=1 00:10:34.399 --rc geninfo_unexecuted_blocks=1 00:10:34.399 00:10:34.399 ' 00:10:34.399 23:44:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:34.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.399 --rc genhtml_branch_coverage=1 00:10:34.399 --rc genhtml_function_coverage=1 00:10:34.399 --rc genhtml_legend=1 00:10:34.399 --rc geninfo_all_blocks=1 00:10:34.399 --rc geninfo_unexecuted_blocks=1 00:10:34.399 00:10:34.399 ' 00:10:34.399 23:44:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:34.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.399 --rc genhtml_branch_coverage=1 00:10:34.399 --rc genhtml_function_coverage=1 00:10:34.399 --rc genhtml_legend=1 00:10:34.399 --rc geninfo_all_blocks=1 00:10:34.399 --rc geninfo_unexecuted_blocks=1 00:10:34.399 00:10:34.399 ' 00:10:34.399 23:44:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:34.399 23:44:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:34.399 23:44:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:34.399 23:44:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:34.399 23:44:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:34.399 23:44:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:34.399 23:44:05 -- common/autotest_common.sh@1519 -- # bdfs=() 00:10:34.399 23:44:05 -- common/autotest_common.sh@1519 -- # local bdfs 00:10:34.399 23:44:05 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:10:34.399 23:44:05 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:10:34.399 23:44:05 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:34.399 23:44:05 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:34.399 23:44:05 -- common/autotest_common.sh@1509 
-- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:34.399 23:44:05 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:34.399 23:44:05 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:34.399 23:44:05 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:34.660 23:44:05 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:34.660 23:44:05 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:10:34.660 23:44:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:10:34.660 23:44:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:10:34.660 23:44:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64699 00:10:34.660 23:44:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:34.660 23:44:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64699 00:10:34.660 23:44:05 -- common/autotest_common.sh@829 -- # '[' -z 64699 ']' 00:10:34.660 23:44:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.660 23:44:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:34.660 23:44:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.660 23:44:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:34.660 23:44:05 -- common/autotest_common.sh@10 -- # set +x 00:10:34.660 23:44:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:34.660 [2024-12-13 23:44:05.189555] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:34.660 [2024-12-13 23:44:05.189645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64699 ] 00:10:34.660 [2024-12-13 23:44:05.339406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:34.918 [2024-12-13 23:44:05.539255] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:34.918 [2024-12-13 23:44:05.539644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.918 [2024-12-13 23:44:05.540058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:34.918 [2024-12-13 23:44:05.540352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:34.918 [2024-12-13 23:44:05.540432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.297 23:44:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:36.297 23:44:06 -- common/autotest_common.sh@862 -- # return 0 00:10:36.297 23:44:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:10:36.297 23:44:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.297 23:44:06 -- common/autotest_common.sh@10 -- # set +x 00:10:36.297 nvme0n1 00:10:36.298 23:44:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.298 23:44:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:36.298 23:44:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_53NN9.txt 00:10:36.298 23:44:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:36.298 23:44:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.298 23:44:06 -- common/autotest_common.sh@10 -- # set +x 00:10:36.298 true 00:10:36.298 23:44:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.298 23:44:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:36.298 23:44:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1734133446 00:10:36.298 23:44:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64735 00:10:36.298 23:44:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:36.298 23:44:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:36.298 23:44:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:38.206 23:44:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.206 23:44:08 -- common/autotest_common.sh@10 -- # set +x 00:10:38.206 [2024-12-13 23:44:08.793975] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:38.206 [2024-12-13 23:44:08.794184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:38.206 [2024-12-13 23:44:08.794203] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:38.206 [2024-12-13 23:44:08.794213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.206 [2024-12-13 23:44:08.795517] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:38.206 23:44:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.206 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64735 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64735 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64735 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:38.206 23:44:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.206 23:44:08 -- common/autotest_common.sh@10 -- # set +x 00:10:38.206 23:44:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_53NN9.txt 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_53NN9.txt 00:10:38.206 23:44:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64699 00:10:38.206 23:44:08 -- common/autotest_common.sh@936 -- # '[' -z 64699 ']' 00:10:38.206 23:44:08 -- common/autotest_common.sh@940 -- # kill -0 64699 00:10:38.206 23:44:08 -- common/autotest_common.sh@941 -- # uname 00:10:38.206 23:44:08 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:38.206 23:44:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64699 00:10:38.206 23:44:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:38.206 23:44:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:38.206 killing process with pid 64699 00:10:38.206 23:44:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64699' 00:10:38.206 23:44:08 -- common/autotest_common.sh@955 -- # kill 64699 00:10:38.206 23:44:08 -- common/autotest_common.sh@960 -- # wait 64699 00:10:39.582 23:44:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:39.583 23:44:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:39.583 ************************************ 00:10:39.583 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:39.583 ************************************ 00:10:39.583 00:10:39.583 real 0m5.169s 00:10:39.583 user 0m18.239s 00:10:39.583 sys 0m0.530s 00:10:39.583 23:44:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:39.583 23:44:10 -- common/autotest_common.sh@10 -- # set +x 00:10:39.583 23:44:10 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:39.583 23:44:10 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:39.583 23:44:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:39.583 23:44:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:39.583 23:44:10 -- common/autotest_common.sh@10 -- # set +x 00:10:39.583 ************************************ 00:10:39.583 START TEST nvme_fio 00:10:39.583 ************************************ 00:10:39.583 23:44:10 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:10:39.583 23:44:10 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:39.583 23:44:10 -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:39.583 23:44:10 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:39.583 23:44:10 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:39.583 23:44:10 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:39.583 23:44:10 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:39.583 23:44:10 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:39.583 23:44:10 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:39.583 23:44:10 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:39.583 23:44:10 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:39.583 23:44:10 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:10:39.583 23:44:10 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:39.583 23:44:10 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:39.583 23:44:10 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:39.583 23:44:10 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:39.844 23:44:10 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:39.844 23:44:10 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:40.104 23:44:10 -- nvme/nvme.sh@41 -- # bs=4096 00:10:40.104 23:44:10 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:40.104 23:44:10 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:40.104 23:44:10 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:40.104 23:44:10 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:40.104 23:44:10 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:40.104 23:44:10 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:40.104 23:44:10 -- common/autotest_common.sh@1330 -- # shift 00:10:40.104 23:44:10 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:40.104 23:44:10 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:40.104 23:44:10 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:40.104 23:44:10 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:40.104 23:44:10 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:40.104 23:44:10 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:40.104 23:44:10 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:40.104 23:44:10 -- common/autotest_common.sh@1336 -- # break 00:10:40.104 23:44:10 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:40.104 23:44:10 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:40.365 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:40.365 fio-3.35 00:10:40.365 Starting 1 thread 00:10:46.992 00:10:46.992 test: (groupid=0, jobs=1): err= 0: pid=64872: Fri Dec 13 23:44:16 2024 00:10:46.992 read: IOPS=20.9k, BW=81.6MiB/s (85.6MB/s)(163MiB/2001msec) 00:10:46.992 slat (nsec): min=3328, max=69643, avg=5186.23, stdev=2474.76 00:10:46.992 clat (usec): min=223, max=10220, avg=3042.66, stdev=972.44 00:10:46.992 lat (usec): min=228, max=10224, avg=3047.85, stdev=973.63 00:10:46.992 clat percentiles (usec): 00:10:46.992 | 1.00th=[ 1844], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2442], 00:10:46.992 | 30.00th=[ 2540], 40.00th=[ 2638], 50.00th=[ 2737], 60.00th=[ 2835], 00:10:46.992 | 70.00th=[ 2999], 80.00th=[ 3359], 90.00th=[ 4424], 95.00th=[ 5276], 00:10:46.992 | 99.00th=[ 6718], 99.50th=[ 7046], 99.90th=[ 7898], 99.95th=[ 8586], 00:10:46.992 | 99.99th=[ 9896] 00:10:46.992 bw ( KiB/s): min=84216, max=86656, per=100.00%, avg=85053.00, stdev=1388.69, samples=3 00:10:46.992 iops : min=21054, max=21664, avg=21263.00, stdev=347.38, samples=3 00:10:46.992 write: IOPS=20.8k, BW=81.2MiB/s (85.2MB/s)(163MiB/2001msec); 0 zone resets 00:10:46.992 slat (nsec): min=3433, max=72006, avg=5336.46, stdev=2544.91 00:10:46.992 clat (usec): min=241, max=9935, avg=3073.48, stdev=983.87 00:10:46.992 lat (usec): min=246, max=9949, avg=3078.82, stdev=985.07 00:10:46.992 clat percentiles (usec): 00:10:46.992 | 1.00th=[ 1860], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2474], 00:10:46.992 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2737], 60.00th=[ 2868], 00:10:46.992 | 70.00th=[ 3032], 80.00th=[ 3425], 90.00th=[ 4490], 95.00th=[ 5342], 00:10:46.992 | 99.00th=[ 6783], 99.50th=[ 
7111], 99.90th=[ 7832], 99.95th=[ 8455], 00:10:46.992 | 99.99th=[ 9503] 00:10:46.992 bw ( KiB/s): min=84175, max=86504, per=100.00%, avg=85218.33, stdev=1183.26, samples=3 00:10:46.992 iops : min=21043, max=21626, avg=21304.33, stdev=296.15, samples=3 00:10:46.992 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01% 00:10:46.992 lat (msec) : 2=1.93%, 4=84.78%, 10=13.27%, 20=0.01% 00:10:46.992 cpu : usr=98.95%, sys=0.15%, ctx=13, majf=0, minf=608 00:10:46.992 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:46.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.992 issued rwts: total=41795,41615,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.992 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.992 00:10:46.992 Run status group 0 (all jobs): 00:10:46.992 READ: bw=81.6MiB/s (85.6MB/s), 81.6MiB/s-81.6MiB/s (85.6MB/s-85.6MB/s), io=163MiB (171MB), run=2001-2001msec 00:10:46.992 WRITE: bw=81.2MiB/s (85.2MB/s), 81.2MiB/s-81.2MiB/s (85.2MB/s-85.2MB/s), io=163MiB (170MB), run=2001-2001msec 00:10:46.992 ----------------------------------------------------- 00:10:46.992 Suppressions used: 00:10:46.992 count bytes template 00:10:46.992 1 32 /usr/src/fio/parse.c 00:10:46.992 1 8 libtcmalloc_minimal.so 00:10:46.992 ----------------------------------------------------- 00:10:46.992 00:10:46.992 23:44:16 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:46.992 23:44:16 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:46.992 23:44:16 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:46.992 23:44:16 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:46.992 23:44:17 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:46.992 23:44:17 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:46.992 23:44:17 -- nvme/nvme.sh@41 -- # bs=4096 00:10:46.992 23:44:17 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:46.992 23:44:17 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:46.992 23:44:17 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:46.993 23:44:17 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:46.993 23:44:17 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:46.993 23:44:17 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:46.993 23:44:17 -- common/autotest_common.sh@1330 -- # shift 00:10:46.993 23:44:17 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:46.993 23:44:17 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:46.993 23:44:17 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:46.993 23:44:17 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:46.993 23:44:17 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:46.993 23:44:17 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:46.993 23:44:17 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 
]] 00:10:46.993 23:44:17 -- common/autotest_common.sh@1336 -- # break 00:10:46.993 23:44:17 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:46.993 23:44:17 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:46.993 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:46.993 fio-3.35 00:10:46.993 Starting 1 thread 00:10:53.577 00:10:53.577 test: (groupid=0, jobs=1): err= 0: pid=64955: Fri Dec 13 23:44:23 2024 00:10:53.577 read: IOPS=18.3k, BW=71.4MiB/s (74.8MB/s)(143MiB/2001msec) 00:10:53.577 slat (nsec): min=4400, max=73351, avg=5891.20, stdev=2863.81 00:10:53.577 clat (usec): min=223, max=11039, avg=3473.13, stdev=1058.44 00:10:53.577 lat (usec): min=229, max=11044, avg=3479.02, stdev=1059.62 00:10:53.577 clat percentiles (usec): 00:10:53.577 | 1.00th=[ 2212], 5.00th=[ 2540], 10.00th=[ 2638], 20.00th=[ 2769], 00:10:53.577 | 30.00th=[ 2900], 40.00th=[ 2999], 50.00th=[ 3097], 60.00th=[ 3228], 00:10:53.577 | 70.00th=[ 3458], 80.00th=[ 4047], 90.00th=[ 5080], 95.00th=[ 5866], 00:10:53.577 | 99.00th=[ 6980], 99.50th=[ 7504], 99.90th=[ 9241], 99.95th=[ 9634], 00:10:53.577 | 99.99th=[10814] 00:10:53.577 bw ( KiB/s): min=70432, max=75688, per=100.00%, avg=73418.67, stdev=2700.43, samples=3 00:10:53.577 iops : min=17608, max=18922, avg=18354.67, stdev=675.11, samples=3 00:10:53.577 write: IOPS=18.3k, BW=71.4MiB/s (74.8MB/s)(143MiB/2001msec); 0 zone resets 00:10:53.577 slat (nsec): min=4505, max=71306, avg=6092.91, stdev=2871.00 00:10:53.577 clat (usec): min=233, max=10691, avg=3505.65, stdev=1067.04 00:10:53.577 lat (usec): min=238, max=10696, avg=3511.74, stdev=1068.18 00:10:53.577 clat percentiles (usec): 00:10:53.577 | 1.00th=[ 2212], 5.00th=[ 2540], 10.00th=[ 2671], 20.00th=[ 2802], 00:10:53.577 | 30.00th=[ 2933], 40.00th=[ 3032], 50.00th=[ 3130], 60.00th=[ 3261], 00:10:53.577 | 70.00th=[ 3490], 80.00th=[ 4047], 90.00th=[ 5080], 95.00th=[ 5932], 00:10:53.577 | 99.00th=[ 7111], 99.50th=[ 7570], 99.90th=[ 9110], 99.95th=[ 9372], 00:10:53.577 | 99.99th=[10159] 00:10:53.577 bw ( KiB/s): min=69896, max=75584, per=100.00%, avg=73266.67, stdev=2986.72, samples=3 00:10:53.577 iops : min=17474, max=18896, avg=18316.67, stdev=746.68, samples=3 00:10:53.577 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:10:53.577 lat (msec) : 2=0.55%, 4=78.96%, 10=20.43%, 20=0.02% 00:10:53.577 cpu : usr=98.90%, sys=0.10%, ctx=4, majf=0, minf=608 00:10:53.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:53.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.577 issued rwts: total=36550,36555,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.577 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.577 00:10:53.577 Run status group 0 (all jobs): 00:10:53.577 READ: bw=71.4MiB/s (74.8MB/s), 71.4MiB/s-71.4MiB/s (74.8MB/s-74.8MB/s), io=143MiB (150MB), run=2001-2001msec 00:10:53.577 WRITE: bw=71.4MiB/s (74.8MB/s), 71.4MiB/s-71.4MiB/s (74.8MB/s-74.8MB/s), io=143MiB (150MB), run=2001-2001msec 00:10:53.577 ----------------------------------------------------- 00:10:53.577 Suppressions used: 00:10:53.577 count bytes template 00:10:53.577 1 32 /usr/src/fio/parse.c 00:10:53.577 1 8 
libtcmalloc_minimal.so 00:10:53.577 ----------------------------------------------------- 00:10:53.577 00:10:53.577 23:44:23 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:53.577 23:44:23 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:53.577 23:44:23 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:53.577 23:44:23 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:53.577 23:44:23 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:53.577 23:44:23 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:53.577 23:44:23 -- nvme/nvme.sh@41 -- # bs=4096 00:10:53.577 23:44:23 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:53.577 23:44:23 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:53.577 23:44:23 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:53.577 23:44:23 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:53.577 23:44:23 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:53.577 23:44:23 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:53.577 23:44:23 -- common/autotest_common.sh@1330 -- # shift 00:10:53.577 23:44:23 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:53.577 23:44:23 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:53.577 23:44:23 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:53.577 23:44:23 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:53.577 23:44:23 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:53.577 23:44:24 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:53.577 23:44:24 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:53.577 23:44:24 -- common/autotest_common.sh@1336 -- # break 00:10:53.577 23:44:24 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:53.577 23:44:24 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:53.577 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:53.577 fio-3.35 00:10:53.577 Starting 1 thread 00:11:00.162 00:11:00.162 test: (groupid=0, jobs=1): err= 0: pid=65054: Fri Dec 13 23:44:29 2024 00:11:00.162 read: IOPS=16.7k, BW=65.1MiB/s (68.3MB/s)(130MiB/2001msec) 00:11:00.162 slat (nsec): min=4796, max=70953, avg=6297.15, stdev=3156.83 00:11:00.162 clat (usec): min=1261, max=14867, avg=3801.59, stdev=1194.58 00:11:00.162 lat (usec): min=1267, max=14927, avg=3807.88, stdev=1195.76 00:11:00.162 clat percentiles (usec): 00:11:00.162 | 1.00th=[ 2343], 5.00th=[ 2737], 10.00th=[ 2835], 20.00th=[ 2999], 00:11:00.162 | 30.00th=[ 3097], 40.00th=[ 3228], 50.00th=[ 3359], 60.00th=[ 3556], 00:11:00.162 | 70.00th=[ 3851], 80.00th=[ 4621], 90.00th=[ 5604], 95.00th=[ 6390], 00:11:00.162 | 99.00th=[ 7635], 99.50th=[ 8225], 99.90th=[ 9896], 99.95th=[13304], 
00:11:00.162 | 99.99th=[14746] 00:11:00.162 bw ( KiB/s): min=63600, max=76648, per=100.00%, avg=68600.00, stdev=7037.78, samples=3 00:11:00.162 iops : min=15900, max=19162, avg=17150.00, stdev=1759.44, samples=3 00:11:00.162 write: IOPS=16.7k, BW=65.2MiB/s (68.4MB/s)(130MiB/2001msec); 0 zone resets 00:11:00.162 slat (nsec): min=4966, max=80166, avg=6549.43, stdev=3151.53 00:11:00.162 clat (usec): min=1270, max=14783, avg=3843.21, stdev=1208.35 00:11:00.162 lat (usec): min=1276, max=14798, avg=3849.76, stdev=1209.57 00:11:00.162 clat percentiles (usec): 00:11:00.162 | 1.00th=[ 2376], 5.00th=[ 2769], 10.00th=[ 2868], 20.00th=[ 3032], 00:11:00.162 | 30.00th=[ 3130], 40.00th=[ 3261], 50.00th=[ 3392], 60.00th=[ 3589], 00:11:00.162 | 70.00th=[ 3884], 80.00th=[ 4686], 90.00th=[ 5669], 95.00th=[ 6390], 00:11:00.162 | 99.00th=[ 7767], 99.50th=[ 8356], 99.90th=[10814], 99.95th=[13566], 00:11:00.162 | 99.99th=[14615] 00:11:00.162 bw ( KiB/s): min=62856, max=76408, per=100.00%, avg=68416.00, stdev=7095.78, samples=3 00:11:00.162 iops : min=15714, max=19102, avg=17104.00, stdev=1773.95, samples=3 00:11:00.162 lat (msec) : 2=0.53%, 4=72.06%, 10=27.29%, 20=0.11% 00:11:00.162 cpu : usr=98.80%, sys=0.15%, ctx=2, majf=0, minf=608 00:11:00.162 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:00.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:00.162 issued rwts: total=33345,33407,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.162 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:00.162 00:11:00.162 Run status group 0 (all jobs): 00:11:00.162 READ: bw=65.1MiB/s (68.3MB/s), 65.1MiB/s-65.1MiB/s (68.3MB/s-68.3MB/s), io=130MiB (137MB), run=2001-2001msec 00:11:00.162 WRITE: bw=65.2MiB/s (68.4MB/s), 65.2MiB/s-65.2MiB/s (68.4MB/s-68.4MB/s), io=130MiB (137MB), run=2001-2001msec 00:11:00.162 ----------------------------------------------------- 00:11:00.162 Suppressions used: 00:11:00.162 count bytes template 00:11:00.162 1 32 /usr/src/fio/parse.c 00:11:00.162 1 8 libtcmalloc_minimal.so 00:11:00.162 ----------------------------------------------------- 00:11:00.162 00:11:00.162 23:44:29 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:00.162 23:44:29 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:00.162 23:44:29 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:00.162 23:44:29 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:00.162 23:44:30 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:00.162 23:44:30 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:00.162 23:44:30 -- nvme/nvme.sh@41 -- # bs=4096 00:11:00.162 23:44:30 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:00.162 23:44:30 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:00.162 23:44:30 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:00.162 23:44:30 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:00.163 23:44:30 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:00.163 23:44:30 -- 
common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:00.163 23:44:30 -- common/autotest_common.sh@1330 -- # shift 00:11:00.163 23:44:30 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:00.163 23:44:30 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:00.163 23:44:30 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:00.163 23:44:30 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:00.163 23:44:30 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:00.163 23:44:30 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:00.163 23:44:30 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:00.163 23:44:30 -- common/autotest_common.sh@1336 -- # break 00:11:00.163 23:44:30 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:00.163 23:44:30 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:00.163 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:00.163 fio-3.35 00:11:00.163 Starting 1 thread 00:11:08.293 00:11:08.293 test: (groupid=0, jobs=1): err= 0: pid=65141: Fri Dec 13 23:44:38 2024 00:11:08.293 read: IOPS=18.8k, BW=73.2MiB/s (76.8MB/s)(147MiB/2001msec) 00:11:08.293 slat (nsec): min=4178, max=61763, avg=5921.37, stdev=2815.03 00:11:08.293 clat (usec): min=263, max=10615, avg=3388.12, stdev=1111.21 00:11:08.293 lat (usec): min=268, max=10664, avg=3394.04, stdev=1112.59 00:11:08.293 clat percentiles (usec): 00:11:08.293 | 1.00th=[ 2147], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2638], 00:11:08.293 | 30.00th=[ 2737], 40.00th=[ 2868], 50.00th=[ 2999], 60.00th=[ 3195], 00:11:08.293 | 70.00th=[ 3425], 80.00th=[ 3949], 90.00th=[ 5080], 95.00th=[ 5866], 00:11:08.293 | 99.00th=[ 7177], 99.50th=[ 7898], 99.90th=[ 8979], 99.95th=[ 9896], 00:11:08.293 | 99.99th=[10552] 00:11:08.293 bw ( KiB/s): min=67640, max=85200, per=98.13%, avg=73600.00, stdev=10047.17, samples=3 00:11:08.293 iops : min=16910, max=21300, avg=18400.00, stdev=2511.79, samples=3 00:11:08.293 write: IOPS=18.8k, BW=73.2MiB/s (76.8MB/s)(147MiB/2001msec); 0 zone resets 00:11:08.293 slat (nsec): min=4252, max=82911, avg=6182.82, stdev=3023.08 00:11:08.293 clat (usec): min=238, max=10554, avg=3412.73, stdev=1124.73 00:11:08.293 lat (usec): min=243, max=10565, avg=3418.91, stdev=1126.14 00:11:08.293 clat percentiles (usec): 00:11:08.293 | 1.00th=[ 2180], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2638], 00:11:08.293 | 30.00th=[ 2769], 40.00th=[ 2868], 50.00th=[ 3032], 60.00th=[ 3195], 00:11:08.293 | 70.00th=[ 3458], 80.00th=[ 3982], 90.00th=[ 5145], 95.00th=[ 5866], 00:11:08.293 | 99.00th=[ 7308], 99.50th=[ 8029], 99.90th=[ 9241], 99.95th=[ 9896], 00:11:08.293 | 99.99th=[10421] 00:11:08.293 bw ( KiB/s): min=67448, max=84960, per=98.01%, avg=73514.67, stdev=9917.92, samples=3 00:11:08.293 iops : min=16862, max=21240, avg=18378.67, stdev=2479.48, samples=3 00:11:08.293 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:11:08.293 lat (msec) : 2=0.54%, 4=79.65%, 10=19.72%, 20=0.04% 00:11:08.293 cpu : usr=98.90%, sys=0.10%, ctx=6, majf=0, minf=606 00:11:08.293 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:08.293 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.293 issued rwts: total=37519,37523,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.293 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.293 00:11:08.293 Run status group 0 (all jobs): 00:11:08.293 READ: bw=73.2MiB/s (76.8MB/s), 73.2MiB/s-73.2MiB/s (76.8MB/s-76.8MB/s), io=147MiB (154MB), run=2001-2001msec 00:11:08.293 WRITE: bw=73.2MiB/s (76.8MB/s), 73.2MiB/s-73.2MiB/s (76.8MB/s-76.8MB/s), io=147MiB (154MB), run=2001-2001msec 00:11:08.293 ----------------------------------------------------- 00:11:08.293 Suppressions used: 00:11:08.293 count bytes template 00:11:08.293 1 32 /usr/src/fio/parse.c 00:11:08.293 1 8 libtcmalloc_minimal.so 00:11:08.293 ----------------------------------------------------- 00:11:08.293 00:11:08.294 23:44:38 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:08.294 23:44:38 -- nvme/nvme.sh@46 -- # true 00:11:08.294 00:11:08.294 real 0m28.838s 00:11:08.294 user 0m20.180s 00:11:08.294 sys 0m14.261s 00:11:08.294 23:44:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:08.294 ************************************ 00:11:08.294 END TEST nvme_fio 00:11:08.294 23:44:38 -- common/autotest_common.sh@10 -- # set +x 00:11:08.294 ************************************ 00:11:08.554 00:11:08.554 real 1m43.722s 00:11:08.554 user 3m45.622s 00:11:08.554 sys 0m24.706s 00:11:08.554 23:44:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:08.554 ************************************ 00:11:08.554 END TEST nvme 00:11:08.554 ************************************ 00:11:08.554 23:44:39 -- common/autotest_common.sh@10 -- # set +x 00:11:08.554 23:44:39 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]] 00:11:08.554 23:44:39 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:08.554 23:44:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:08.554 23:44:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:08.555 23:44:39 -- common/autotest_common.sh@10 -- # set +x 00:11:08.555 ************************************ 00:11:08.555 START TEST nvme_scc 00:11:08.555 ************************************ 00:11:08.555 23:44:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:08.555 * Looking for test storage... 
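Each of the four fio runs traced above follows the same pattern: identify the controller, settle on bs=4096, then launch fio with the SPDK nvme ioengine preloaded. A minimal sketch of that invocation, assuming the paths shown in the trace (SPDK tree under /home/vagrant/spdk_repo/spdk, fio under /usr/src/fio); note that fio splits --filename on ':', which is why the PCIe traddr is written with '.' in place of ':':

    # Reproduce one fio_nvme invocation from the trace above (paths assumed from the log).
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    PLUGIN=$SPDK_DIR/build/fio/spdk_nvme

    # 0000:00:06.0 becomes 0000.00.06.0 because fio treats ':' as a filename separator.
    FILENAME='trtype=PCIe traddr=0000.00.06.0'

    # An ASAN-instrumented plugin needs the sanitizer runtime preloaded ahead of it,
    # which is what the ldd/grep/awk sequence in the trace resolves.
    asan_lib=$(ldd "$PLUGIN" | grep libasan | awk '{print $3}')

    LD_PRELOAD="${asan_lib:+$asan_lib }$PLUGIN" /usr/src/fio/fio \
        "$SPDK_DIR/app/fio/nvme/example_config.fio" \
        "--filename=$FILENAME" --bs=4096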
00:11:08.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:08.555 23:44:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:08.555 23:44:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:08.555 23:44:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:08.555 23:44:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:08.555 23:44:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:08.555 23:44:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:08.555 23:44:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:08.555 23:44:39 -- scripts/common.sh@335 -- # IFS=.-: 00:11:08.555 23:44:39 -- scripts/common.sh@335 -- # read -ra ver1 00:11:08.555 23:44:39 -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.555 23:44:39 -- scripts/common.sh@336 -- # read -ra ver2 00:11:08.555 23:44:39 -- scripts/common.sh@337 -- # local 'op=<' 00:11:08.555 23:44:39 -- scripts/common.sh@339 -- # ver1_l=2 00:11:08.555 23:44:39 -- scripts/common.sh@340 -- # ver2_l=1 00:11:08.555 23:44:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:08.555 23:44:39 -- scripts/common.sh@343 -- # case "$op" in 00:11:08.555 23:44:39 -- scripts/common.sh@344 -- # : 1 00:11:08.555 23:44:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:08.555 23:44:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:08.555 23:44:39 -- scripts/common.sh@364 -- # decimal 1 00:11:08.555 23:44:39 -- scripts/common.sh@352 -- # local d=1 00:11:08.555 23:44:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.555 23:44:39 -- scripts/common.sh@354 -- # echo 1 00:11:08.555 23:44:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:08.555 23:44:39 -- scripts/common.sh@365 -- # decimal 2 00:11:08.555 23:44:39 -- scripts/common.sh@352 -- # local d=2 00:11:08.555 23:44:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.555 23:44:39 -- scripts/common.sh@354 -- # echo 2 00:11:08.555 23:44:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:08.555 23:44:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:08.555 23:44:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:08.555 23:44:39 -- scripts/common.sh@367 -- # return 0 00:11:08.555 23:44:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.555 23:44:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:08.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.555 --rc genhtml_branch_coverage=1 00:11:08.555 --rc genhtml_function_coverage=1 00:11:08.555 --rc genhtml_legend=1 00:11:08.555 --rc geninfo_all_blocks=1 00:11:08.555 --rc geninfo_unexecuted_blocks=1 00:11:08.555 00:11:08.555 ' 00:11:08.555 23:44:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:08.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.555 --rc genhtml_branch_coverage=1 00:11:08.555 --rc genhtml_function_coverage=1 00:11:08.555 --rc genhtml_legend=1 00:11:08.555 --rc geninfo_all_blocks=1 00:11:08.555 --rc geninfo_unexecuted_blocks=1 00:11:08.555 00:11:08.555 ' 00:11:08.555 23:44:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:08.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.555 --rc genhtml_branch_coverage=1 00:11:08.555 --rc genhtml_function_coverage=1 00:11:08.555 --rc genhtml_legend=1 00:11:08.555 --rc geninfo_all_blocks=1 00:11:08.555 --rc geninfo_unexecuted_blocks=1 00:11:08.555 00:11:08.555 ' 00:11:08.555 23:44:39 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:08.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.555 --rc genhtml_branch_coverage=1 00:11:08.555 --rc genhtml_function_coverage=1 00:11:08.555 --rc genhtml_legend=1 00:11:08.555 --rc geninfo_all_blocks=1 00:11:08.555 --rc geninfo_unexecuted_blocks=1 00:11:08.555 00:11:08.555 ' 00:11:08.555 23:44:39 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:08.555 23:44:39 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:08.555 23:44:39 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:08.555 23:44:39 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:08.555 23:44:39 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:08.555 23:44:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.555 23:44:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.555 23:44:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.555 23:44:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.555 23:44:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.555 23:44:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.555 23:44:39 -- paths/export.sh@5 -- # export PATH 00:11:08.555 23:44:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.555 23:44:39 -- nvme/functions.sh@10 -- # ctrls=() 00:11:08.555 23:44:39 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:08.555 23:44:39 -- nvme/functions.sh@11 -- # nvmes=() 00:11:08.555 23:44:39 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:08.555 23:44:39 -- nvme/functions.sh@12 -- # bdfs=() 00:11:08.555 23:44:39 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:08.555 23:44:39 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:08.555 23:44:39 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:08.555 
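functions.sh, sourced just above, declares its scan state in three associative arrays plus an ordering array before scan_nvme_ctrls runs. A small sketch of those globals with a hypothetical consumer loop; the declarations are taken from the trace, while the mapping semantics and the loop are assumptions for illustration:

    # Globals declared by functions.sh, as traced above (field semantics assumed;
    # the trace shows only the declarations).
    declare -A ctrls   # per-controller identify data, keyed by name (nvme0, ...)
    declare -A nvmes   # per-namespace identify data
    declare -A bdfs    # controller name -> PCI address
    declare -a ordered_ctrls

    # Hypothetical consumer: list scanned controllers with their PCI addresses.
    for name in "${!bdfs[@]}"; do
        printf '%s -> %s\n' "$name" "${bdfs[$name]}"
    done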
23:44:39 -- nvme/functions.sh@14 -- # nvme_name= 00:11:08.555 23:44:39 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:08.555 23:44:39 -- nvme/nvme_scc.sh@12 -- # uname 00:11:08.816 23:44:39 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:08.816 23:44:39 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:08.816 23:44:39 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:09.075 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:09.075 Waiting for block devices as requested 00:11:09.336 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:09.336 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:09.336 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:09.596 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:14.899 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:14.899 23:44:45 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:14.899 23:44:45 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:14.899 23:44:45 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:14.899 23:44:45 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:14.899 23:44:45 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:14.899 23:44:45 -- scripts/common.sh@15 -- # local i 00:11:14.899 23:44:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:14.899 23:44:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:14.899 23:44:45 -- scripts/common.sh@24 -- # return 0 00:11:14.899 23:44:45 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:14.899 23:44:45 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:14.899 23:44:45 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@18 -- # shift 00:11:14.899 23:44:45 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 
00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 
23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.899 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:14.899 23:44:45 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.899 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:14.900 23:44:45 -- 
nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.900 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.900 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.900 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:14.901 23:44:45 
-- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:14.901 
23:44:45 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:14.901 23:44:45 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.901 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.901 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 
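The trace above is nvme_get in nvme/functions.sh finishing the identify-controller parse for nvme0: each "field : value" line printed by nvme-cli id-ctrl is split on the colon with IFS=: and read -r reg val, and every non-empty value is eval'ed into the global associative array nvme0 (functions.sh@17-23). A minimal sketch of that pattern follows, assuming bash 4.3+ and an nvme binary on PATH; the name nvme_get_sketch is illustrative, not the verbatim helper, and the nameref assignment stands in for the eval step the trace shows:

nvme_get_sketch() {
    local -n _ref=$1   # caller passes the array name, e.g. nvme0
    local dev=$2 reg val
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue    # skip banner/blank lines with no value
        reg=${reg//[[:space:]]/}     # field names are single tokens; drop padding
        val=${val# }                 # drop the one space nvme-cli prints after ':'
        _ref[$reg]=$val              # nameref assignment replaces the traced eval
    done < <(nvme id-ctrl "$dev")
}

declare -gA nvme0=()
nvme_get_sketch nvme0 /dev/nvme0
echo "sn=${nvme0[sn]} subnqn=${nvme0[subnqn]} ps0=${nvme0[ps0]}"

The eval in the traced code exists because the target array name is dynamic; a nameref achieves the same effect without re-parsing an assignment string, which is why the sketch prefers it.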
00:11:14.902 23:44:45 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:11:14.902 23:44:45 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:11:14.902 23:44:45 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:11:14.902 23:44:45 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0
00:11:14.902 23:44:45 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:11:14.902 23:44:45 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:14.902 23:44:45 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:11:14.902 23:44:45 -- nvme/functions.sh@49 -- # pci=0000:00:08.0
00:11:14.902 23:44:45 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0
00:11:14.902 23:44:45 -- scripts/common.sh@15 -- # local i
00:11:14.902 23:44:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]]
00:11:14.902 23:44:45 -- scripts/common.sh@22 -- # [[ -z '' ]]
00:11:14.902 23:44:45 -- scripts/common.sh@24 -- # return 0
00:11:14.902 23:44:45 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:11:14.902 23:44:45 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:11:14.902 23:44:45 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:11:14.902 23:44:45 -- nvme/functions.sh@18 -- # shift
00:11:14.902 23:44:45 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=:
00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val
00:11:14.902 23:44:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=:
00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val
00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"'
00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=:
00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val
00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]]
00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"'
00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=:
00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val
00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 12342 ]]
00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "'
00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 '
00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=:
00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val
00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]]
00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "'
00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=:
00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val
00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]]
00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "'
00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=:
00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val
00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 6 ]]
00:11:14.902 23:44:45 --
nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.902 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.902 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:14.902 23:44:45 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:14.903 23:44:45 -- 
nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.903 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.903 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.903 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 
00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # 
nvme1[awupf]=0 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.904 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:14.904 23:44:45 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.904 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:14.905 23:44:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:14.905 23:44:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:14.905 23:44:45 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:14.905 23:44:45 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@18 -- # shift 00:11:14.905 23:44:45 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 
-- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # 
nvme1n1[nmic]=0 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.905 23:44:45 -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.905 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:14.905 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.905 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 
00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=:
00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val
00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "'
00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=:
00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val
00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "'
00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=:
00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val
00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "'
00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=:
00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val
00:11:14.906 23:44:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:11:14.906 23:44:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:14.906 23:44:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]]
00:11:14.906 23:44:45 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2
00:11:14.906 23:44:45 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2
00:11:14.906 23:44:45 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val
00:11:14.906 23:44:45 -- nvme/functions.sh@18 -- # shift
00:11:14.906 23:44:45 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()'
00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=:
00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val
00:11:14.906 23:44:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2
00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=:
00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val
00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"'
00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000
00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=:
00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val
00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"'
00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000
00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=:
00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val
00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"'
00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000
00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=:
00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val
00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]]
00:11:14.906 23:44:45 --
nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.906 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.906 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:11:14.906 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:11:14.907 23:44:45 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.907 23:44:45 -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:11:14.907 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:11:14.907 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 
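[editor's note] The trace above is nvme_get caching `nvme id-ns` output into a bash associative array: IFS=: splits each "field : value" line, and each pair is eval'd into e.g. nvme1n2[nsze]=0x100000. Below is a minimal sketch of that pattern, assuming nvme-cli at the path shown in the trace and an existing /dev/nvme1n2 — a simplified re-implementation, not the exact nvme/functions.sh code (it uses a nameref where the original uses eval).

    #!/usr/bin/env bash
    # Sketch of the nvme_get pattern seen in the trace: split each
    # "field : value" line of `nvme id-ns` on ':' and store it in a
    # global associative array named after the namespace device.
    nvme_get_sketch() {
        local ref=$1 dev=$2 reg val
        declare -gA "$ref=()"                      # e.g. declare -gA nvme1n2=()
        local -n arr=$ref                          # nameref instead of the eval used upstream
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}               # "lbaf  4 " -> "lbaf4"
            [[ -n $reg && -n $val ]] || continue   # skip the banner/blank lines
            arr[$reg]=${val# }                     # nvme1n2[nsze]=0x100000, ...
        done < <(/usr/local/src/nvme-cli/nvme id-ns "$dev")
    }

    nvme_get_sketch nvme1n2 /dev/nvme1n2
    echo "nsze=${nvme1n2[nsze]} flbas=${nvme1n2[flbas]}"
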
00:11:14.908 23:44:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:14.908 23:44:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:11:14.908 23:44:45 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:11:14.908 23:44:45 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@18 -- # shift 00:11:14.908 23:44:45 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 
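[editor's note] As a consumer-side aside, the fields captured for nvme1n2 above are already enough to recover the active block size: flbas=0x4 selects LBA format 4, whose descriptor reads "ms:0 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte blocks with no metadata. A hedged sketch using the nvme1n2 array completed above:

    # Decode the in-use LBA format from the nvme1n2 fields parsed above.
    fmt=$(( ${nvme1n2[flbas]} & 0xf ))          # low nibble of flbas = format index (4)
    lbaf=${nvme1n2[lbaf$fmt]}                   # "ms:0 lbads:12 rp:0 (in use)"
    lbads=${lbaf#*lbads:}                       # "12 rp:0 (in use)"
    lbads=${lbads%% *}                          # "12"
    echo "block size: $(( 1 << lbads )) bytes"  # 2^12 = 4096
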
00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.908 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # eval 
'nvme1n3[nabspf]="0"' 00:11:14.908 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:11:14.908 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:14.909 23:44:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.909 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.909 23:44:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:14.909 23:44:45 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:14.909 23:44:45 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:14.909 23:44:45 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:14.909 23:44:45 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:14.909 23:44:45 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:14.910 23:44:45 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:14.910 23:44:45 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:14.910 23:44:45 -- scripts/common.sh@15 -- # local i 00:11:14.910 23:44:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:14.910 23:44:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:14.910 23:44:45 -- scripts/common.sh@24 -- # return 0 00:11:14.910 23:44:45 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:14.910 23:44:45 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:14.910 23:44:45 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@18 -- # shift 00:11:14.910 23:44:45 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl 
/dev/nvme2 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 
'nvme2[crdt3]="0"' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:14.910 23:44:45 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.910 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.910 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 
23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 
00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 
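[editor's note] A few of the nvme2 id-ctrl fields just captured are worth decoding: wctemp/cctemp are Kelvin thresholds (343 K and 373 K here), and oacs=0x12a is the optional-admin-command-support bitmask. A sketch, with bit meanings per the NVMe spec:

    # Interpret a few nvme2 id-ctrl values from the trace above.
    echo "warning temp:  $(( ${nvme2[wctemp]} - 273 )) C"    # 343 K -> 70 C
    echo "critical temp: $(( ${nvme2[cctemp]} - 273 )) C"    # 373 K -> 100 C
    (( ${nvme2[oacs]} & 0x02 )) && echo "oacs: Format NVM supported"       # bit 1
    (( ${nvme2[oacs]} & 0x08 )) && echo "oacs: NS management supported"    # bit 3
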
00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.911 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.911 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:14.911 23:44:45 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:14.912 23:44:45 -- 
nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 
-- # eval 'nvme2[sgls]="0x1"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 
-- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:14.912 23:44:45 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.912 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.912 23:44:45 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:14.913 23:44:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:14.913 23:44:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:14.913 23:44:45 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:14.913 23:44:45 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@18 -- # shift 00:11:14.913 23:44:45 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # 
nvme2n1[nsfeat]=0x14 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.913 23:44:45 -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 
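The trace above is nvme/functions.sh's nvme_get() walking the plain-text output of nvme-cli's id-ns one "field : value" line at a time and mirroring each pair into a global associative array named after the device (here nvme2n1). A minimal standalone sketch of that pattern — function name, trimming, and error handling here are illustrative, not the script's actual code:

    # Sketch only: parse "field : value" lines from nvme-cli into a
    # global associative array named after the device (e.g. nvme2n1).
    nvme_get_sketch() {
        local ref=$1 reg val
        declare -gA "$ref=()"                 # e.g. nvme2n1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}          # strip the padded field name
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\${val# }"    # nvme2n1[nsze]=0x17a17a ...
        done < <(nvme id-ns "/dev/$ref")
    }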
00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.913 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.913 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:14.913 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.914 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.914 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.914 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.914 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.914 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.914 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.914 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:14.914 23:44:45 -- nvme/functions.sh@23 
-- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.914 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.914 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.914 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.914 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.914 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.914 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.914 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.914 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.914 23:44:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:14.914 23:44:45 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:14.914 23:44:45 -- 
nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:14.914 23:44:45 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:11:14.914 23:44:45 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:14.914 23:44:45 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:14.914 23:44:45 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:14.914 23:44:45 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:14.914 23:44:45 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:14.914 23:44:45 -- scripts/common.sh@15 -- # local i 00:11:14.914 23:44:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:14.914 23:44:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:14.914 23:44:45 -- scripts/common.sh@24 -- # return 0 00:11:14.914 23:44:45 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:14.914 23:44:45 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:14.914 23:44:45 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:14.914 23:44:45 -- nvme/functions.sh@18 -- # shift 00:11:14.914 23:44:45 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.914 23:44:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:14.914 23:44:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.914 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.914 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.914 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.914 23:44:45 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.914 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:14.914 23:44:45 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:14.914 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- 
# nvme3[cntrltype]=1 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:14.915 23:44:45 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.915 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.915 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:14.916 23:44:45 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 
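The wctemp/cctemp values captured for this controller a few lines back are absolute temperatures in Kelvin per the NVMe identify data, so the QEMU device advertises a 70 °C warning threshold and a 100 °C critical threshold. A throwaway conversion, assuming the array populated above (helper name is illustrative):

    k_to_c() { echo "$(( $1 - 273 )) C"; }
    k_to_c "${nvme3[wctemp]}"   # 343 -> 70 C  (warning threshold)
    k_to_c "${nvme3[cctemp]}"   # 373 -> 100 C (critical threshold)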
00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.916 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:14.916 23:44:45 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:14.916 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
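The sqes=0x66 and cqes=0x44 just captured pack two log2 sizes into one byte: the low nibble is the required submission/completion queue entry size and the high nibble the maximum, so this controller uses the standard 64-byte SQ and 16-byte CQ entries. A quick decode of that convention (a sketch, not part of the test script):

    sqes=0x66
    printf 'SQ entry: min %d B, max %d B\n' \
        $(( 1 << (sqes & 0xf) )) $(( 1 << ((sqes >> 4) & 0xf) ))
    # -> min 64 B, max 64 B; cqes=0x44 decodes to 16-byte CQ entries
    #    the same way.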
00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # 
IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:14.917 23:44:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:14.917 23:44:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:11:14.917 23:44:45 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:11:14.917 23:44:45 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@18 -- # shift 00:11:14.917 23:44:45 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:14.917 
23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.917 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.917 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:14.917 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # 
nvme3n1[nsattr]=0 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.918 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:14.918 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:14.918 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.919 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.919 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:14.919 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:14.919 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:14.919 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.919 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.919 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:14.919 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:14.919 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:14.919 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.919 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.919 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:14.919 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:14.919 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:14.919 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.919 23:44:45 -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:14.919 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:14.919 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:14.919 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:14.919 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.919 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.919 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:14.919 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:14.919 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:14.919 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.919 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.919 23:44:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:14.919 23:44:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:14.919 23:44:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:14.919 23:44:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.919 23:44:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.919 23:44:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:14.919 23:44:45 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:14.919 23:44:45 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:14.919 23:44:45 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:14.919 23:44:45 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:14.919 23:44:45 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:14.919 23:44:45 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:14.919 23:44:45 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:11:14.919 23:44:45 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:14.919 23:44:45 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:11:14.919 23:44:45 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:14.919 23:44:45 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:11:14.919 23:44:45 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:11:14.919 23:44:45 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:14.919 23:44:45 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:14.919 23:44:45 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:11:14.919 23:44:45 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:11:14.919 23:44:45 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:11:14.919 23:44:45 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:11:14.919 23:44:45 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:14.919 23:44:45 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:14.919 23:44:45 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:14.919 23:44:45 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:14.919 23:44:45 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:14.919 23:44:45 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:14.919 23:44:45 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:14.919 23:44:45 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:14.919 23:44:45 -- nvme/functions.sh@197 -- # echo nvme1 00:11:14.919 23:44:45 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:14.919 23:44:45 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:11:14.919 23:44:45 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:11:14.919 23:44:45 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:11:14.919 
23:44:45 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:11:14.919 23:44:45 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:14.919 23:44:45 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:14.919 23:44:45 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:14.919 23:44:45 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:14.919 23:44:45 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:14.919 23:44:45 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:14.919 23:44:45 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:14.919 23:44:45 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:14.919 23:44:45 -- nvme/functions.sh@197 -- # echo nvme0 00:11:14.919 23:44:45 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:14.919 23:44:45 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:11:14.919 23:44:45 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:11:14.919 23:44:45 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:11:14.919 23:44:45 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:11:14.919 23:44:45 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:14.919 23:44:45 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:14.919 23:44:45 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:14.919 23:44:45 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:14.919 23:44:45 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:14.919 23:44:45 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:14.919 23:44:45 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:14.919 23:44:45 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:14.919 23:44:45 -- nvme/functions.sh@197 -- # echo nvme3 00:11:14.919 23:44:45 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:14.919 23:44:45 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:11:14.919 23:44:45 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:11:14.919 23:44:45 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:11:14.919 23:44:45 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:11:14.919 23:44:45 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:14.919 23:44:45 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:14.919 23:44:45 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:14.919 23:44:45 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:14.919 23:44:45 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:14.919 23:44:45 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:14.919 23:44:45 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:14.919 23:44:45 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:14.919 23:44:45 -- nvme/functions.sh@197 -- # echo nvme2 00:11:14.919 23:44:45 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:11:14.919 23:44:45 -- nvme/functions.sh@206 -- # echo nvme1 00:11:14.919 23:44:45 -- nvme/functions.sh@207 -- # return 0 00:11:14.919 23:44:45 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:14.919 23:44:45 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0 00:11:14.919 23:44:45 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:15.861 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:16.122 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:16.122 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:16.122 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:11:16.122 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:16.122 23:44:46 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:16.122 23:44:46 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:16.122 23:44:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:16.122 23:44:46 -- common/autotest_common.sh@10 -- # set +x 00:11:16.122 ************************************ 00:11:16.122 START TEST nvme_simple_copy 00:11:16.122 ************************************ 00:11:16.123 23:44:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:16.383 Initializing NVMe Controllers 00:11:16.383 Attaching to 0000:00:08.0 00:11:16.383 Controller supports SCC. Attached to 0000:00:08.0 00:11:16.383 Namespace ID: 1 size: 4GB 00:11:16.383 Initialization complete. 00:11:16.383 00:11:16.383 Controller QEMU NVMe Ctrl (12342 ) 00:11:16.383 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:16.383 Namespace Block Size:4096 00:11:16.383 Writing LBAs 0 to 63 with Random Data 00:11:16.383 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:16.383 LBAs matching Written Data: 64 00:11:16.383 ************************************ 00:11:16.383 END TEST nvme_simple_copy 00:11:16.383 ************************************ 00:11:16.383 00:11:16.383 real 0m0.279s 00:11:16.383 user 0m0.106s 00:11:16.383 sys 0m0.070s 00:11:16.383 23:44:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:16.383 23:44:47 -- common/autotest_common.sh@10 -- # set +x 00:11:16.383 ************************************ 00:11:16.383 END TEST nvme_scc 00:11:16.383 ************************************ 00:11:16.383 00:11:16.383 real 0m7.988s 00:11:16.383 user 0m1.151s 00:11:16.383 sys 0m1.571s 00:11:16.383 23:44:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:16.383 23:44:47 -- common/autotest_common.sh@10 -- # set +x 00:11:16.645 23:44:47 -- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]] 00:11:16.645 23:44:47 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:11:16.645 23:44:47 -- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]] 00:11:16.645 23:44:47 -- spdk/autotest.sh@225 -- # [[ 1 -eq 1 ]] 00:11:16.645 23:44:47 -- spdk/autotest.sh@226 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:16.645 23:44:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:16.645 23:44:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:16.645 23:44:47 -- common/autotest_common.sh@10 -- # set +x 00:11:16.645 ************************************ 00:11:16.645 START TEST nvme_fdp 00:11:16.645 ************************************ 00:11:16.645 23:44:47 -- common/autotest_common.sh@1114 -- # test/nvme/nvme_fdp.sh 00:11:16.645 * Looking for test storage... 
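The controller selection traced above comes down to a single bit test: ctrl_has_scc reads each controller's ONCS word out of the parsed Identify Controller data (0x15d for all four QEMU controllers here) and checks bit 8, the Simple Copy bit, so nvme1 at 0000:00:08.0 ends up selected; the simple_copy binary then writes LBAs 0 to 63 with random data, copies them to LBA 256, and reports all 64 LBAs matching. A minimal standalone sketch of that capability check (illustrative only, not the functions.sh source):

oncs=0x15d                      # value echoed by get_nvme_ctrl_feature in the trace above
if (( oncs & (1 << 8) )); then  # ONCS bit 8 advertises the Simple Copy command; 0x15d & 0x100 != 0
    echo "Simple Copy (SCC) supported"
else
    echo "Simple Copy (SCC) not supported"
fi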
00:11:16.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:16.646 23:44:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:16.646 23:44:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:16.646 23:44:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:16.646 23:44:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:16.646 23:44:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:16.646 23:44:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:16.646 23:44:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:16.646 23:44:47 -- scripts/common.sh@335 -- # IFS=.-: 00:11:16.646 23:44:47 -- scripts/common.sh@335 -- # read -ra ver1 00:11:16.646 23:44:47 -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.646 23:44:47 -- scripts/common.sh@336 -- # read -ra ver2 00:11:16.646 23:44:47 -- scripts/common.sh@337 -- # local 'op=<' 00:11:16.646 23:44:47 -- scripts/common.sh@339 -- # ver1_l=2 00:11:16.646 23:44:47 -- scripts/common.sh@340 -- # ver2_l=1 00:11:16.646 23:44:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:16.646 23:44:47 -- scripts/common.sh@343 -- # case "$op" in 00:11:16.646 23:44:47 -- scripts/common.sh@344 -- # : 1 00:11:16.646 23:44:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:16.646 23:44:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:16.646 23:44:47 -- scripts/common.sh@364 -- # decimal 1 00:11:16.646 23:44:47 -- scripts/common.sh@352 -- # local d=1 00:11:16.646 23:44:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.646 23:44:47 -- scripts/common.sh@354 -- # echo 1 00:11:16.646 23:44:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:16.646 23:44:47 -- scripts/common.sh@365 -- # decimal 2 00:11:16.646 23:44:47 -- scripts/common.sh@352 -- # local d=2 00:11:16.646 23:44:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.646 23:44:47 -- scripts/common.sh@354 -- # echo 2 00:11:16.646 23:44:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:16.646 23:44:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:16.646 23:44:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:16.646 23:44:47 -- scripts/common.sh@367 -- # return 0 00:11:16.646 23:44:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.646 23:44:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:16.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.646 --rc genhtml_branch_coverage=1 00:11:16.646 --rc genhtml_function_coverage=1 00:11:16.646 --rc genhtml_legend=1 00:11:16.646 --rc geninfo_all_blocks=1 00:11:16.646 --rc geninfo_unexecuted_blocks=1 00:11:16.646 00:11:16.646 ' 00:11:16.646 23:44:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:16.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.646 --rc genhtml_branch_coverage=1 00:11:16.646 --rc genhtml_function_coverage=1 00:11:16.646 --rc genhtml_legend=1 00:11:16.646 --rc geninfo_all_blocks=1 00:11:16.646 --rc geninfo_unexecuted_blocks=1 00:11:16.646 00:11:16.646 ' 00:11:16.646 23:44:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:16.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.646 --rc genhtml_branch_coverage=1 00:11:16.646 --rc genhtml_function_coverage=1 00:11:16.646 --rc genhtml_legend=1 00:11:16.646 --rc geninfo_all_blocks=1 00:11:16.646 --rc geninfo_unexecuted_blocks=1 00:11:16.646 00:11:16.646 ' 00:11:16.646 23:44:47 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:16.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.646 --rc genhtml_branch_coverage=1 00:11:16.646 --rc genhtml_function_coverage=1 00:11:16.646 --rc genhtml_legend=1 00:11:16.646 --rc geninfo_all_blocks=1 00:11:16.646 --rc geninfo_unexecuted_blocks=1 00:11:16.646 00:11:16.646 ' 00:11:16.646 23:44:47 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:16.646 23:44:47 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:16.646 23:44:47 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:16.646 23:44:47 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:16.646 23:44:47 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:16.646 23:44:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.646 23:44:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.646 23:44:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.646 23:44:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.646 23:44:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.646 23:44:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.646 23:44:47 -- paths/export.sh@5 -- # export PATH 00:11:16.646 23:44:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.646 23:44:47 -- nvme/functions.sh@10 -- # ctrls=() 00:11:16.646 23:44:47 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:16.646 23:44:47 -- nvme/functions.sh@11 -- # nvmes=() 00:11:16.646 23:44:47 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:16.646 23:44:47 -- nvme/functions.sh@12 -- # bdfs=() 00:11:16.646 23:44:47 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:16.646 23:44:47 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:16.646 23:44:47 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:16.646 
23:44:47 -- nvme/functions.sh@14 -- # nvme_name= 00:11:16.646 23:44:47 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:16.646 23:44:47 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:17.219 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:17.219 Waiting for block devices as requested 00:11:17.219 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:17.480 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:17.480 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:17.480 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:22.783 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:22.783 23:44:53 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:22.783 23:44:53 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:22.783 23:44:53 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:22.783 23:44:53 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:22.783 23:44:53 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:22.783 23:44:53 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:22.783 23:44:53 -- scripts/common.sh@15 -- # local i 00:11:22.783 23:44:53 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:22.783 23:44:53 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:22.783 23:44:53 -- scripts/common.sh@24 -- # return 0 00:11:22.783 23:44:53 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:22.783 23:44:53 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:22.783 23:44:53 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:22.783 23:44:53 -- nvme/functions.sh@18 -- # shift 00:11:22.783 23:44:53 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.783 23:44:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:22.783 23:44:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.783 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.783 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.783 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.783 23:44:53 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:22.783 23:44:53 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.783 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.783 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.783 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.783 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.783 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.783 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.783 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.783 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.783 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.783 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.783 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # eval 
'nvme0[ctratt]="0x88010"' 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.783 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.783 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:22.783 23:44:53 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.783 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.783 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 
23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:22.784 
23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:22.784 23:44:53 -- 
nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.784 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:22.784 23:44:53 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:22.784 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.785 
23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 
00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # 
nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:22.785 23:44:53 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.785 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.785 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:22.786 23:44:53 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 
00:11:22.786 23:44:53 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:22.786 23:44:53 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:11:22.786 23:44:53 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:22.786 23:44:53 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:22.786 23:44:53 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:11:22.786 23:44:53 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:11:22.786 23:44:53 -- scripts/common.sh@15 -- # local i 00:11:22.786 23:44:53 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:11:22.786 23:44:53 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:22.786 23:44:53 -- scripts/common.sh@24 -- # return 0 00:11:22.786 23:44:53 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:22.786 23:44:53 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:22.786 23:44:53 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@18 -- # shift 00:11:22.786 23:44:53 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # 
IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.786 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.786 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:22.786 
23:44:53 -- nvme/functions.sh@21-23 -- # per register: IFS=:, read -r reg val, [[ -n $val ]], eval 'nvme1[<reg>]="<val>"' -- this three-step pattern repeats for every assignment below
00:11:22.786 23:44:53 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:11:22.787 23:44:53 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:11:22.787 23:44:53 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 nvme1[crdt2]=0 nvme1[crdt3]=0 nvme1[nvmsr]=0 nvme1[vwci]=0 nvme1[mec]=0
00:11:22.787 23:44:53 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a nvme1[acl]=3 nvme1[aerl]=3 nvme1[frmw]=0x3 nvme1[lpa]=0x7 nvme1[elpe]=0 nvme1[npss]=0 nvme1[avscc]=0 nvme1[apsta]=0
00:11:22.787 23:44:53 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 nvme1[cctemp]=373 nvme1[mtfa]=0 nvme1[hmpre]=0 nvme1[hmmin]=0 nvme1[tnvmcap]=0 nvme1[unvmcap]=0 nvme1[rpmbs]=0
00:11:22.787 23:44:53 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 nvme1[dsto]=0 nvme1[fwug]=0 nvme1[kas]=0 nvme1[hctma]=0 nvme1[mntmt]=0 nvme1[mxtmt]=0 nvme1[sanicap]=0
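The block above is the harness's nvme_get helper walking `nvme id-ctrl` output one line at a time: split each `reg : val` pair on the colon (functions.sh@21), skip empty values (functions.sh@22), and eval the pair into the controller's associative array (functions.sh@23). A minimal standalone sketch of that parsing pattern, assuming stock nvme-cli and a reachable /dev/nvme1; this is an illustrative reduction, not the verbatim helper:

    #!/usr/bin/env bash
    # Parse `nvme id-ctrl` "reg : value" output into an associative array,
    # the same shape as the nvme1=() array built in the trace above.
    declare -A nvme1=()

    while IFS=: read -r reg val; do
        reg=${reg//[[:blank:]]/}              # "oacs      " -> "oacs"
        val=${val#"${val%%[![:blank:]]*}"}    # strip leading blanks from value
        [[ -n $reg && -n $val ]] && nvme1[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme1)

    printf 'oacs=%s acl=%s aerl=%s\n' "${nvme1[oacs]}" "${nvme1[acl]}" "${nvme1[aerl]}"

Keeping the whole identify page in one array is what lets later test stages ask for any register by name instead of re-running nvme-cli.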
00:11:22.788 23:44:53 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 nvme1[hmmaxd]=0 nvme1[nsetidmax]=0 nvme1[endgidmax]=0 nvme1[anatt]=0 nvme1[anacap]=0 nvme1[anagrpmax]=0 nvme1[nanagrpid]=0
00:11:22.788 23:44:53 -- nvme/functions.sh@23 -- # nvme1[pels]=0 nvme1[domainid]=0 nvme1[megcap]=0 nvme1[sqes]=0x66 nvme1[cqes]=0x44 nvme1[maxcmd]=0 nvme1[nn]=256 nvme1[oncs]=0x15d
00:11:22.788 23:44:53 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 nvme1[fna]=0 nvme1[vwc]=0x7 nvme1[awun]=0 nvme1[awupf]=0 nvme1[icsvscc]=0 nvme1[nwpc]=0 nvme1[acwu]=0 nvme1[ocfs]=0x3
00:11:22.789 23:44:53 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 nvme1[mnan]=0 nvme1[maxdna]=0 nvme1[maxcna]=0 nvme1[subnqn]=nqn.2019-08.org.qemu:12342 nvme1[ioccsz]=0 nvme1[iorcsz]=0
00:11:22.789 23:44:53 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 nvme1[fcatt]=0 nvme1[msdbd]=0 nvme1[ofcs]=0
00:11:22.789 23:44:53 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:11:22.789 23:44:53 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:11:22.789 23:44:53 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:11:22.789 23:44:53 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:11:22.789 23:44:53 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:22.789 23:44:53 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:11:22.789 23:44:53 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:11:22.789 23:44:53 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:11:22.789 23:44:53 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:11:22.789 23:44:53 -- nvme/functions.sh@18 -- # shift
00:11:22.789 23:44:53 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:11:22.789 23:44:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
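At functions.sh@53-58 the trace moves from the controller to its namespaces: a nameref (_ctrl_ns) aliases the per-controller array nvme1_ns, a sysfs glob yields nvme1n1..nvme1n3, and each namespace found is identified with `nvme id-ns` and registered under its numeric suffix. A reduced sketch of that loop, assuming the same sysfs layout; variable names mirror the trace but this is an illustrative rewrite:

    #!/usr/bin/env bash
    # Enumerate a controller's namespaces through sysfs, keyed by numeric
    # suffix, mirroring functions.sh@53-58 in the trace above.
    ctrl=/sys/class/nvme/nvme1
    declare -A nvme1_ns=()
    declare -n _ctrl_ns=nvme1_ns          # nameref: writes land in nvme1_ns

    for ns in "$ctrl/${ctrl##*/}n"*; do   # /sys/class/nvme/nvme1/nvme1n1 ...
        [[ -e $ns ]] || continue          # glob may match nothing; skip then
        ns_dev=${ns##*/}                  # nvme1n1, nvme1n2, nvme1n3
        _ctrl_ns[${ns##*n}]=$ns_dev       # index 1 -> nvme1n1, 2 -> nvme1n2 ...
    done

    declare -p nvme1_ns

The nameref is the reason the same loop body works for any controller: only the target array name (nvme1_ns here) changes per device.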
00:11:22.789 23:44:53 -- nvme/functions.sh@22 -- # [[ -n '' ]]   (header line carries no value; per-register IFS/read/eval pattern then repeats as above, now into nvme1n1)
00:11:22.789 23:44:53 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 nvme1n1[ncap]=0x100000 nvme1n1[nuse]=0x100000 nvme1n1[nsfeat]=0x14 nvme1n1[nlbaf]=7 nvme1n1[flbas]=0x4
00:11:22.789 23:44:53 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 nvme1n1[dpc]=0x1f nvme1n1[dps]=0 nvme1n1[nmic]=0 nvme1n1[rescap]=0 nvme1n1[fpi]=0 nvme1n1[dlfeat]=1
00:11:22.790 23:44:53 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 nvme1n1[nawupf]=0 nvme1n1[nacwu]=0 nvme1n1[nabsn]=0 nvme1n1[nabo]=0 nvme1n1[nabspf]=0 nvme1n1[noiob]=0 nvme1n1[nvmcap]=0
00:11:22.790 23:44:53 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 nvme1n1[npwa]=0 nvme1n1[npdg]=0 nvme1n1[npda]=0 nvme1n1[nows]=0 nvme1n1[mssrl]=128 nvme1n1[mcl]=128 nvme1n1[msrc]=127
00:11:22.790 23:44:53 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 nvme1n1[anagrpid]=0 nvme1n1[nsattr]=0 nvme1n1[nvmsetid]=0 nvme1n1[endgid]=0
00:11:22.790 23:44:53 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 nvme1n1[eui64]=0000000000000000
00:11:22.790 23:44:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:22.790 23:44:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:22.790 23:44:53 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:11:22.791 23:44:53 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:22.791 23:44:53 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]]
00:11:22.791 23:44:53 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2
00:11:22.791 23:44:53 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2
00:11:22.791 23:44:53 -- nvme/functions.sh@17-20 -- # local ref=nvme1n2 reg val; shift; local -gA 'nvme1n2=()'
00:11:22.791 23:44:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2
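The lbaf0..lbaf7 strings captured for nvme1n1 above are the namespace's supported LBA formats: lbads is the log2 of the data block size, and the low nibble of flbas selects the active format, which is why flbas=0x4 lines up with lbaf4 being marked "(in use)" (512-byte formats in lbaf0-3, 4096-byte in lbaf4-7). A small sketch that decodes the in-use block size from those fields, with the array pre-seeded to the values the trace just stored:

    #!/usr/bin/env bash
    # Resolve the in-use LBA format and block size from id-ns fields.
    declare -A nvme1n1=(
        [flbas]=0x4
        [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
    )

    fmt=$(( ${nvme1n1[flbas]} & 0xf ))   # low nibble: active format index
    lbaf=${nvme1n1[lbaf$fmt]}
    lbads=${lbaf##*lbads:}               # "12 rp:0 (in use)"
    lbads=${lbads%% *}                   # "12"
    echo "format lbaf$fmt, block size $(( 1 << lbads )) bytes"   # -> 4096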
00:11:22.791 23:44:53 -- nvme/functions.sh@22-23 -- # nvme1n2 parsed field by field with the same pattern; every value matches nvme1n1:
00:11:22.791 23:44:53 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 nvme1n2[ncap]=0x100000 nvme1n2[nuse]=0x100000 nvme1n2[nsfeat]=0x14 nvme1n2[nlbaf]=7 nvme1n2[flbas]=0x4 nvme1n2[mc]=0x3 nvme1n2[dpc]=0x1f
00:11:22.791 23:44:53 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 nvme1n2[nmic]=0 nvme1n2[rescap]=0 nvme1n2[fpi]=0 nvme1n2[dlfeat]=1 nvme1n2[nawun]=0 nvme1n2[nawupf]=0 nvme1n2[nacwu]=0
00:11:22.791 23:44:53 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 nvme1n2[nabo]=0 nvme1n2[nabspf]=0 nvme1n2[noiob]=0 nvme1n2[nvmcap]=0 nvme1n2[npwg]=0 nvme1n2[npwa]=0 nvme1n2[npdg]=0 nvme1n2[npda]=0
00:11:22.791 23:44:53 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 nvme1n2[mssrl]=128 nvme1n2[mcl]=128 nvme1n2[msrc]=127 nvme1n2[nulbaf]=0 nvme1n2[anagrpid]=0 nvme1n2[nsattr]=0 nvme1n2[nvmsetid]=0 nvme1n2[endgid]=0
00:11:22.792 23:44:53 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 nvme1n2[eui64]=0000000000000000
00:11:22.792 23:44:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:22.792 23:44:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:22.792 23:44:53 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2
00:11:22.792 23:44:53 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:22.792 23:44:53 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]]
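The controller fields captured earlier (oacs=0x12a, oncs=0x15d on this QEMU target) are bitmasks of optional command support, and they are the kind of value later test stages key off. A hedged decoding sketch; the bit positions are as I read them from the NVMe base specification, so verify against the revision you target:

    #!/usr/bin/env bash
    # Test individual capability bits in the id-ctrl fields captured above.
    oacs=0x12a   # optional admin command support (nvme1[oacs])
    oncs=0x15d   # optional NVM command support  (nvme1[oncs])

    (( oacs & (1 << 1) )) && echo "OACS: Format NVM supported"
    (( oacs & (1 << 3) )) && echo "OACS: Namespace Management supported"
    (( oacs & (1 << 8) )) && echo "OACS: Doorbell Buffer Config supported"
    (( oncs & (1 << 2) )) && echo "ONCS: Dataset Management supported"
    (( oncs & (1 << 3) )) && echo "ONCS: Write Zeroes supported"

All five checks fire for these values; the Doorbell Buffer Config bit in particular is characteristic of QEMU-emulated controllers like the nqn.2019-08.org.qemu:12342 subsystem seen above.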
00:11:22.792 23:44:53 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3
00:11:22.792 23:44:53 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3
00:11:22.792 23:44:53 -- nvme/functions.sh@17-20 -- # local ref=nvme1n3 reg val; shift; local -gA 'nvme1n3=()'
00:11:22.792 23:44:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3
00:11:22.792 23:44:53 -- nvme/functions.sh@22-23 -- # nvme1n3 parsed with the same per-register pattern; values again match nvme1n1:
00:11:22.792 23:44:53 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 nvme1n3[ncap]=0x100000 nvme1n3[nuse]=0x100000 nvme1n3[nsfeat]=0x14 nvme1n3[nlbaf]=7 nvme1n3[flbas]=0x4 nvme1n3[mc]=0x3 nvme1n3[dpc]=0x1f
00:11:22.793 23:44:53 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 nvme1n3[nmic]=0 nvme1n3[rescap]=0 nvme1n3[fpi]=0 nvme1n3[dlfeat]=1 nvme1n3[nawun]=0 nvme1n3[nawupf]=0 nvme1n3[nacwu]=0
00:11:22.793 23:44:53 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 nvme1n3[nabo]=0 nvme1n3[nabspf]=0 nvme1n3[noiob]=0 nvme1n3[nvmcap]=0 nvme1n3[npwg]=0 nvme1n3[npwa]=0 nvme1n3[npdg]=0 nvme1n3[npda]=0
00:11:22.793 23:44:53 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 nvme1n3[mssrl]=128 nvme1n3[mcl]=128 nvme1n3[msrc]=127 nvme1n3[nulbaf]=0 nvme1n3[anagrpid]=0 nvme1n3[nsattr]=0 nvme1n3[nvmsetid]=0 nvme1n3[endgid]=0
00:11:22.793 23:44:53 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 nvme1n3[eui64]=0000000000000000
00:11:22.793 23:44:53 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]]
nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:22.794 23:44:53 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:22.794 23:44:53 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:22.794 23:44:53 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:22.794 23:44:53 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:22.794 23:44:53 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:22.794 23:44:53 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:22.794 23:44:53 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:22.794 23:44:53 -- scripts/common.sh@15 -- # local i 00:11:22.794 23:44:53 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:22.794 23:44:53 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:22.794 23:44:53 -- scripts/common.sh@24 -- # return 0 00:11:22.794 23:44:53 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:22.794 23:44:53 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:22.794 23:44:53 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@18 -- # shift 00:11:22.794 23:44:53 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 
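The block above is the generic snapshot helper at work: nvme_get runs /usr/local/src/nvme-cli/nvme id-ctrl (or id-ns) against the device node, splits each "reg : value" output line on the colon, skips empty values, and evals the rest into a global associative array named after the device, so later test stages can read fields like ${nvme2[sn]} directly. A minimal sketch of that pattern, simplified relative to the real nvme/functions.sh helper and assuming nvme-cli's plain-text identify output:

  # Snapshot `nvme id-ctrl`/`id-ns` "reg : value" lines into a global
  # associative array named after the device (e.g. nvme2[sn]='12340 ').
  nvme_get() {
      local ref=$1 cmd=$2 dev=$3 reg val
      declare -gA "$ref=()"
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}                # drop padding around the key
          val=${val#"${val%%[![:space:]]*}"}      # left-trim the value, keep trailing pad
          [[ -n $reg && -n $val ]] && eval "$ref[\$reg]=\$val"
      done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
  }
  # nvme_get nvme2 id-ctrl /dev/nvme2 && echo "${nvme2[ver]}"   # -> 0x10400

The empty-value guard is why the trace's first iteration hits [[ -n '' ]] and stores nothing: nvme-cli's header line carries a colon but no value.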
00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.794 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:22.794 23:44:53 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:22.794 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 
23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:22.795 23:44:53 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:22.795 23:44:53 -- 
nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.795 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.795 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:22.795 23:44:53 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.796 23:44:53 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 
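Among the fields just captured, sqes=0x66 and cqes=0x44 are packed nibbles rather than plain counts: per the NVMe base specification, bits 3:0 give the required (minimum) queue entry size and bits 7:4 the maximum, each as log2 of the byte size. A small illustrative decoder (not part of the traced scripts):

  # Decode packed SQES/CQES: low nibble = required entry size,
  # high nibble = maximum, both encoded as log2(bytes).
  decode_qes() {
      local qes=$(( $1 ))
      echo "required=$(( 1 << (qes & 0xf) ))B max=$(( 1 << (qes >> 4) ))B"
  }
  decode_qes 0x66   # SQES -> required=64B max=64B
  decode_qes 0x44   # CQES -> required=16B max=16B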
00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 
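oncs=0x15d at the start of this block is the Optional NVM Command Support bitmask. Taking the bit assignments from the NVMe base spec, 0x15d sets bits 0, 2, 3, 4, 6 and 8, so this QEMU controller advertises Compare, Dataset Management, Write Zeroes, Save/Select in Features, Timestamp and Copy, but not Write Uncorrectable, Reservations or Verify. A quick check of that reading (the names array is shorthand for this note, not from the scripts):

  oncs=0x15d
  names=(compare write_uncorrectable dsm write_zeroes feat_save_select
         reservations timestamp verify copy)
  for i in "${!names[@]}"; do
      (( oncs & (1 << i) )) && echo "supports ${names[i]}"
  done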
00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:22.796 23:44:53 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.796 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.796 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:22.797 23:44:53 
-- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:22.797 23:44:53 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:22.797 23:44:53 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:22.797 23:44:53 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:22.797 23:44:53 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@18 -- # shift 00:11:22.797 23:44:53 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 
23:44:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:22.797 23:44:53 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.797 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:22.797 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.797 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:22.798 
23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:22.798 23:44:53 
-- nvme/functions.sh@21 -- # IFS=: 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:22.798 23:44:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:22.798 23:44:53 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:22.798 23:44:53 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:22.798 23:44:53 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:11:22.798 23:44:53 
-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:22.798 23:44:53 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:22.798 23:44:53 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:22.798 23:44:53 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:22.798 23:44:53 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:22.798 23:44:53 -- scripts/common.sh@15 -- # local i 00:11:22.798 23:44:53 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:22.798 23:44:53 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:22.798 23:44:53 -- scripts/common.sh@24 -- # return 0 00:11:22.798 23:44:53 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:22.798 23:44:53 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:22.798 23:44:53 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@18 -- # shift 00:11:22.798 23:44:53 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.798 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.798 23:44:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:23.062 23:44:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.062 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.062 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.062 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:23.062 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:23.062 23:44:53 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:23.062 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.062 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.062 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:23.062 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:23.062 23:44:53 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:23.062 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.062 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.062 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:23.062 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:23.062 23:44:53 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:23.062 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.062 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.062 23:44:53 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:23.062 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:23.062 23:44:53 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:23.062 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.062 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.062 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:23.062 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:23.062 23:44:53 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:23.062 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.062 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.062 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:23.062 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:23.062 23:44:53 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:23.062 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.062 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.062 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:23.062 23:44:53 -- nvme/functions.sh@23 
-- # eval 'nvme3[ieee]="525400"' 00:11:23.062 23:44:53 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:23.062 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.062 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.062 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.062 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:23.062 23:44:53 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:23.062 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.062 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.062 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.062 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 
23:44:53 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # 
nvme3[frmw]=0x3 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 
'nvme3[tnvmcap]="0"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.063 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.063 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:23.063 23:44:53 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:23.064 23:44:53 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:23.064 
23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.064 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.064 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.065 23:44:53 -- 
nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:23.065 23:44:53 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:23.065 23:44:53 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:11:23.065 23:44:53 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:11:23.065 23:44:53 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@18 -- # shift 00:11:23.065 23:44:53 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ 
-n 0x140000 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.065 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.065 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:23.065 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:23.065 
23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[npwg]="0"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 
23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.066 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.066 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.066 23:44:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.067 23:44:53 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.067 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.067 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.067 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.067 23:44:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.067 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.067 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.067 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.067 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.067 23:44:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.067 23:44:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.067 23:44:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.067 23:44:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.067 23:44:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.067 23:44:53 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:23.067 23:44:53 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:23.067 23:44:53 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:23.067 23:44:53 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:23.067 23:44:53 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:23.067 23:44:53 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:23.067 23:44:53 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:23.067 23:44:53 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:11:23.067 23:44:53 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:23.067 23:44:53 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:11:23.067 23:44:53 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:23.067 23:44:53 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:11:23.067 23:44:53 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:11:23.067 23:44:53 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:23.067 23:44:53 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:23.067 23:44:53 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:11:23.067 23:44:53 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:11:23.067 23:44:53 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:11:23.067 23:44:53 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:11:23.067 23:44:53 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:23.067 23:44:53 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:23.067 23:44:53 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:23.067 23:44:53 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:23.067 23:44:53 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:23.067 23:44:53 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:23.067 23:44:53 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:23.067 23:44:53 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:23.067 23:44:53 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:23.067 23:44:53 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:11:23.067 23:44:53 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:11:23.067 23:44:53 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:11:23.067 23:44:53 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:11:23.067 23:44:53 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:23.067 23:44:53 -- 
nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:23.067 23:44:53 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:23.067 23:44:53 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:23.067 23:44:53 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:23.067 23:44:53 -- nvme/functions.sh@76 -- # echo 0x88010 00:11:23.067 23:44:53 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:11:23.067 23:44:53 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:23.067 23:44:53 -- nvme/functions.sh@197 -- # echo nvme0 00:11:23.067 23:44:53 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:23.067 23:44:53 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:11:23.067 23:44:53 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:11:23.067 23:44:53 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:11:23.067 23:44:53 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:11:23.067 23:44:53 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:23.067 23:44:53 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:23.067 23:44:53 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:23.067 23:44:53 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:23.067 23:44:53 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:23.067 23:44:53 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:23.067 23:44:53 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:23.067 23:44:53 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:23.067 23:44:53 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:23.067 23:44:53 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:11:23.067 23:44:53 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:11:23.067 23:44:53 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:11:23.067 23:44:53 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:11:23.067 23:44:53 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:23.067 23:44:53 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:23.067 23:44:53 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:23.067 23:44:53 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:23.067 23:44:53 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:23.067 23:44:53 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:23.067 23:44:53 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:23.067 23:44:53 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:23.067 23:44:53 -- nvme/functions.sh@204 -- # trap - ERR 00:11:23.067 23:44:53 -- nvme/functions.sh@204 -- # print_backtrace 00:11:23.067 23:44:53 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:23.067 23:44:53 -- common/autotest_common.sh@1142 -- # return 0 00:11:23.067 23:44:53 -- nvme/functions.sh@204 -- # trap - ERR 00:11:23.067 23:44:53 -- nvme/functions.sh@204 -- # print_backtrace 00:11:23.067 23:44:53 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:23.067 23:44:53 -- common/autotest_common.sh@1142 -- # return 0 00:11:23.067 23:44:53 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:11:23.067 23:44:53 -- nvme/functions.sh@206 -- # echo nvme0 00:11:23.067 23:44:53 -- nvme/functions.sh@207 -- # return 0 00:11:23.067 23:44:53 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0 00:11:23.067 23:44:53 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0 00:11:23.067 23:44:53 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:24.011 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:24.011 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:11:24.011 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:24.011 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:24.011 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:24.270 23:44:54 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:24.270 23:44:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:24.270 23:44:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:24.270 23:44:54 -- common/autotest_common.sh@10 -- # set +x 00:11:24.270 ************************************ 00:11:24.270 START TEST nvme_flexible_data_placement 00:11:24.270 ************************************ 00:11:24.270 23:44:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:24.533 Initializing NVMe Controllers 00:11:24.533 Attaching to 0000:00:09.0 00:11:24.533 Controller supports FDP Attached to 0000:00:09.0 00:11:24.533 Namespace ID: 1 Endurance Group ID: 1 00:11:24.533 Initialization complete. 00:11:24.533
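(Aside on the selection step traced earlier: get_ctrls_with_feature keeps a controller only when bit 19 of CTRATT, the Flexible Data Placement capability, is set, which is why ctratt=0x88010 selected nvme0 while the 0x8000 controllers were skipped. A minimal standalone sketch of the same probe, assuming nvme-cli is installed and its id-ctrl output carries a "ctratt" field; the has_fdp helper name is illustrative, not part of nvme/functions.sh:)

has_fdp() {
    local dev=$1 ctratt
    # Pull the CTRATT value from id-ctrl output, e.g. "ctratt : 0x88010" -> 0x88010
    ctratt=$(nvme id-ctrl "$dev" | awk -F: '/^ctratt/ {gsub(/ /, "", $2); print $2}')
    [[ -n $ctratt ]] || return 1
    (( ctratt & 1 << 19 ))  # CTRATT bit 19 advertises FDP support
}

for dev in /dev/nvme[0-9]; do
    has_fdp "$dev" && echo "$dev supports FDP"
done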
00:11:24.533 ================================== 00:11:24.533 == FDP tests for Namespace: #01 == 00:11:24.533 ================================== 00:11:24.533 00:11:24.533 Get Feature: FDP: 00:11:24.533 ================= 00:11:24.533 Enabled: Yes 00:11:24.533 FDP configuration Index: 0 00:11:24.533 00:11:24.533 FDP configurations log page 00:11:24.533 =========================== 00:11:24.533 Number of FDP configurations: 1 00:11:24.533 Version: 0 00:11:24.533 Size: 112 00:11:24.533 FDP Configuration Descriptor: 0 00:11:24.533 Descriptor Size: 96 00:11:24.533 Reclaim Group Identifier format: 2 00:11:24.533 FDP Volatile Write Cache: Not Present 00:11:24.533 FDP Configuration: Valid 00:11:24.533 Vendor Specific Size: 0 00:11:24.533 Number of Reclaim Groups: 2 00:11:24.533 Number of Reclaim Unit Handles: 8 00:11:24.533 Max Placement Identifiers: 128 00:11:24.533 Number of Namespaces Supported: 256 00:11:24.533 Reclaim Unit Nominal Size: 6000000 bytes 00:11:24.533 Estimated Reclaim Unit Time Limit: Not Reported 00:11:24.533 RUH Desc #000: RUH Type: Initially Isolated 00:11:24.533 RUH Desc #001: RUH Type: Initially Isolated 00:11:24.533 RUH Desc #002: RUH Type: Initially Isolated 00:11:24.533 RUH Desc #003: RUH Type: Initially Isolated 00:11:24.533 RUH Desc #004: RUH Type: Initially Isolated 00:11:24.533 RUH Desc #005: RUH Type: Initially Isolated 00:11:24.533 RUH Desc #006: RUH Type: Initially Isolated 00:11:24.533 RUH Desc #007: RUH Type: Initially Isolated 00:11:24.533 00:11:24.533 FDP reclaim unit handle usage log page 00:11:24.533 ====================================== 00:11:24.533 Number of Reclaim Unit Handles: 8 00:11:24.533 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:24.533 RUH Usage Desc #001: RUH Attributes: Unused 00:11:24.533 RUH Usage Desc #002: RUH Attributes: Unused 00:11:24.533 RUH Usage Desc #003: RUH Attributes: Unused 00:11:24.533 RUH Usage Desc #004: RUH Attributes: Unused 00:11:24.533 RUH Usage Desc #005: RUH Attributes: Unused 00:11:24.533 RUH Usage Desc #006: RUH Attributes: Unused 00:11:24.533 RUH Usage Desc #007: RUH Attributes: Unused 00:11:24.533 00:11:24.533 FDP statistics log page 00:11:24.533 ======================= 00:11:24.533 Host bytes with metadata written: 1011650560 00:11:24.533 Media bytes with metadata written: 1011859456 00:11:24.533 Media bytes erased: 0 00:11:24.533 00:11:24.533 FDP Reclaim unit handle status 00:11:24.533 ============================== 00:11:24.533 Number of RUHS descriptors: 2 00:11:24.533 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000005b37 00:11:24.533 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:24.533 00:11:24.533 FDP write on placement id: 0 success 00:11:24.533 00:11:24.533 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:24.533 00:11:24.533 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:24.533 00:11:24.533 Get Feature: FDP Events for Placement handle: #0 00:11:24.533 ======================== 00:11:24.533 Number of FDP Events: 6 00:11:24.533 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:24.533 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:24.533 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:11:24.533 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:24.533 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:24.533 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:24.533 00:11:24.533 FDP events log page 00:11:24.533 =================== 00:11:24.533 Number of FDP events: 1 00:11:24.533 FDP Event #0: 00:11:24.533 Event Type: RU Not Written to Capacity 00:11:24.533 Placement Identifier: Valid 00:11:24.533 NSID: Valid 00:11:24.533 Location: Valid 00:11:24.533 Placement Identifier: 0 00:11:24.533 Event Timestamp: 11 00:11:24.533 Namespace Identifier: 1 00:11:24.533 Reclaim Group Identifier: 0 00:11:24.533 Reclaim Unit Handle Identifier: 0 00:11:24.533 00:11:24.533 FDP test passed ************************************ 00:11:24.533 END TEST nvme_flexible_data_placement 00:11:24.533 ************************************ 00:11:24.533 00:11:24.533 real 0m0.244s 00:11:24.533 user 0m0.080s 00:11:24.533 sys 0m0.061s 00:11:24.533 23:44:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:24.533 23:44:55 -- common/autotest_common.sh@10 -- # set +x 00:11:24.533 ************************************ 00:11:24.533 END TEST nvme_fdp 00:11:24.533 ************************************ 00:11:24.533 00:11:24.533 real 0m7.901s 00:11:24.533 user 0m1.170s 00:11:24.533 sys 0m1.478s 00:11:24.533 23:44:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:24.533 23:44:55 -- common/autotest_common.sh@10 -- # set +x 00:11:24.533 23:44:55 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:11:24.533 23:44:55 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:24.533 23:44:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:24.533 23:44:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:24.533 23:44:55 -- common/autotest_common.sh@10 -- # set +x 00:11:24.533 ************************************ 00:11:24.533 START TEST nvme_rpc 00:11:24.533 ************************************ 00:11:24.533 23:44:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:24.533 * Looking for test storage...
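(The trace that follows steps through scripts/common.sh's cmp_versions helper (lt 1.15 2) to decide whether the installed lcov predates version 2 and therefore needs the old --rc option spelling. A condensed sketch of that dotted-version comparison; unlike the real helper it handles numeric fields only:)

version_lt() {
    local IFS=.- a b i n
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0  # strictly less at this field
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal versions are not less-than
}

version_lt 1.15 2 && echo "lcov is older than 2"  # matches the lt 1.15 2 result traced below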
00:11:24.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:24.533 23:44:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:24.533 23:44:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:24.533 23:44:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:24.533 23:44:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:24.533 23:44:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:24.533 23:44:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:24.533 23:44:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:24.533 23:44:55 -- scripts/common.sh@335 -- # IFS=.-: 00:11:24.533 23:44:55 -- scripts/common.sh@335 -- # read -ra ver1 00:11:24.533 23:44:55 -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.533 23:44:55 -- scripts/common.sh@336 -- # read -ra ver2 00:11:24.533 23:44:55 -- scripts/common.sh@337 -- # local 'op=<' 00:11:24.533 23:44:55 -- scripts/common.sh@339 -- # ver1_l=2 00:11:24.533 23:44:55 -- scripts/common.sh@340 -- # ver2_l=1 00:11:24.533 23:44:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:24.533 23:44:55 -- scripts/common.sh@343 -- # case "$op" in 00:11:24.533 23:44:55 -- scripts/common.sh@344 -- # : 1 00:11:24.533 23:44:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:24.533 23:44:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:24.533 23:44:55 -- scripts/common.sh@364 -- # decimal 1 00:11:24.533 23:44:55 -- scripts/common.sh@352 -- # local d=1 00:11:24.533 23:44:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.533 23:44:55 -- scripts/common.sh@354 -- # echo 1 00:11:24.533 23:44:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:24.533 23:44:55 -- scripts/common.sh@365 -- # decimal 2 00:11:24.533 23:44:55 -- scripts/common.sh@352 -- # local d=2 00:11:24.533 23:44:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.533 23:44:55 -- scripts/common.sh@354 -- # echo 2 00:11:24.533 23:44:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:24.533 23:44:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:24.533 23:44:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:24.533 23:44:55 -- scripts/common.sh@367 -- # return 0 00:11:24.533 23:44:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.533 23:44:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:24.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.533 --rc genhtml_branch_coverage=1 00:11:24.533 --rc genhtml_function_coverage=1 00:11:24.533 --rc genhtml_legend=1 00:11:24.533 --rc geninfo_all_blocks=1 00:11:24.533 --rc geninfo_unexecuted_blocks=1 00:11:24.533 00:11:24.533 ' 00:11:24.533 23:44:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:24.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.533 --rc genhtml_branch_coverage=1 00:11:24.533 --rc genhtml_function_coverage=1 00:11:24.533 --rc genhtml_legend=1 00:11:24.533 --rc geninfo_all_blocks=1 00:11:24.533 --rc geninfo_unexecuted_blocks=1 00:11:24.533 00:11:24.533 ' 00:11:24.533 23:44:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:24.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.533 --rc genhtml_branch_coverage=1 00:11:24.533 --rc genhtml_function_coverage=1 00:11:24.533 --rc genhtml_legend=1 00:11:24.533 --rc geninfo_all_blocks=1 00:11:24.533 --rc geninfo_unexecuted_blocks=1 00:11:24.533 00:11:24.533 ' 00:11:24.533 23:44:55 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:24.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.533 --rc genhtml_branch_coverage=1 00:11:24.533 --rc genhtml_function_coverage=1 00:11:24.533 --rc genhtml_legend=1 00:11:24.533 --rc geninfo_all_blocks=1 00:11:24.533 --rc geninfo_unexecuted_blocks=1 00:11:24.533 00:11:24.533 ' 00:11:24.533 23:44:55 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:24.533 23:44:55 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:24.533 23:44:55 -- common/autotest_common.sh@1519 -- # bdfs=() 00:11:24.533 23:44:55 -- common/autotest_common.sh@1519 -- # local bdfs 00:11:24.533 23:44:55 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:11:24.533 23:44:55 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:11:24.533 23:44:55 -- common/autotest_common.sh@1508 -- # bdfs=() 00:11:24.533 23:44:55 -- common/autotest_common.sh@1508 -- # local bdfs 00:11:24.533 23:44:55 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:24.533 23:44:55 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:24.533 23:44:55 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:11:24.791 23:44:55 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:11:24.791 23:44:55 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:24.791 23:44:55 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:11:24.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.791 23:44:55 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:11:24.791 23:44:55 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66607 00:11:24.791 23:44:55 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:24.791 23:44:55 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66607 00:11:24.791 23:44:55 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:24.791 23:44:55 -- common/autotest_common.sh@829 -- # '[' -z 66607 ']' 00:11:24.791 23:44:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.791 23:44:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:24.791 23:44:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.791 23:44:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:24.791 23:44:55 -- common/autotest_common.sh@10 -- # set +x 00:11:24.791 [2024-12-13 23:44:55.377938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
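The trace above shows how get_first_nvme_bdf discovers controllers: scripts/gen_nvme.sh emits a JSON bdev config and jq pulls each controller's PCI address (traddr). A minimal sketch of the same pattern, assuming the repo layout used in this run:

    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh prints a JSON bdev config; collect every controller's PCI address
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    # bail out early if nothing was found, otherwise hand back the first controller
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    echo "${bdfs[0]}"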
00:11:24.791 [2024-12-13 23:44:55.378044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66607 ] 00:11:25.048 [2024-12-13 23:44:55.524867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:25.048 [2024-12-13 23:44:55.698231] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:25.048 [2024-12-13 23:44:55.698628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.048 [2024-12-13 23:44:55.698797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.420 23:44:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:26.420 23:44:56 -- common/autotest_common.sh@862 -- # return 0 00:11:26.420 23:44:56 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:11:26.420 Nvme0n1 00:11:26.420 23:44:57 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:26.420 23:44:57 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:26.680 request: 00:11:26.680 { 00:11:26.680 "filename": "non_existing_file", 00:11:26.680 "bdev_name": "Nvme0n1", 00:11:26.680 "method": "bdev_nvme_apply_firmware", 00:11:26.680 "req_id": 1 00:11:26.680 } 00:11:26.680 Got JSON-RPC error response 00:11:26.680 response: 00:11:26.680 { 00:11:26.680 "code": -32603, 00:11:26.680 "message": "open file failed." 00:11:26.680 } 00:11:26.680 23:44:57 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:26.680 23:44:57 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:26.680 23:44:57 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:26.941 23:44:57 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:26.941 23:44:57 -- nvme/nvme_rpc.sh@40 -- # killprocess 66607 00:11:26.941 23:44:57 -- common/autotest_common.sh@936 -- # '[' -z 66607 ']' 00:11:26.941 23:44:57 -- common/autotest_common.sh@940 -- # kill -0 66607 00:11:26.941 23:44:57 -- common/autotest_common.sh@941 -- # uname 00:11:26.941 23:44:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:26.941 23:44:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66607 00:11:26.941 23:44:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:26.941 23:44:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:26.941 23:44:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66607' 00:11:26.941 killing process with pid 66607 00:11:26.941 23:44:57 -- common/autotest_common.sh@955 -- # kill 66607 00:11:26.941 23:44:57 -- common/autotest_common.sh@960 -- # wait 66607 00:11:28.319 00:11:28.319 real 0m3.736s 00:11:28.319 user 0m7.171s 00:11:28.319 sys 0m0.497s 00:11:28.319 23:44:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:28.319 ************************************ 00:11:28.319 END TEST nvme_rpc 00:11:28.319 ************************************ 00:11:28.319 23:44:58 -- common/autotest_common.sh@10 -- # set +x 00:11:28.320 23:44:58 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:28.320 23:44:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:28.320 23:44:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 
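The request/response pair above is the negative test at the core of nvme_rpc.sh: applying firmware from a file that does not exist must come back as a clean JSON-RPC error (-32603, "open file failed.") rather than wedging the target. A condensed sketch of that RPC sequence, assuming the rpc.py path from this run:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # attach the first controller as Nvme0, then feed apply_firmware a bogus file
    $rpc_py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
    if ! $rpc_py bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo "apply_firmware failed as expected"
    fi
    $rpc_py bdev_nvme_detach_controller Nvme0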
00:11:28.320 23:44:58 -- common/autotest_common.sh@10 -- # set +x 00:11:28.320 ************************************ 00:11:28.320 START TEST nvme_rpc_timeouts 00:11:28.320 ************************************ 00:11:28.320 23:44:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:28.320 * Looking for test storage... 00:11:28.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:28.320 23:44:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:28.320 23:44:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:28.320 23:44:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:28.320 23:44:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:28.320 23:44:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:28.320 23:44:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:28.320 23:44:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:28.320 23:44:59 -- scripts/common.sh@335 -- # IFS=.-: 00:11:28.320 23:44:59 -- scripts/common.sh@335 -- # read -ra ver1 00:11:28.320 23:44:59 -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.320 23:44:59 -- scripts/common.sh@336 -- # read -ra ver2 00:11:28.320 23:44:59 -- scripts/common.sh@337 -- # local 'op=<' 00:11:28.320 23:44:59 -- scripts/common.sh@339 -- # ver1_l=2 00:11:28.320 23:44:59 -- scripts/common.sh@340 -- # ver2_l=1 00:11:28.320 23:44:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:28.320 23:44:59 -- scripts/common.sh@343 -- # case "$op" in 00:11:28.320 23:44:59 -- scripts/common.sh@344 -- # : 1 00:11:28.320 23:44:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:28.320 23:44:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:28.320 23:44:59 -- scripts/common.sh@364 -- # decimal 1 00:11:28.320 23:44:59 -- scripts/common.sh@352 -- # local d=1 00:11:28.320 23:44:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.320 23:44:59 -- scripts/common.sh@354 -- # echo 1 00:11:28.320 23:44:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:28.320 23:44:59 -- scripts/common.sh@365 -- # decimal 2 00:11:28.320 23:44:59 -- scripts/common.sh@352 -- # local d=2 00:11:28.320 23:44:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.320 23:44:59 -- scripts/common.sh@354 -- # echo 2 00:11:28.320 23:44:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:28.320 23:44:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:28.320 23:44:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:28.320 23:44:59 -- scripts/common.sh@367 -- # return 0 00:11:28.320 23:44:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.320 23:44:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:28.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.320 --rc genhtml_branch_coverage=1 00:11:28.320 --rc genhtml_function_coverage=1 00:11:28.320 --rc genhtml_legend=1 00:11:28.320 --rc geninfo_all_blocks=1 00:11:28.320 --rc geninfo_unexecuted_blocks=1 00:11:28.320 00:11:28.320 ' 00:11:28.320 23:44:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:28.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.320 --rc genhtml_branch_coverage=1 00:11:28.320 --rc genhtml_function_coverage=1 00:11:28.320 --rc genhtml_legend=1 00:11:28.320 --rc geninfo_all_blocks=1 00:11:28.320 --rc geninfo_unexecuted_blocks=1 00:11:28.320 00:11:28.320 ' 00:11:28.320 23:44:59 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:28.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.320 --rc genhtml_branch_coverage=1 00:11:28.320 --rc genhtml_function_coverage=1 00:11:28.320 --rc genhtml_legend=1 00:11:28.320 --rc geninfo_all_blocks=1 00:11:28.320 --rc geninfo_unexecuted_blocks=1 00:11:28.320 00:11:28.320 ' 00:11:28.320 23:44:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:28.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.320 --rc genhtml_branch_coverage=1 00:11:28.320 --rc genhtml_function_coverage=1 00:11:28.320 --rc genhtml_legend=1 00:11:28.320 --rc geninfo_all_blocks=1 00:11:28.320 --rc geninfo_unexecuted_blocks=1 00:11:28.320 00:11:28.320 ' 00:11:28.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.320 23:44:59 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:28.320 23:44:59 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66689 00:11:28.320 23:44:59 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66689 00:11:28.320 23:44:59 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66720 00:11:28.320 23:44:59 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:28.320 23:44:59 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66720 00:11:28.320 23:44:59 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:28.320 23:44:59 -- common/autotest_common.sh@829 -- # '[' -z 66720 ']' 00:11:28.320 23:44:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.320 23:44:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:28.320 23:44:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.320 23:44:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:28.320 23:44:59 -- common/autotest_common.sh@10 -- # set +x 00:11:28.581 [2024-12-13 23:44:59.125437] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
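The cmp_versions trace that opens each test above gates the coverage flags on the installed lcov: both version strings are split on ".-:" into arrays and compared field by field, padding the shorter one with zeros. A minimal sketch of that "lt" check, assuming purely numeric components as in this run (the real scripts/common.sh also normalizes each field through a decimal helper):

    lt() {  # succeeds when version $1 sorts strictly before version $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1  # equal versions are not strictly less
    }
    lt 1.15 2 && echo "lcov older than 2: use the --rc lcov_*_coverage=1 spellings"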
00:11:28.581 [2024-12-13 23:44:59.125862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66720 ] 00:11:28.581 [2024-12-13 23:44:59.277134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:28.841 [2024-12-13 23:44:59.465958] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:28.841 [2024-12-13 23:44:59.466673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.841 [2024-12-13 23:44:59.466839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.220 23:45:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:30.220 23:45:00 -- common/autotest_common.sh@862 -- # return 0 00:11:30.220 23:45:00 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:30.220 Checking default timeout settings: 00:11:30.220 23:45:00 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:30.220 Making settings changes with rpc: 00:11:30.220 23:45:00 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:30.220 23:45:00 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:30.480 Check default vs. modified settings: 00:11:30.480 23:45:01 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:30.480 23:45:01 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66689 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66689 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:30.740 Setting action_on_timeout is changed as expected. 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
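The per-setting loop traced above is how the timeouts test proves bdev_nvme_set_options took effect: grep pulls the matching line out of each saved config, awk keeps the value column, and sed strips punctuation so values like "none" and 12000000 compare cleanly. A minimal sketch of that comparison, assuming the tmp file names from this run:

    default=/tmp/settings_default_66689
    modified=/tmp/settings_modified_66689
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        # value column of the matching config line, punctuation stripped
        before=$(grep "$setting" "$default"  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" "$modified" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$before" != "$after" ]; then
            echo "Setting $setting is changed as expected."
        fi
    done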
00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66689 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66689 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:30.740 Setting timeout_us is changed as expected. 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66689 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66689 00:11:30.740 23:45:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:30.741 Setting timeout_admin_us is changed as expected. 00:11:30.741 23:45:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:30.741 23:45:01 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:30.741 23:45:01 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:30.741 23:45:01 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:30.741 23:45:01 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66689 /tmp/settings_modified_66689 00:11:30.741 23:45:01 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66720 00:11:30.741 23:45:01 -- common/autotest_common.sh@936 -- # '[' -z 66720 ']' 00:11:30.741 23:45:01 -- common/autotest_common.sh@940 -- # kill -0 66720 00:11:30.741 23:45:01 -- common/autotest_common.sh@941 -- # uname 00:11:30.741 23:45:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:30.741 23:45:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66720 00:11:30.741 killing process with pid 66720 00:11:30.741 23:45:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:30.741 23:45:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:30.741 23:45:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66720' 00:11:30.741 23:45:01 -- common/autotest_common.sh@955 -- # kill 66720 00:11:30.741 23:45:01 -- common/autotest_common.sh@960 -- # wait 66720 00:11:32.121 RPC TIMEOUT SETTING TEST PASSED. 00:11:32.121 23:45:02 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
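Every target in this log is bracketed by the same lifecycle visible above: arm a trap that kill -9s spdk_tgt (and removes the tmp settings files) on any exit path, then on success disarm the trap and take the process down deliberately, after killprocess confirms with kill -0 that the pid is still alive. A stripped-down sketch of that pattern, assuming the spdk_tgt path used here:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x3 &
    spdk_tgt_pid=$!
    # nothing may outlive a failed test: hard-kill the target on any exit
    trap 'kill -9 $spdk_tgt_pid; exit 1' SIGINT SIGTERM EXIT
    # ... save configs, change options, compare ...
    trap - SIGINT SIGTERM EXIT
    kill -0 "$spdk_tgt_pid" && kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"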
00:11:32.121 ************************************ 00:11:32.121 END TEST nvme_rpc_timeouts 00:11:32.121 ************************************ 00:11:32.121 00:11:32.121 real 0m3.801s 00:11:32.121 user 0m7.306s 00:11:32.121 sys 0m0.549s 00:11:32.121 23:45:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:32.121 23:45:02 -- common/autotest_common.sh@10 -- # set +x 00:11:32.121 23:45:02 -- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']' 00:11:32.121 23:45:02 -- spdk/autotest.sh@242 -- # [[ 1 -eq 1 ]] 00:11:32.121 23:45:02 -- spdk/autotest.sh@243 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:32.121 23:45:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:32.121 23:45:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:32.121 23:45:02 -- common/autotest_common.sh@10 -- # set +x 00:11:32.121 ************************************ 00:11:32.121 START TEST nvme_xnvme 00:11:32.121 ************************************ 00:11:32.121 23:45:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:32.121 * Looking for test storage... 00:11:32.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:32.121 23:45:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:32.121 23:45:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:32.121 23:45:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:32.383 23:45:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:32.383 23:45:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:32.383 23:45:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:32.383 23:45:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:32.383 23:45:02 -- scripts/common.sh@335 -- # IFS=.-: 00:11:32.383 23:45:02 -- scripts/common.sh@335 -- # read -ra ver1 00:11:32.383 23:45:02 -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.383 23:45:02 -- scripts/common.sh@336 -- # read -ra ver2 00:11:32.383 23:45:02 -- scripts/common.sh@337 -- # local 'op=<' 00:11:32.383 23:45:02 -- scripts/common.sh@339 -- # ver1_l=2 00:11:32.383 23:45:02 -- scripts/common.sh@340 -- # ver2_l=1 00:11:32.383 23:45:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:32.383 23:45:02 -- scripts/common.sh@343 -- # case "$op" in 00:11:32.383 23:45:02 -- scripts/common.sh@344 -- # : 1 00:11:32.383 23:45:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:32.383 23:45:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:32.383 23:45:02 -- scripts/common.sh@364 -- # decimal 1 00:11:32.383 23:45:02 -- scripts/common.sh@352 -- # local d=1 00:11:32.383 23:45:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.383 23:45:02 -- scripts/common.sh@354 -- # echo 1 00:11:32.383 23:45:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:32.383 23:45:02 -- scripts/common.sh@365 -- # decimal 2 00:11:32.383 23:45:02 -- scripts/common.sh@352 -- # local d=2 00:11:32.383 23:45:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.383 23:45:02 -- scripts/common.sh@354 -- # echo 2 00:11:32.383 23:45:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:32.383 23:45:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:32.383 23:45:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:32.383 23:45:02 -- scripts/common.sh@367 -- # return 0 00:11:32.383 23:45:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.383 23:45:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:32.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.383 --rc genhtml_branch_coverage=1 00:11:32.383 --rc genhtml_function_coverage=1 00:11:32.383 --rc genhtml_legend=1 00:11:32.383 --rc geninfo_all_blocks=1 00:11:32.383 --rc geninfo_unexecuted_blocks=1 00:11:32.383 00:11:32.383 ' 00:11:32.383 23:45:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:32.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.383 --rc genhtml_branch_coverage=1 00:11:32.383 --rc genhtml_function_coverage=1 00:11:32.383 --rc genhtml_legend=1 00:11:32.383 --rc geninfo_all_blocks=1 00:11:32.383 --rc geninfo_unexecuted_blocks=1 00:11:32.383 00:11:32.383 ' 00:11:32.384 23:45:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:32.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.384 --rc genhtml_branch_coverage=1 00:11:32.384 --rc genhtml_function_coverage=1 00:11:32.384 --rc genhtml_legend=1 00:11:32.384 --rc geninfo_all_blocks=1 00:11:32.384 --rc geninfo_unexecuted_blocks=1 00:11:32.384 00:11:32.384 ' 00:11:32.384 23:45:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:32.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.384 --rc genhtml_branch_coverage=1 00:11:32.384 --rc genhtml_function_coverage=1 00:11:32.384 --rc genhtml_legend=1 00:11:32.384 --rc geninfo_all_blocks=1 00:11:32.384 --rc geninfo_unexecuted_blocks=1 00:11:32.384 00:11:32.384 ' 00:11:32.384 23:45:02 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:32.384 23:45:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.384 23:45:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.384 23:45:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.384 23:45:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.384 23:45:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.384 23:45:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.384 23:45:02 -- paths/export.sh@5 -- # export PATH 00:11:32.384 23:45:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.384 23:45:02 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:32.384 23:45:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:32.384 23:45:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:32.384 23:45:02 -- common/autotest_common.sh@10 -- # set +x 00:11:32.384 ************************************ 00:11:32.384 START TEST xnvme_to_malloc_dd_copy 00:11:32.384 ************************************ 00:11:32.384 23:45:02 -- common/autotest_common.sh@1114 -- # malloc_to_xnvme_copy 00:11:32.384 23:45:02 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:32.384 23:45:02 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:32.384 23:45:02 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:32.384 23:45:02 -- dd/common.sh@191 -- # return 00:11:32.384 23:45:02 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:32.384 23:45:02 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:32.384 23:45:02 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:32.384 23:45:02 -- xnvme/xnvme.sh@18 -- # local io 00:11:32.384 23:45:02 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:32.384 23:45:02 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:32.384 23:45:02 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:32.384 23:45:02 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:32.384 23:45:02 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:32.384 23:45:02 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:32.384 23:45:02 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:32.384 23:45:02 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:32.384 23:45:02 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:32.384 23:45:02 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:32.384 23:45:02 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:32.384 23:45:02 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:32.384 23:45:02 -- xnvme/xnvme.sh@42 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:32.384 23:45:02 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:32.384 23:45:02 -- dd/common.sh@31 -- # xtrace_disable 00:11:32.384 23:45:02 -- common/autotest_common.sh@10 -- # set +x 00:11:32.384 { 00:11:32.384 "subsystems": [ 00:11:32.384 { 00:11:32.384 "subsystem": "bdev", 00:11:32.384 "config": [ 00:11:32.384 { 00:11:32.384 "params": { 00:11:32.384 "block_size": 512, 00:11:32.384 "num_blocks": 2097152, 00:11:32.384 "name": "malloc0" 00:11:32.384 }, 00:11:32.384 "method": "bdev_malloc_create" 00:11:32.384 }, 00:11:32.384 { 00:11:32.384 "params": { 00:11:32.384 "io_mechanism": "libaio", 00:11:32.384 "filename": "/dev/nullb0", 00:11:32.384 "name": "null0" 00:11:32.384 }, 00:11:32.384 "method": "bdev_xnvme_create" 00:11:32.384 }, 00:11:32.384 { 00:11:32.384 "method": "bdev_wait_for_examine" 00:11:32.384 } 00:11:32.384 ] 00:11:32.384 } 00:11:32.384 ] 00:11:32.384 } 00:11:32.384 [2024-12-13 23:45:03.012082] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:32.384 [2024-12-13 23:45:03.012359] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66858 ] 00:11:32.645 [2024-12-13 23:45:03.167126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.906 [2024-12-13 23:45:03.384957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.823  [2024-12-13T23:45:06.505Z] Copying: 232/1024 [MB] (232 MBps) [2024-12-13T23:45:07.884Z] Copying: 464/1024 [MB] (232 MBps) [2024-12-13T23:45:08.451Z] Copying: 759/1024 [MB] (295 MBps) [2024-12-13T23:45:10.354Z] Copying: 1024/1024 [MB] (average 266 MBps) 00:11:39.622 00:11:39.622 23:45:10 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:39.622 23:45:10 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:39.622 23:45:10 -- dd/common.sh@31 -- # xtrace_disable 00:11:39.622 23:45:10 -- common/autotest_common.sh@10 -- # set +x 00:11:39.881 { 00:11:39.881 "subsystems": [ 00:11:39.881 { 00:11:39.881 "subsystem": "bdev", 00:11:39.881 "config": [ 00:11:39.881 { 00:11:39.881 "params": { 00:11:39.881 "block_size": 512, 00:11:39.881 "num_blocks": 2097152, 00:11:39.881 "name": "malloc0" 00:11:39.881 }, 00:11:39.881 "method": "bdev_malloc_create" 00:11:39.881 }, 00:11:39.881 { 00:11:39.881 "params": { 00:11:39.881 "io_mechanism": "libaio", 00:11:39.881 "filename": "/dev/nullb0", 00:11:39.881 "name": "null0" 00:11:39.881 }, 00:11:39.881 "method": "bdev_xnvme_create" 00:11:39.881 }, 00:11:39.881 { 00:11:39.881 "method": "bdev_wait_for_examine" 00:11:39.881 } 00:11:39.881 ] 00:11:39.881 } 00:11:39.881 ] 00:11:39.881 } 00:11:39.881 [2024-12-13 23:45:10.406980] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
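Each copy pass above drives spdk_dd with an inline JSON config handed over on /dev/fd/62: a 1 GiB malloc bdev (2097152 blocks of 512 bytes) as one end and an xnvme bdev over /dev/nullb0 as the other. A minimal reconstruction of the malloc-to-null direction, assuming the same null_blk setup (modprobe null_blk gb=1):

    spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    # process substitution yields a /dev/fd/NN path, matching the --json usage above
    "$spdk_dd" --ib=malloc0 --ob=null0 --json <(cat <<'EOF'
    {"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"block_size":512,"num_blocks":2097152,"name":"malloc0"},
       "method":"bdev_malloc_create"},
      {"params":{"io_mechanism":"libaio","filename":"/dev/nullb0","name":"null0"},
       "method":"bdev_xnvme_create"},
      {"method":"bdev_wait_for_examine"}]}]}
    EOF
    )

Swapping --ib and --ob, as the trace does next, runs the same copy back from null0 into malloc0.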
00:11:39.882 [2024-12-13 23:45:10.407113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66946 ] 00:11:39.882 [2024-12-13 23:45:10.554564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.139 [2024-12-13 23:45:10.693853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.042  [2024-12-13T23:45:13.710Z] Copying: 312/1024 [MB] (312 MBps) [2024-12-13T23:45:14.644Z] Copying: 625/1024 [MB] (313 MBps) [2024-12-13T23:45:14.902Z] Copying: 937/1024 [MB] (312 MBps) [2024-12-13T23:45:16.804Z] Copying: 1024/1024 [MB] (average 312 MBps) 00:11:46.072 00:11:46.072 23:45:16 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:46.072 23:45:16 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:46.072 23:45:16 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:46.072 23:45:16 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:46.072 23:45:16 -- dd/common.sh@31 -- # xtrace_disable 00:11:46.072 23:45:16 -- common/autotest_common.sh@10 -- # set +x 00:11:46.072 { 00:11:46.072 "subsystems": [ 00:11:46.072 { 00:11:46.072 "subsystem": "bdev", 00:11:46.072 "config": [ 00:11:46.072 { 00:11:46.072 "params": { 00:11:46.072 "block_size": 512, 00:11:46.072 "num_blocks": 2097152, 00:11:46.072 "name": "malloc0" 00:11:46.072 }, 00:11:46.072 "method": "bdev_malloc_create" 00:11:46.072 }, 00:11:46.072 { 00:11:46.072 "params": { 00:11:46.072 "io_mechanism": "io_uring", 00:11:46.072 "filename": "/dev/nullb0", 00:11:46.072 "name": "null0" 00:11:46.072 }, 00:11:46.072 "method": "bdev_xnvme_create" 00:11:46.072 }, 00:11:46.072 { 00:11:46.072 "method": "bdev_wait_for_examine" 00:11:46.072 } 00:11:46.072 ] 00:11:46.072 } 00:11:46.072 ] 00:11:46.072 } 00:11:46.072 [2024-12-13 23:45:16.785963] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
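Between the passes above only one field changes: the harness loops over its xnvme_io list and rewrites io_mechanism in the bdev_xnvme_create params, so the identical copy job is measured once over libaio and once over io_uring. A sketch of that outer loop, limited to the two backends exercised in this run:

    declare -A method_bdev_xnvme_create_0=([name]=null0 [filename]=/dev/nullb0)
    xnvme_io=(libaio io_uring)
    for io in "${xnvme_io[@]}"; do
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        # regenerate the JSON config and rerun both copy directions here
    done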
00:11:46.072 [2024-12-13 23:45:16.786074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67022 ] 00:11:46.332 [2024-12-13 23:45:16.928196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.589 [2024-12-13 23:45:17.076691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.490  [2024-12-13T23:45:20.156Z] Copying: 322/1024 [MB] (322 MBps) [2024-12-13T23:45:21.092Z] Copying: 645/1024 [MB] (322 MBps) [2024-12-13T23:45:21.092Z] Copying: 966/1024 [MB] (321 MBps) [2024-12-13T23:45:22.995Z] Copying: 1024/1024 [MB] (average 322 MBps) 00:11:52.263 00:11:52.263 23:45:22 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:52.263 23:45:22 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:52.263 23:45:22 -- dd/common.sh@31 -- # xtrace_disable 00:11:52.263 23:45:22 -- common/autotest_common.sh@10 -- # set +x 00:11:52.525 { 00:11:52.525 "subsystems": [ 00:11:52.525 { 00:11:52.525 "subsystem": "bdev", 00:11:52.525 "config": [ 00:11:52.525 { 00:11:52.525 "params": { 00:11:52.525 "block_size": 512, 00:11:52.525 "num_blocks": 2097152, 00:11:52.525 "name": "malloc0" 00:11:52.525 }, 00:11:52.525 "method": "bdev_malloc_create" 00:11:52.525 }, 00:11:52.525 { 00:11:52.525 "params": { 00:11:52.525 "io_mechanism": "io_uring", 00:11:52.525 "filename": "/dev/nullb0", 00:11:52.525 "name": "null0" 00:11:52.525 }, 00:11:52.525 "method": "bdev_xnvme_create" 00:11:52.525 }, 00:11:52.525 { 00:11:52.525 "method": "bdev_wait_for_examine" 00:11:52.525 } 00:11:52.525 ] 00:11:52.525 } 00:11:52.525 ] 00:11:52.525 } 00:11:52.525 [2024-12-13 23:45:23.034619] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:52.525 [2024-12-13 23:45:23.034919] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67098 ] 00:11:52.525 [2024-12-13 23:45:23.185596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.783 [2024-12-13 23:45:23.320663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.681  [2024-12-13T23:45:26.346Z] Copying: 326/1024 [MB] (326 MBps) [2024-12-13T23:45:27.281Z] Copying: 652/1024 [MB] (326 MBps) [2024-12-13T23:45:27.281Z] Copying: 978/1024 [MB] (326 MBps) [2024-12-13T23:45:29.184Z] Copying: 1024/1024 [MB] (average 326 MBps) 00:11:58.452 00:11:58.452 23:45:29 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:11:58.713 23:45:29 -- dd/common.sh@195 -- # modprobe -r null_blk 00:11:58.713 00:11:58.713 real 0m26.305s 00:11:58.713 user 0m23.212s 00:11:58.713 sys 0m2.562s 00:11:58.713 23:45:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:58.713 ************************************ 00:11:58.713 END TEST xnvme_to_malloc_dd_copy 00:11:58.713 ************************************ 00:11:58.713 23:45:29 -- common/autotest_common.sh@10 -- # set +x 00:11:58.713 23:45:29 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:11:58.713 23:45:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:58.713 23:45:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:58.713 23:45:29 -- common/autotest_common.sh@10 -- # set +x 00:11:58.713 ************************************ 00:11:58.713 START TEST xnvme_bdevperf 00:11:58.713 ************************************ 00:11:58.713 23:45:29 -- common/autotest_common.sh@1114 -- # xnvme_bdevperf 00:11:58.713 23:45:29 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:11:58.713 23:45:29 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:58.713 23:45:29 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:58.713 23:45:29 -- dd/common.sh@191 -- # return 00:11:58.713 23:45:29 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:11:58.713 23:45:29 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:58.713 23:45:29 -- xnvme/xnvme.sh@60 -- # local io 00:11:58.713 23:45:29 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:11:58.713 23:45:29 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:11:58.713 23:45:29 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:11:58.713 23:45:29 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:11:58.713 23:45:29 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:11:58.713 23:45:29 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:58.713 23:45:29 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:58.713 23:45:29 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:11:58.714 23:45:29 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:58.714 23:45:29 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:11:58.714 23:45:29 -- xnvme/xnvme.sh@74 -- # gen_conf 00:11:58.714 23:45:29 -- dd/common.sh@31 -- # xtrace_disable 00:11:58.714 23:45:29 -- common/autotest_common.sh@10 -- # set +x 00:11:58.714 { 00:11:58.714 "subsystems": [ 00:11:58.714 { 00:11:58.714 "subsystem": "bdev", 00:11:58.714 "config": [ 00:11:58.714 { 00:11:58.714 "params": { 00:11:58.714 "io_mechanism": "libaio", 
00:11:58.714 "filename": "/dev/nullb0", 00:11:58.714 "name": "null0" 00:11:58.714 }, 00:11:58.714 "method": "bdev_xnvme_create" 00:11:58.714 }, 00:11:58.714 { 00:11:58.714 "method": "bdev_wait_for_examine" 00:11:58.714 } 00:11:58.714 ] 00:11:58.714 } 00:11:58.714 ] 00:11:58.714 } 00:11:58.714 [2024-12-13 23:45:29.373309] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:58.714 [2024-12-13 23:45:29.373427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67203 ] 00:11:58.973 [2024-12-13 23:45:29.524612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.973 [2024-12-13 23:45:29.673505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.231 Running I/O for 5 seconds... 00:12:04.496 00:12:04.496 Latency(us) 00:12:04.496 [2024-12-13T23:45:35.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:04.496 [2024-12-13T23:45:35.228Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:04.496 null0 : 5.00 209510.86 818.40 0.00 0.00 303.45 111.85 472.62 00:12:04.496 [2024-12-13T23:45:35.228Z] =================================================================================================================== 00:12:04.496 [2024-12-13T23:45:35.228Z] Total : 209510.86 818.40 0.00 0.00 303.45 111.85 472.62 00:12:05.062 23:45:35 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:05.062 23:45:35 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:05.062 23:45:35 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:05.062 23:45:35 -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:05.062 23:45:35 -- dd/common.sh@31 -- # xtrace_disable 00:12:05.062 23:45:35 -- common/autotest_common.sh@10 -- # set +x 00:12:05.062 { 00:12:05.062 "subsystems": [ 00:12:05.062 { 00:12:05.062 "subsystem": "bdev", 00:12:05.062 "config": [ 00:12:05.062 { 00:12:05.062 "params": { 00:12:05.062 "io_mechanism": "io_uring", 00:12:05.062 "filename": "/dev/nullb0", 00:12:05.062 "name": "null0" 00:12:05.062 }, 00:12:05.062 "method": "bdev_xnvme_create" 00:12:05.062 }, 00:12:05.062 { 00:12:05.062 "method": "bdev_wait_for_examine" 00:12:05.062 } 00:12:05.062 ] 00:12:05.062 } 00:12:05.062 ] 00:12:05.062 } 00:12:05.062 [2024-12-13 23:45:35.569721] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:05.062 [2024-12-13 23:45:35.569829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67277 ] 00:12:05.062 [2024-12-13 23:45:35.717233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.321 [2024-12-13 23:45:35.859174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.321 Running I/O for 5 seconds... 
00:12:10.586 00:12:10.586 Latency(us) 00:12:10.586 [2024-12-13T23:45:41.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:10.586 [2024-12-13T23:45:41.318Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:10.586 null0 : 5.00 238019.46 929.76 0.00 0.00 266.82 155.96 337.13 00:12:10.586 [2024-12-13T23:45:41.319Z] =================================================================================================================== 00:12:10.587 [2024-12-13T23:45:41.319Z] Total : 238019.46 929.76 0.00 0.00 266.82 155.96 337.13 00:12:11.153 23:45:41 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:11.153 23:45:41 -- dd/common.sh@195 -- # modprobe -r null_blk 00:12:11.153 00:12:11.153 real 0m12.399s 00:12:11.153 user 0m9.985s 00:12:11.153 sys 0m2.182s 00:12:11.153 ************************************ 00:12:11.153 END TEST xnvme_bdevperf 00:12:11.153 ************************************ 00:12:11.153 23:45:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:11.153 23:45:41 -- common/autotest_common.sh@10 -- # set +x 00:12:11.153 ************************************ 00:12:11.153 END TEST nvme_xnvme 00:12:11.153 ************************************ 00:12:11.153 00:12:11.153 real 0m38.979s 00:12:11.153 user 0m33.315s 00:12:11.153 sys 0m4.863s 00:12:11.153 23:45:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:11.153 23:45:41 -- common/autotest_common.sh@10 -- # set +x 00:12:11.153 23:45:41 -- spdk/autotest.sh@244 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:11.153 23:45:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:11.153 23:45:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:11.153 23:45:41 -- common/autotest_common.sh@10 -- # set +x 00:12:11.153 ************************************ 00:12:11.153 START TEST blockdev_xnvme 00:12:11.153 ************************************ 00:12:11.153 23:45:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:11.153 * Looking for test storage... 00:12:11.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:11.153 23:45:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:11.153 23:45:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:11.153 23:45:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:11.443 23:45:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:11.443 23:45:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:11.443 23:45:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:11.443 23:45:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:11.443 23:45:41 -- scripts/common.sh@335 -- # IFS=.-: 00:12:11.443 23:45:41 -- scripts/common.sh@335 -- # read -ra ver1 00:12:11.443 23:45:41 -- scripts/common.sh@336 -- # IFS=.-: 00:12:11.443 23:45:41 -- scripts/common.sh@336 -- # read -ra ver2 00:12:11.443 23:45:41 -- scripts/common.sh@337 -- # local 'op=<' 00:12:11.443 23:45:41 -- scripts/common.sh@339 -- # ver1_l=2 00:12:11.443 23:45:41 -- scripts/common.sh@340 -- # ver2_l=1 00:12:11.443 23:45:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:11.443 23:45:41 -- scripts/common.sh@343 -- # case "$op" in 00:12:11.443 23:45:41 -- scripts/common.sh@344 -- # : 1 00:12:11.443 23:45:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:11.443 23:45:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:11.443 23:45:41 -- scripts/common.sh@364 -- # decimal 1 00:12:11.443 23:45:41 -- scripts/common.sh@352 -- # local d=1 00:12:11.443 23:45:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:11.443 23:45:41 -- scripts/common.sh@354 -- # echo 1 00:12:11.443 23:45:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:11.444 23:45:41 -- scripts/common.sh@365 -- # decimal 2 00:12:11.444 23:45:41 -- scripts/common.sh@352 -- # local d=2 00:12:11.444 23:45:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:11.444 23:45:41 -- scripts/common.sh@354 -- # echo 2 00:12:11.444 23:45:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:11.444 23:45:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:11.444 23:45:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:11.444 23:45:41 -- scripts/common.sh@367 -- # return 0 00:12:11.444 23:45:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:11.444 23:45:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:11.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.444 --rc genhtml_branch_coverage=1 00:12:11.444 --rc genhtml_function_coverage=1 00:12:11.444 --rc genhtml_legend=1 00:12:11.444 --rc geninfo_all_blocks=1 00:12:11.444 --rc geninfo_unexecuted_blocks=1 00:12:11.444 00:12:11.444 ' 00:12:11.444 23:45:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:11.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.444 --rc genhtml_branch_coverage=1 00:12:11.444 --rc genhtml_function_coverage=1 00:12:11.444 --rc genhtml_legend=1 00:12:11.444 --rc geninfo_all_blocks=1 00:12:11.444 --rc geninfo_unexecuted_blocks=1 00:12:11.444 00:12:11.444 ' 00:12:11.444 23:45:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:11.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.444 --rc genhtml_branch_coverage=1 00:12:11.444 --rc genhtml_function_coverage=1 00:12:11.444 --rc genhtml_legend=1 00:12:11.444 --rc geninfo_all_blocks=1 00:12:11.444 --rc geninfo_unexecuted_blocks=1 00:12:11.444 00:12:11.444 ' 00:12:11.444 23:45:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:11.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.444 --rc genhtml_branch_coverage=1 00:12:11.444 --rc genhtml_function_coverage=1 00:12:11.444 --rc genhtml_legend=1 00:12:11.444 --rc geninfo_all_blocks=1 00:12:11.444 --rc geninfo_unexecuted_blocks=1 00:12:11.444 00:12:11.444 ' 00:12:11.444 23:45:41 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:11.444 23:45:41 -- bdev/nbd_common.sh@6 -- # set -e 00:12:11.444 23:45:41 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:11.444 23:45:41 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:11.444 23:45:41 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:11.444 23:45:41 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:11.444 23:45:41 -- bdev/blockdev.sh@18 -- # : 00:12:11.444 23:45:41 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:12:11.444 23:45:41 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:12:11.444 23:45:41 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:12:11.444 23:45:41 -- bdev/blockdev.sh@672 -- # uname -s 00:12:11.444 23:45:41 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:12:11.444 23:45:41 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:12:11.444 23:45:41 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:12:11.444 23:45:41 -- bdev/blockdev.sh@681 -- # crypto_device= 00:12:11.444 23:45:41 -- bdev/blockdev.sh@682 -- # dek= 00:12:11.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.444 23:45:41 -- bdev/blockdev.sh@683 -- # env_ctx= 00:12:11.444 23:45:41 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:12:11.444 23:45:41 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:12:11.444 23:45:41 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:12:11.444 23:45:41 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:12:11.444 23:45:41 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:12:11.444 23:45:41 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=67418 00:12:11.444 23:45:41 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:11.444 23:45:41 -- bdev/blockdev.sh@47 -- # waitforlisten 67418 00:12:11.444 23:45:41 -- common/autotest_common.sh@829 -- # '[' -z 67418 ']' 00:12:11.444 23:45:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.444 23:45:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:11.444 23:45:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.444 23:45:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:11.444 23:45:41 -- common/autotest_common.sh@10 -- # set +x 00:12:11.444 23:45:41 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:11.444 [2024-12-13 23:45:41.996522] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:11.444 [2024-12-13 23:45:41.996610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67418 ] 00:12:11.444 [2024-12-13 23:45:42.139947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.739 [2024-12-13 23:45:42.342329] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:11.739 [2024-12-13 23:45:42.342583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.122 23:45:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:13.122 23:45:43 -- common/autotest_common.sh@862 -- # return 0 00:12:13.122 23:45:43 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:12:13.122 23:45:43 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:12:13.122 23:45:43 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:12:13.122 23:45:43 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:12:13.122 23:45:43 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:13.382 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:13.382 Waiting for block devices as requested 00:12:13.382 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:12:13.382 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:12:13.642 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:12:13.642 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:12:18.905 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:12:18.905 23:45:49 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:12:18.905 23:45:49 -- 
common/autotest_common.sh@1664 -- # zoned_devs=() 00:12:18.905 23:45:49 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:12:18.905 23:45:49 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:12:18.905 23:45:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:18.905 23:45:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:12:18.905 23:45:49 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:12:18.905 23:45:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:12:18.905 23:45:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:18.905 23:45:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:18.905 23:45:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:12:18.905 23:45:49 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:12:18.905 23:45:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:18.905 23:45:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:18.905 23:45:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:18.905 23:45:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:12:18.905 23:45:49 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:12:18.905 23:45:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:18.905 23:45:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:18.905 23:45:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:18.905 23:45:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:12:18.905 23:45:49 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:12:18.905 23:45:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:12:18.905 23:45:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:18.905 23:45:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:18.905 23:45:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:12:18.905 23:45:49 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:12:18.905 23:45:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:12:18.905 23:45:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:18.905 23:45:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:18.905 23:45:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:12:18.905 23:45:49 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:12:18.905 23:45:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:18.905 23:45:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:18.905 23:45:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:18.905 23:45:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:12:18.905 23:45:49 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:12:18.905 23:45:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:18.905 23:45:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:18.905 23:45:49 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:18.905 23:45:49 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:12:18.905 23:45:49 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:18.905 23:45:49 -- bdev/blockdev.sh@94 -- # 
nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:18.905 23:45:49 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:18.905 23:45:49 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:12:18.905 23:45:49 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:18.905 23:45:49 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:18.905 23:45:49 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:18.905 23:45:49 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:12:18.905 23:45:49 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:18.905 23:45:49 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:18.905 23:45:49 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:18.905 23:45:49 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:12:18.905 23:45:49 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:18.905 23:45:49 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:18.905 23:45:49 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:18.905 23:45:49 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:12:18.905 23:45:49 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:18.905 23:45:49 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:18.905 23:45:49 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:18.905 23:45:49 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:12:18.905 23:45:49 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:18.905 23:45:49 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:18.905 23:45:49 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:12:18.905 23:45:49 -- bdev/blockdev.sh@98 -- # rpc_cmd 00:12:18.905 23:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.905 23:45:49 -- common/autotest_common.sh@10 -- # set +x 00:12:18.905 23:45:49 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:18.905 nvme0n1 00:12:18.905 nvme1n1 00:12:18.905 nvme1n2 00:12:18.905 nvme1n3 00:12:18.905 nvme2n1 00:12:18.905 nvme3n1 00:12:18.905 23:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.905 23:45:49 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:12:18.905 23:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.905 23:45:49 -- common/autotest_common.sh@10 -- # set +x 00:12:18.905 23:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.905 23:45:49 -- bdev/blockdev.sh@738 -- # cat 00:12:18.905 23:45:49 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:12:18.905 23:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.905 23:45:49 -- common/autotest_common.sh@10 -- # set +x 00:12:18.905 23:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.905 23:45:49 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:12:18.905 23:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.905 23:45:49 -- common/autotest_common.sh@10 -- # set +x 00:12:18.905 23:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.905 23:45:49 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:18.905 23:45:49 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.905 23:45:49 -- common/autotest_common.sh@10 -- # set +x 00:12:18.905 23:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.905 23:45:49 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:12:18.905 23:45:49 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:12:18.905 23:45:49 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:12:18.905 23:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.905 23:45:49 -- common/autotest_common.sh@10 -- # set +x 00:12:18.905 23:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.905 23:45:49 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:12:18.906 23:45:49 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "32bcb412-bfbe-4bd4-8be0-d502d684af85"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "32bcb412-bfbe-4bd4-8be0-d502d684af85",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "dc6bf602-ec31-4c30-a312-840cea28a7b0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dc6bf602-ec31-4c30-a312-840cea28a7b0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "5b8c72c4-875a-4950-9cc7-80da6e7a8c27"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5b8c72c4-875a-4950-9cc7-80da6e7a8c27",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "cee50af2-f98b-443f-a0e8-615cf83269d2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cee50af2-f98b-443f-a0e8-615cf83269d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 
'{' ' "name": "nvme2n1",' ' "aliases": [' ' "4c52e80d-25c3-49a7-8c80-bf679d96930a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4c52e80d-25c3-49a7-8c80-bf679d96930a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "610979cd-b6a7-430a-8d6c-39f4d12e0ad8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "610979cd-b6a7-430a-8d6c-39f4d12e0ad8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:12:18.906 23:45:49 -- bdev/blockdev.sh@747 -- # jq -r .name 00:12:18.906 23:45:49 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:12:18.906 23:45:49 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:12:18.906 23:45:49 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:12:18.906 23:45:49 -- bdev/blockdev.sh@752 -- # killprocess 67418 00:12:18.906 23:45:49 -- common/autotest_common.sh@936 -- # '[' -z 67418 ']' 00:12:18.906 23:45:49 -- common/autotest_common.sh@940 -- # kill -0 67418 00:12:18.906 23:45:49 -- common/autotest_common.sh@941 -- # uname 00:12:18.906 23:45:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:18.906 23:45:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67418 00:12:18.906 killing process with pid 67418 00:12:18.906 23:45:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:18.906 23:45:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:18.906 23:45:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67418' 00:12:18.906 23:45:49 -- common/autotest_common.sh@955 -- # kill 67418 00:12:18.906 23:45:49 -- common/autotest_common.sh@960 -- # wait 67418 00:12:20.279 23:45:50 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:20.279 23:45:50 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:20.279 23:45:50 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:20.279 23:45:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:20.279 23:45:50 -- common/autotest_common.sh@10 -- # set +x 00:12:20.279 ************************************ 00:12:20.279 START TEST bdev_hello_world 00:12:20.279 ************************************ 00:12:20.279 23:45:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:20.279 [2024-12-13 23:45:50.815907] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:20.279 [2024-12-13 23:45:50.816019] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67806 ] 00:12:20.279 [2024-12-13 23:45:50.965062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.537 [2024-12-13 23:45:51.114827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.796 [2024-12-13 23:45:51.395164] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:20.796 [2024-12-13 23:45:51.395202] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:20.796 [2024-12-13 23:45:51.395214] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:20.796 [2024-12-13 23:45:51.396610] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:20.796 [2024-12-13 23:45:51.397027] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:20.796 [2024-12-13 23:45:51.397048] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:20.796 [2024-12-13 23:45:51.397260] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:12:20.796 00:12:20.796 [2024-12-13 23:45:51.397275] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:21.364 00:12:21.364 ************************************ 00:12:21.364 END TEST bdev_hello_world 00:12:21.364 ************************************ 00:12:21.364 real 0m1.248s 00:12:21.364 user 0m0.976s 00:12:21.364 sys 0m0.161s 00:12:21.364 23:45:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:21.364 23:45:52 -- common/autotest_common.sh@10 -- # set +x 00:12:21.364 23:45:52 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:12:21.364 23:45:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:21.364 23:45:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:21.364 23:45:52 -- common/autotest_common.sh@10 -- # set +x 00:12:21.364 ************************************ 00:12:21.364 START TEST bdev_bounds 00:12:21.364 ************************************ 00:12:21.364 23:45:52 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:12:21.364 Process bdevio pid: 67837 00:12:21.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.364 23:45:52 -- bdev/blockdev.sh@288 -- # bdevio_pid=67837 00:12:21.364 23:45:52 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:21.364 23:45:52 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 67837' 00:12:21.364 23:45:52 -- bdev/blockdev.sh@291 -- # waitforlisten 67837 00:12:21.364 23:45:52 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:21.364 23:45:52 -- common/autotest_common.sh@829 -- # '[' -z 67837 ']' 00:12:21.364 23:45:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.364 23:45:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:21.364 23:45:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:21.364 23:45:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:21.364 23:45:52 -- common/autotest_common.sh@10 -- # set +x 00:12:21.623 [2024-12-13 23:45:52.130850] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:21.623 [2024-12-13 23:45:52.130978] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67837 ] 00:12:21.623 [2024-12-13 23:45:52.281924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:21.881 [2024-12-13 23:45:52.422121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.881 [2024-12-13 23:45:52.422401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.881 [2024-12-13 23:45:52.422428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.447 23:45:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:22.447 23:45:52 -- common/autotest_common.sh@862 -- # return 0 00:12:22.447 23:45:52 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:22.447 I/O targets: 00:12:22.447 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:22.447 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:22.447 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:22.447 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:22.447 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:22.447 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:22.447 00:12:22.447 00:12:22.447 CUnit - A unit testing framework for C - Version 2.1-3 00:12:22.447 http://cunit.sourceforge.net/ 00:12:22.447 00:12:22.447 00:12:22.447 Suite: bdevio tests on: nvme3n1 00:12:22.447 Test: blockdev write read block ...passed 00:12:22.447 Test: blockdev write zeroes read block ...passed 00:12:22.447 Test: blockdev write zeroes read no split ...passed 00:12:22.447 Test: blockdev write zeroes read split ...passed 00:12:22.447 Test: blockdev write zeroes read split partial ...passed 00:12:22.447 Test: blockdev reset ...passed 00:12:22.447 Test: blockdev write read 8 blocks ...passed 00:12:22.447 Test: blockdev write read size > 128k ...passed 00:12:22.447 Test: blockdev write read invalid size ...passed 00:12:22.447 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:22.447 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:22.447 Test: blockdev write read max offset ...passed 00:12:22.447 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:22.447 Test: blockdev writev readv 8 blocks ...passed 00:12:22.447 Test: blockdev writev readv 30 x 1block ...passed 00:12:22.447 Test: blockdev writev readv block ...passed 00:12:22.447 Test: blockdev writev readv size > 128k ...passed 00:12:22.447 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:22.447 Test: blockdev comparev and writev ...passed 00:12:22.447 Test: blockdev nvme passthru rw ...passed 00:12:22.447 Test: blockdev nvme passthru vendor specific ...passed 00:12:22.447 Test: blockdev nvme admin passthru ...passed 00:12:22.447 Test: blockdev copy ...passed 00:12:22.447 Suite: bdevio tests on: nvme2n1 00:12:22.447 Test: blockdev write read block ...passed 00:12:22.447 Test: blockdev write zeroes read block ...passed 00:12:22.447 Test: blockdev write zeroes read no split ...passed 00:12:22.447 Test: blockdev 
write zeroes read split ...passed 00:12:22.447 Test: blockdev write zeroes read split partial ...passed 00:12:22.447 Test: blockdev reset ...passed 00:12:22.447 Test: blockdev write read 8 blocks ...passed 00:12:22.447 Test: blockdev write read size > 128k ...passed 00:12:22.447 Test: blockdev write read invalid size ...passed 00:12:22.447 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:22.447 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:22.447 Test: blockdev write read max offset ...passed 00:12:22.447 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:22.447 Test: blockdev writev readv 8 blocks ...passed 00:12:22.447 Test: blockdev writev readv 30 x 1block ...passed 00:12:22.447 Test: blockdev writev readv block ...passed 00:12:22.447 Test: blockdev writev readv size > 128k ...passed 00:12:22.447 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:22.447 Test: blockdev comparev and writev ...passed 00:12:22.447 Test: blockdev nvme passthru rw ...passed 00:12:22.447 Test: blockdev nvme passthru vendor specific ...passed 00:12:22.447 Test: blockdev nvme admin passthru ...passed 00:12:22.447 Test: blockdev copy ...passed 00:12:22.447 Suite: bdevio tests on: nvme1n3 00:12:22.447 Test: blockdev write read block ...passed 00:12:22.447 Test: blockdev write zeroes read block ...passed 00:12:22.447 Test: blockdev write zeroes read no split ...passed 00:12:22.706 Test: blockdev write zeroes read split ...passed 00:12:22.706 Test: blockdev write zeroes read split partial ...passed 00:12:22.706 Test: blockdev reset ...passed 00:12:22.706 Test: blockdev write read 8 blocks ...passed 00:12:22.706 Test: blockdev write read size > 128k ...passed 00:12:22.706 Test: blockdev write read invalid size ...passed 00:12:22.706 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:22.706 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:22.706 Test: blockdev write read max offset ...passed 00:12:22.706 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:22.706 Test: blockdev writev readv 8 blocks ...passed 00:12:22.706 Test: blockdev writev readv 30 x 1block ...passed 00:12:22.706 Test: blockdev writev readv block ...passed 00:12:22.706 Test: blockdev writev readv size > 128k ...passed 00:12:22.706 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:22.706 Test: blockdev comparev and writev ...passed 00:12:22.706 Test: blockdev nvme passthru rw ...passed 00:12:22.706 Test: blockdev nvme passthru vendor specific ...passed 00:12:22.706 Test: blockdev nvme admin passthru ...passed 00:12:22.706 Test: blockdev copy ...passed 00:12:22.706 Suite: bdevio tests on: nvme1n2 00:12:22.706 Test: blockdev write read block ...passed 00:12:22.706 Test: blockdev write zeroes read block ...passed 00:12:22.706 Test: blockdev write zeroes read no split ...passed 00:12:22.706 Test: blockdev write zeroes read split ...passed 00:12:22.706 Test: blockdev write zeroes read split partial ...passed 00:12:22.706 Test: blockdev reset ...passed 00:12:22.706 Test: blockdev write read 8 blocks ...passed 00:12:22.706 Test: blockdev write read size > 128k ...passed 00:12:22.706 Test: blockdev write read invalid size ...passed 00:12:22.706 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:22.706 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:22.706 Test: blockdev write read max offset 
...passed 00:12:22.706 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:22.706 Test: blockdev writev readv 8 blocks ...passed 00:12:22.706 Test: blockdev writev readv 30 x 1block ...passed 00:12:22.706 Test: blockdev writev readv block ...passed 00:12:22.706 Test: blockdev writev readv size > 128k ...passed 00:12:22.706 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:22.706 Test: blockdev comparev and writev ...passed 00:12:22.706 Test: blockdev nvme passthru rw ...passed 00:12:22.706 Test: blockdev nvme passthru vendor specific ...passed 00:12:22.706 Test: blockdev nvme admin passthru ...passed 00:12:22.706 Test: blockdev copy ...passed 00:12:22.706 Suite: bdevio tests on: nvme1n1 00:12:22.706 Test: blockdev write read block ...passed 00:12:22.706 Test: blockdev write zeroes read block ...passed 00:12:22.706 Test: blockdev write zeroes read no split ...passed 00:12:22.706 Test: blockdev write zeroes read split ...passed 00:12:22.706 Test: blockdev write zeroes read split partial ...passed 00:12:22.706 Test: blockdev reset ...passed 00:12:22.706 Test: blockdev write read 8 blocks ...passed 00:12:22.706 Test: blockdev write read size > 128k ...passed 00:12:22.706 Test: blockdev write read invalid size ...passed 00:12:22.706 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:22.706 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:22.706 Test: blockdev write read max offset ...passed 00:12:22.706 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:22.706 Test: blockdev writev readv 8 blocks ...passed 00:12:22.706 Test: blockdev writev readv 30 x 1block ...passed 00:12:22.706 Test: blockdev writev readv block ...passed 00:12:22.706 Test: blockdev writev readv size > 128k ...passed 00:12:22.706 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:22.706 Test: blockdev comparev and writev ...passed 00:12:22.706 Test: blockdev nvme passthru rw ...passed 00:12:22.706 Test: blockdev nvme passthru vendor specific ...passed 00:12:22.706 Test: blockdev nvme admin passthru ...passed 00:12:22.706 Test: blockdev copy ...passed 00:12:22.706 Suite: bdevio tests on: nvme0n1 00:12:22.706 Test: blockdev write read block ...passed 00:12:22.706 Test: blockdev write zeroes read block ...passed 00:12:22.706 Test: blockdev write zeroes read no split ...passed 00:12:22.706 Test: blockdev write zeroes read split ...passed 00:12:22.706 Test: blockdev write zeroes read split partial ...passed 00:12:22.706 Test: blockdev reset ...passed 00:12:22.706 Test: blockdev write read 8 blocks ...passed 00:12:22.706 Test: blockdev write read size > 128k ...passed 00:12:22.706 Test: blockdev write read invalid size ...passed 00:12:22.706 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:22.706 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:22.706 Test: blockdev write read max offset ...passed 00:12:22.706 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:22.706 Test: blockdev writev readv 8 blocks ...passed 00:12:22.706 Test: blockdev writev readv 30 x 1block ...passed 00:12:22.706 Test: blockdev writev readv block ...passed 00:12:22.706 Test: blockdev writev readv size > 128k ...passed 00:12:22.706 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:22.706 Test: blockdev comparev and writev ...passed 00:12:22.706 Test: blockdev nvme passthru rw ...passed 00:12:22.706 Test: 
blockdev nvme passthru vendor specific ...passed 00:12:22.706 Test: blockdev nvme admin passthru ...passed 00:12:22.706 Test: blockdev copy ...passed 00:12:22.706 00:12:22.706 Run Summary: Type Total Ran Passed Failed Inactive 00:12:22.706 suites 6 6 n/a 0 0 00:12:22.706 tests 138 138 138 0 0 00:12:22.706 asserts 780 780 780 0 n/a 00:12:22.706 00:12:22.706 Elapsed time = 0.882 seconds 00:12:22.706 0 00:12:22.706 23:45:53 -- bdev/blockdev.sh@293 -- # killprocess 67837 00:12:22.706 23:45:53 -- common/autotest_common.sh@936 -- # '[' -z 67837 ']' 00:12:22.706 23:45:53 -- common/autotest_common.sh@940 -- # kill -0 67837 00:12:22.706 23:45:53 -- common/autotest_common.sh@941 -- # uname 00:12:22.706 23:45:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:22.706 23:45:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67837 00:12:22.706 killing process with pid 67837 00:12:22.706 23:45:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:22.706 23:45:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:22.706 23:45:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67837' 00:12:22.706 23:45:53 -- common/autotest_common.sh@955 -- # kill 67837 00:12:22.706 23:45:53 -- common/autotest_common.sh@960 -- # wait 67837 00:12:23.640 23:45:54 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:12:23.640 00:12:23.640 real 0m1.972s 00:12:23.640 user 0m4.764s 00:12:23.640 sys 0m0.256s 00:12:23.640 23:45:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:23.640 ************************************ 00:12:23.640 END TEST bdev_bounds 00:12:23.640 ************************************ 00:12:23.641 23:45:54 -- common/autotest_common.sh@10 -- # set +x 00:12:23.641 23:45:54 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:12:23.641 23:45:54 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:12:23.641 23:45:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:23.641 23:45:54 -- common/autotest_common.sh@10 -- # set +x 00:12:23.641 ************************************ 00:12:23.641 START TEST bdev_nbd 00:12:23.641 ************************************ 00:12:23.641 23:45:54 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:12:23.641 23:45:54 -- bdev/blockdev.sh@298 -- # uname -s 00:12:23.641 23:45:54 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:12:23.641 23:45:54 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:23.641 23:45:54 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:23.641 23:45:54 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:23.641 23:45:54 -- bdev/blockdev.sh@302 -- # local bdev_all 00:12:23.641 23:45:54 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:12:23.641 23:45:54 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:12:23.641 23:45:54 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:23.641 23:45:54 -- bdev/blockdev.sh@309 -- # local nbd_all 00:12:23.641 23:45:54 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:12:23.641 
23:45:54 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:23.641 23:45:54 -- bdev/blockdev.sh@312 -- # local nbd_list 00:12:23.641 23:45:54 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:23.641 23:45:54 -- bdev/blockdev.sh@313 -- # local bdev_list 00:12:23.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:23.641 23:45:54 -- bdev/blockdev.sh@316 -- # nbd_pid=67893 00:12:23.641 23:45:54 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:23.641 23:45:54 -- bdev/blockdev.sh@318 -- # waitforlisten 67893 /var/tmp/spdk-nbd.sock 00:12:23.641 23:45:54 -- common/autotest_common.sh@829 -- # '[' -z 67893 ']' 00:12:23.641 23:45:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:23.641 23:45:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:23.641 23:45:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:23.641 23:45:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:23.641 23:45:54 -- common/autotest_common.sh@10 -- # set +x 00:12:23.641 23:45:54 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:23.641 [2024-12-13 23:45:54.176378] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:23.641 [2024-12-13 23:45:54.176694] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.641 [2024-12-13 23:45:54.326946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.899 [2024-12-13 23:45:54.476927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.466 23:45:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:24.466 23:45:54 -- common/autotest_common.sh@862 -- # return 0 00:12:24.466 23:45:54 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:12:24.466 23:45:54 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:24.466 23:45:54 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:24.466 23:45:54 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:24.466 23:45:54 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:12:24.466 23:45:54 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:24.466 23:45:54 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:24.466 23:45:54 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:24.466 23:45:54 -- bdev/nbd_common.sh@24 -- # local i 00:12:24.466 23:45:54 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:24.466 23:45:54 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:24.466 23:45:54 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:24.466 23:45:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:24.723 23:45:55 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:24.723 23:45:55 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:24.723 23:45:55 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:24.723 23:45:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:24.723 23:45:55 -- common/autotest_common.sh@867 -- # local i 00:12:24.724 23:45:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:24.724 23:45:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:24.724 23:45:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:24.724 23:45:55 -- common/autotest_common.sh@871 -- # break 00:12:24.724 23:45:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:24.724 23:45:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:24.724 23:45:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:24.724 1+0 records in 00:12:24.724 1+0 records out 00:12:24.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000741534 s, 5.5 MB/s 00:12:24.724 23:45:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.724 23:45:55 -- common/autotest_common.sh@884 -- # size=4096 00:12:24.724 23:45:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.724 23:45:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:24.724 23:45:55 -- common/autotest_common.sh@887 -- # return 0 00:12:24.724 23:45:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:24.724 23:45:55 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:24.724 23:45:55 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:24.724 23:45:55 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:24.724 23:45:55 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:24.724 23:45:55 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:24.724 23:45:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:24.724 23:45:55 -- common/autotest_common.sh@867 -- # local i 00:12:24.724 23:45:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:24.724 23:45:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:24.724 23:45:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:24.724 23:45:55 -- common/autotest_common.sh@871 -- # break 00:12:24.724 23:45:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:24.724 23:45:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:24.724 23:45:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:24.724 1+0 records in 00:12:24.724 1+0 records out 00:12:24.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488199 s, 8.4 MB/s 00:12:24.724 23:45:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.724 23:45:55 -- common/autotest_common.sh@884 -- # size=4096 00:12:24.724 23:45:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.724 23:45:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:24.724 23:45:55 -- common/autotest_common.sh@887 -- # return 0 00:12:24.724 23:45:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:24.724 23:45:55 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:24.724 23:45:55 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:12:24.984 23:45:55 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:24.984 23:45:55 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:24.984 23:45:55 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:24.984 23:45:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:24.984 23:45:55 -- common/autotest_common.sh@867 -- # local i 00:12:24.984 23:45:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:24.984 23:45:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:24.984 23:45:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:24.984 23:45:55 -- common/autotest_common.sh@871 -- # break 00:12:24.984 23:45:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:24.984 23:45:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:24.984 23:45:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:24.984 1+0 records in 00:12:24.984 1+0 records out 00:12:24.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000626804 s, 6.5 MB/s 00:12:24.984 23:45:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.984 23:45:55 -- common/autotest_common.sh@884 -- # size=4096 00:12:24.984 23:45:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.984 23:45:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:24.984 23:45:55 -- common/autotest_common.sh@887 -- # return 0 00:12:24.984 23:45:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:24.984 23:45:55 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:24.984 23:45:55 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:12:25.246 23:45:55 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:25.246 23:45:55 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:25.246 23:45:55 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:25.246 23:45:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:25.246 23:45:55 -- common/autotest_common.sh@867 -- # local i 00:12:25.246 23:45:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:25.246 23:45:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:25.246 23:45:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:12:25.246 23:45:55 -- common/autotest_common.sh@871 -- # break 00:12:25.246 23:45:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:25.246 23:45:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:25.246 23:45:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.246 1+0 records in 00:12:25.246 1+0 records out 00:12:25.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115058 s, 3.6 MB/s 00:12:25.246 23:45:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.246 23:45:55 -- common/autotest_common.sh@884 -- # size=4096 00:12:25.246 23:45:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.246 23:45:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:25.246 23:45:55 -- common/autotest_common.sh@887 -- # return 0 00:12:25.246 23:45:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:25.246 23:45:55 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:25.246 23:45:55 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:25.508 23:45:56 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:25.508 23:45:56 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:25.508 23:45:56 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:25.508 23:45:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:25.508 23:45:56 -- common/autotest_common.sh@867 -- # local i 00:12:25.508 23:45:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:25.508 23:45:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:25.508 23:45:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:12:25.508 23:45:56 -- common/autotest_common.sh@871 -- # break 00:12:25.508 23:45:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:25.508 23:45:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:25.508 23:45:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.508 1+0 records in 00:12:25.508 1+0 records out 00:12:25.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109666 s, 3.7 MB/s 00:12:25.508 23:45:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.508 23:45:56 -- common/autotest_common.sh@884 -- # size=4096 00:12:25.508 23:45:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.508 23:45:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:25.508 23:45:56 -- common/autotest_common.sh@887 -- # return 0 00:12:25.508 23:45:56 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:25.508 23:45:56 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:25.508 23:45:56 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:25.769 23:45:56 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:25.769 23:45:56 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:25.769 23:45:56 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:25.769 23:45:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:25.769 23:45:56 -- common/autotest_common.sh@867 -- # local i 00:12:25.769 23:45:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:25.769 23:45:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:25.769 23:45:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:25.769 23:45:56 -- common/autotest_common.sh@871 -- # break 00:12:25.769 23:45:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:25.769 23:45:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:25.769 23:45:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.769 1+0 records in 00:12:25.769 1+0 records out 00:12:25.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012048 s, 3.4 MB/s 00:12:25.769 23:45:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.769 23:45:56 -- common/autotest_common.sh@884 -- # size=4096 00:12:25.769 23:45:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.769 23:45:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:25.769 23:45:56 -- common/autotest_common.sh@887 -- # return 0 
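Each nbd_start_disk above is followed by the same readiness probe (nbd_common.sh@30 plus autotest_common.sh@866-887): poll /proc/partitions until the device shows up, then force one O_DIRECT read through it. A sketch of that probe, with an illustrative /tmp path standing in for the repo's test/bdev/nbdtest file:

waitfornbd() {
    local nbd_name=$1 i
    for (( i = 1; i <= 20; i++ )); do
        # the kernel lists the device in /proc/partitions once the NBD
        # handshake with the spdk-nbd server has completed
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                        # retry interval is an assumption
    done
    # prove the device actually serves I/O: read one 4096-byte block, O_DIRECT
    dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    local size
    size=$(stat -c %s /tmp/nbdtest)      # the stat -c %s ... step in the trace
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]                     # the '[' 4096 '!=' 0 ']' check above
}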
00:12:25.769 23:45:56 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:25.769 23:45:56 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:25.769 23:45:56 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:26.030 23:45:56 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:26.030 { 00:12:26.030 "nbd_device": "/dev/nbd0", 00:12:26.030 "bdev_name": "nvme0n1" 00:12:26.030 }, 00:12:26.030 { 00:12:26.030 "nbd_device": "/dev/nbd1", 00:12:26.030 "bdev_name": "nvme1n1" 00:12:26.030 }, 00:12:26.030 { 00:12:26.030 "nbd_device": "/dev/nbd2", 00:12:26.030 "bdev_name": "nvme1n2" 00:12:26.030 }, 00:12:26.030 { 00:12:26.030 "nbd_device": "/dev/nbd3", 00:12:26.030 "bdev_name": "nvme1n3" 00:12:26.030 }, 00:12:26.030 { 00:12:26.030 "nbd_device": "/dev/nbd4", 00:12:26.030 "bdev_name": "nvme2n1" 00:12:26.030 }, 00:12:26.030 { 00:12:26.030 "nbd_device": "/dev/nbd5", 00:12:26.030 "bdev_name": "nvme3n1" 00:12:26.030 } 00:12:26.030 ]' 00:12:26.030 23:45:56 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:26.030 23:45:56 -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:26.030 { 00:12:26.030 "nbd_device": "/dev/nbd0", 00:12:26.030 "bdev_name": "nvme0n1" 00:12:26.030 }, 00:12:26.030 { 00:12:26.030 "nbd_device": "/dev/nbd1", 00:12:26.030 "bdev_name": "nvme1n1" 00:12:26.030 }, 00:12:26.030 { 00:12:26.030 "nbd_device": "/dev/nbd2", 00:12:26.030 "bdev_name": "nvme1n2" 00:12:26.030 }, 00:12:26.030 { 00:12:26.030 "nbd_device": "/dev/nbd3", 00:12:26.030 "bdev_name": "nvme1n3" 00:12:26.030 }, 00:12:26.030 { 00:12:26.030 "nbd_device": "/dev/nbd4", 00:12:26.030 "bdev_name": "nvme2n1" 00:12:26.030 }, 00:12:26.030 { 00:12:26.030 "nbd_device": "/dev/nbd5", 00:12:26.030 "bdev_name": "nvme3n1" 00:12:26.030 } 00:12:26.030 ]' 00:12:26.030 23:45:56 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:26.030 23:45:56 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:26.030 23:45:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:26.030 23:45:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:26.030 23:45:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:26.030 23:45:56 -- bdev/nbd_common.sh@51 -- # local i 00:12:26.030 23:45:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:26.030 23:45:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:26.292 23:45:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:26.292 23:45:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:26.292 23:45:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:26.292 23:45:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:26.292 23:45:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:26.292 23:45:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:26.292 23:45:56 -- bdev/nbd_common.sh@41 -- # break 00:12:26.292 23:45:56 -- bdev/nbd_common.sh@45 -- # return 0 00:12:26.292 23:45:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:26.292 23:45:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:26.292 23:45:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:26.292 23:45:57 -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:12:26.292 23:45:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:26.292 23:45:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:26.292 23:45:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:26.292 23:45:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:26.292 23:45:57 -- bdev/nbd_common.sh@41 -- # break 00:12:26.292 23:45:57 -- bdev/nbd_common.sh@45 -- # return 0 00:12:26.292 23:45:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:26.292 23:45:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:26.553 23:45:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:26.553 23:45:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:26.553 23:45:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:26.553 23:45:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:26.553 23:45:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:26.553 23:45:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:26.553 23:45:57 -- bdev/nbd_common.sh@41 -- # break 00:12:26.553 23:45:57 -- bdev/nbd_common.sh@45 -- # return 0 00:12:26.553 23:45:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:26.553 23:45:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:26.814 23:45:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:26.814 23:45:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:26.814 23:45:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:26.814 23:45:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:26.814 23:45:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:26.814 23:45:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:26.814 23:45:57 -- bdev/nbd_common.sh@41 -- # break 00:12:26.814 23:45:57 -- bdev/nbd_common.sh@45 -- # return 0 00:12:26.814 23:45:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:26.814 23:45:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:27.075 23:45:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:27.075 23:45:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:27.075 23:45:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:27.075 23:45:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:27.075 23:45:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:27.075 23:45:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:27.075 23:45:57 -- bdev/nbd_common.sh@41 -- # break 00:12:27.075 23:45:57 -- bdev/nbd_common.sh@45 -- # return 0 00:12:27.075 23:45:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.075 23:45:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:27.335 23:45:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:27.335 23:45:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:27.335 23:45:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:27.335 23:45:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:27.335 23:45:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:27.335 23:45:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:27.335 23:45:57 -- bdev/nbd_common.sh@41 -- # break 00:12:27.335 23:45:57 -- bdev/nbd_common.sh@45 -- # return 0 00:12:27.335 23:45:57 -- 
bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:27.335 23:45:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:27.335 23:45:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:27.594 23:45:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:27.594 23:45:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@65 -- # true 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@65 -- # count=0 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@122 -- # count=0 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@127 -- # return 0 00:12:27.595 23:45:58 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@12 -- # local i 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:27.595 /dev/nbd0 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:27.595 23:45:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:27.595 23:45:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:27.595 23:45:58 -- common/autotest_common.sh@867 -- # local i 00:12:27.595 23:45:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:27.595 23:45:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:27.595 23:45:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:27.853 23:45:58 -- common/autotest_common.sh@871 -- # break 00:12:27.853 23:45:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:27.853 23:45:58 -- common/autotest_common.sh@882 -- 
# (( i <= 20 )) 00:12:27.853 23:45:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:27.853 1+0 records in 00:12:27.853 1+0 records out 00:12:27.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612237 s, 6.7 MB/s 00:12:27.853 23:45:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.853 23:45:58 -- common/autotest_common.sh@884 -- # size=4096 00:12:27.853 23:45:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.853 23:45:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:27.853 23:45:58 -- common/autotest_common.sh@887 -- # return 0 00:12:27.853 23:45:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:27.853 23:45:58 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:27.853 23:45:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:27.853 /dev/nbd1 00:12:27.853 23:45:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:27.853 23:45:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:27.853 23:45:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:27.853 23:45:58 -- common/autotest_common.sh@867 -- # local i 00:12:27.853 23:45:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:27.853 23:45:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:27.853 23:45:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:27.853 23:45:58 -- common/autotest_common.sh@871 -- # break 00:12:27.853 23:45:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:27.853 23:45:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:27.853 23:45:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:27.853 1+0 records in 00:12:27.853 1+0 records out 00:12:27.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331739 s, 12.3 MB/s 00:12:27.853 23:45:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.853 23:45:58 -- common/autotest_common.sh@884 -- # size=4096 00:12:27.853 23:45:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.853 23:45:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:27.853 23:45:58 -- common/autotest_common.sh@887 -- # return 0 00:12:27.853 23:45:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:27.853 23:45:58 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:27.853 23:45:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:12:28.111 /dev/nbd10 00:12:28.111 23:45:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:28.111 23:45:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:28.111 23:45:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:28.111 23:45:58 -- common/autotest_common.sh@867 -- # local i 00:12:28.111 23:45:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:28.111 23:45:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:28.111 23:45:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:28.111 23:45:58 -- common/autotest_common.sh@871 -- # break 00:12:28.112 23:45:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:28.112 23:45:58 -- 
common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:28.112 23:45:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.112 1+0 records in 00:12:28.112 1+0 records out 00:12:28.112 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000624672 s, 6.6 MB/s 00:12:28.112 23:45:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.112 23:45:58 -- common/autotest_common.sh@884 -- # size=4096 00:12:28.112 23:45:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.112 23:45:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:28.112 23:45:58 -- common/autotest_common.sh@887 -- # return 0 00:12:28.112 23:45:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:28.112 23:45:58 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:28.112 23:45:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:12:28.370 /dev/nbd11 00:12:28.370 23:45:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:28.370 23:45:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:28.370 23:45:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:12:28.370 23:45:58 -- common/autotest_common.sh@867 -- # local i 00:12:28.370 23:45:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:28.370 23:45:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:28.370 23:45:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:28.370 23:45:58 -- common/autotest_common.sh@871 -- # break 00:12:28.370 23:45:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:28.370 23:45:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:28.370 23:45:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.370 1+0 records in 00:12:28.370 1+0 records out 00:12:28.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038757 s, 10.6 MB/s 00:12:28.370 23:45:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.370 23:45:58 -- common/autotest_common.sh@884 -- # size=4096 00:12:28.370 23:45:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.370 23:45:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:28.370 23:45:58 -- common/autotest_common.sh@887 -- # return 0 00:12:28.370 23:45:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:28.370 23:45:58 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:28.370 23:45:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:12:28.629 /dev/nbd12 00:12:28.629 23:45:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:28.629 23:45:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:28.629 23:45:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:28.629 23:45:59 -- common/autotest_common.sh@867 -- # local i 00:12:28.629 23:45:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:28.629 23:45:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:28.629 23:45:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:12:28.629 23:45:59 -- common/autotest_common.sh@871 -- # break 00:12:28.629 23:45:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 
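This second start-up pass differs from the first only in that each bdev is pinned to an explicit /dev/nbdX node instead of letting the server pick one (the nbd_start_disk nvme0n1 /dev/nbd0 ... calls via nbd_common.sh@9-15). A sketch of that pairing loop, assuming rpc.py is on PATH where the run invokes it by full repo path:

nbd_start_disks() {
    local rpc_server=$1
    local bdev_list=($2) nbd_list=($3) i     # space-separated strings, as the harness passes them
    for (( i = 0; i < ${#nbd_list[@]}; i++ )); do
        # export bdev_list[i] on its preassigned NBD node rather than
        # letting spdk-nbd choose the next free /dev/nbd*
        rpc.py -s "$rpc_server" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        waitfornbd "$(basename "${nbd_list[i]}")"   # readiness probe, as sketched earlier
    done
}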
00:12:28.629 23:45:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:28.629 23:45:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.629 1+0 records in 00:12:28.629 1+0 records out 00:12:28.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558771 s, 7.3 MB/s 00:12:28.629 23:45:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.629 23:45:59 -- common/autotest_common.sh@884 -- # size=4096 00:12:28.629 23:45:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.629 23:45:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:28.629 23:45:59 -- common/autotest_common.sh@887 -- # return 0 00:12:28.629 23:45:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:28.629 23:45:59 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:28.629 23:45:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:28.887 /dev/nbd13 00:12:28.887 23:45:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:28.887 23:45:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:28.887 23:45:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:28.887 23:45:59 -- common/autotest_common.sh@867 -- # local i 00:12:28.887 23:45:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:28.887 23:45:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:28.887 23:45:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:12:28.887 23:45:59 -- common/autotest_common.sh@871 -- # break 00:12:28.887 23:45:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:28.887 23:45:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:28.887 23:45:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.887 1+0 records in 00:12:28.887 1+0 records out 00:12:28.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257955 s, 15.9 MB/s 00:12:28.887 23:45:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.887 23:45:59 -- common/autotest_common.sh@884 -- # size=4096 00:12:28.887 23:45:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.887 23:45:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:28.887 23:45:59 -- common/autotest_common.sh@887 -- # return 0 00:12:28.887 23:45:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:28.887 23:45:59 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:28.887 23:45:59 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:28.887 23:45:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:28.887 23:45:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:28.887 23:45:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:28.887 { 00:12:28.887 "nbd_device": "/dev/nbd0", 00:12:28.887 "bdev_name": "nvme0n1" 00:12:28.887 }, 00:12:28.887 { 00:12:28.887 "nbd_device": "/dev/nbd1", 00:12:28.887 "bdev_name": "nvme1n1" 00:12:28.887 }, 00:12:28.887 { 00:12:28.887 "nbd_device": "/dev/nbd10", 00:12:28.887 "bdev_name": "nvme1n2" 00:12:28.887 }, 00:12:28.887 { 00:12:28.887 "nbd_device": "/dev/nbd11", 00:12:28.887 "bdev_name": "nvme1n3" 00:12:28.887 }, 00:12:28.887 { 
00:12:28.887 "nbd_device": "/dev/nbd12", 00:12:28.887 "bdev_name": "nvme2n1" 00:12:28.887 }, 00:12:28.887 { 00:12:28.887 "nbd_device": "/dev/nbd13", 00:12:28.887 "bdev_name": "nvme3n1" 00:12:28.887 } 00:12:28.887 ]' 00:12:28.887 23:45:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:28.888 23:45:59 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:28.888 { 00:12:28.888 "nbd_device": "/dev/nbd0", 00:12:28.888 "bdev_name": "nvme0n1" 00:12:28.888 }, 00:12:28.888 { 00:12:28.888 "nbd_device": "/dev/nbd1", 00:12:28.888 "bdev_name": "nvme1n1" 00:12:28.888 }, 00:12:28.888 { 00:12:28.888 "nbd_device": "/dev/nbd10", 00:12:28.888 "bdev_name": "nvme1n2" 00:12:28.888 }, 00:12:28.888 { 00:12:28.888 "nbd_device": "/dev/nbd11", 00:12:28.888 "bdev_name": "nvme1n3" 00:12:28.888 }, 00:12:28.888 { 00:12:28.888 "nbd_device": "/dev/nbd12", 00:12:28.888 "bdev_name": "nvme2n1" 00:12:28.888 }, 00:12:28.888 { 00:12:28.888 "nbd_device": "/dev/nbd13", 00:12:28.888 "bdev_name": "nvme3n1" 00:12:28.888 } 00:12:28.888 ]' 00:12:29.148 23:45:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:29.148 /dev/nbd1 00:12:29.148 /dev/nbd10 00:12:29.148 /dev/nbd11 00:12:29.148 /dev/nbd12 00:12:29.148 /dev/nbd13' 00:12:29.148 23:45:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:29.148 23:45:59 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:29.148 /dev/nbd1 00:12:29.148 /dev/nbd10 00:12:29.148 /dev/nbd11 00:12:29.148 /dev/nbd12 00:12:29.148 /dev/nbd13' 00:12:29.148 23:45:59 -- bdev/nbd_common.sh@65 -- # count=6 00:12:29.148 23:45:59 -- bdev/nbd_common.sh@66 -- # echo 6 00:12:29.148 23:45:59 -- bdev/nbd_common.sh@95 -- # count=6 00:12:29.148 23:45:59 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:29.148 23:45:59 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:29.148 23:45:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:29.148 23:45:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:29.148 23:45:59 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:29.148 23:45:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:29.148 23:45:59 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:29.148 23:45:59 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:29.148 256+0 records in 00:12:29.148 256+0 records out 00:12:29.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00574899 s, 182 MB/s 00:12:29.148 23:45:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:29.148 23:45:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:29.148 256+0 records in 00:12:29.148 256+0 records out 00:12:29.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.107555 s, 9.7 MB/s 00:12:29.148 23:45:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:29.149 23:45:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:29.410 256+0 records in 00:12:29.410 256+0 records out 00:12:29.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.218968 s, 4.8 MB/s 00:12:29.410 23:45:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:29.410 23:45:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 
count=256 oflag=direct 00:12:29.671 256+0 records in 00:12:29.671 256+0 records out 00:12:29.671 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.246162 s, 4.3 MB/s 00:12:29.671 23:46:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:29.671 23:46:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:29.932 256+0 records in 00:12:29.932 256+0 records out 00:12:29.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.252691 s, 4.1 MB/s 00:12:29.932 23:46:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:29.932 23:46:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:30.192 256+0 records in 00:12:30.192 256+0 records out 00:12:30.192 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.265728 s, 3.9 MB/s 00:12:30.192 23:46:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:30.192 23:46:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:30.451 256+0 records in 00:12:30.451 256+0 records out 00:12:30.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238649 s, 4.4 MB/s 00:12:30.451 23:46:00 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:30.451 23:46:00 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:30.451 23:46:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:30.451 23:46:00 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:30.451 23:46:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:30.451 23:46:00 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:30.451 23:46:00 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:30.451 23:46:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:30.451 23:46:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:30.451 23:46:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:30.451 23:46:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:30.451 23:46:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:30.451 23:46:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:30.451 23:46:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:30.451 23:46:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:30.451 23:46:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:30.451 23:46:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:30.451 23:46:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:30.451 23:46:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:30.451 23:46:01 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:30.451 23:46:01 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:30.451 23:46:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:30.451 
23:46:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:30.451 23:46:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:30.451 23:46:01 -- bdev/nbd_common.sh@51 -- # local i 00:12:30.451 23:46:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.451 23:46:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:30.709 23:46:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:30.709 23:46:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:30.709 23:46:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:30.709 23:46:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.709 23:46:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.709 23:46:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:30.709 23:46:01 -- bdev/nbd_common.sh@41 -- # break 00:12:30.709 23:46:01 -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.709 23:46:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.709 23:46:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:30.966 23:46:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:30.966 23:46:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:30.966 23:46:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:30.966 23:46:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.966 23:46:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.966 23:46:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:30.966 23:46:01 -- bdev/nbd_common.sh@41 -- # break 00:12:30.966 23:46:01 -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.966 23:46:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.966 23:46:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:30.966 23:46:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:30.966 23:46:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:30.966 23:46:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:30.966 23:46:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.966 23:46:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.966 23:46:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:30.966 23:46:01 -- bdev/nbd_common.sh@41 -- # break 00:12:30.966 23:46:01 -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.966 23:46:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.966 23:46:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:31.224 23:46:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:31.224 23:46:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:31.224 23:46:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:31.224 23:46:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.224 23:46:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.224 23:46:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:31.224 23:46:01 -- bdev/nbd_common.sh@41 -- # break 00:12:31.224 23:46:01 -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.224 23:46:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.224 23:46:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:31.492 23:46:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:31.492 23:46:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:31.492 23:46:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:31.492 23:46:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.492 23:46:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.492 23:46:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:31.492 23:46:02 -- bdev/nbd_common.sh@41 -- # break 00:12:31.492 23:46:02 -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.492 23:46:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.492 23:46:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@41 -- # break 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@65 -- # true 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@65 -- # count=0 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@104 -- # count=0 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@109 -- # return 0 00:12:31.768 23:46:02 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:31.768 23:46:02 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:32.026 malloc_lvol_verify 00:12:32.026 23:46:02 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:32.284 3d8f2efd-16a2-42b5-bec4-688e27e8d381 00:12:32.284 23:46:02 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
bdev_lvol_create lvol 4 -l lvs 00:12:32.542 45c5268c-31b4-4364-b1f6-75bb6590554a 00:12:32.542 23:46:03 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:32.542 /dev/nbd0 00:12:32.542 23:46:03 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:32.542 mke2fs 1.47.0 (5-Feb-2023) 00:12:32.542 Discarding device blocks: 0/4096 done 00:12:32.542 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:32.542 00:12:32.542 Allocating group tables: 0/1 done 00:12:32.542 Writing inode tables: 0/1 done 00:12:32.542 Creating journal (1024 blocks): done 00:12:32.542 Writing superblocks and filesystem accounting information: 0/1 done 00:12:32.542 00:12:32.542 23:46:03 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:32.542 23:46:03 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:32.542 23:46:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:32.542 23:46:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:32.542 23:46:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:32.542 23:46:03 -- bdev/nbd_common.sh@51 -- # local i 00:12:32.542 23:46:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:32.542 23:46:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:32.800 23:46:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:32.800 23:46:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:32.800 23:46:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:32.800 23:46:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:32.800 23:46:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:32.800 23:46:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:32.800 23:46:03 -- bdev/nbd_common.sh@41 -- # break 00:12:32.800 23:46:03 -- bdev/nbd_common.sh@45 -- # return 0 00:12:32.800 23:46:03 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:32.800 23:46:03 -- bdev/nbd_common.sh@147 -- # return 0 00:12:32.800 23:46:03 -- bdev/blockdev.sh@324 -- # killprocess 67893 00:12:32.800 23:46:03 -- common/autotest_common.sh@936 -- # '[' -z 67893 ']' 00:12:32.800 23:46:03 -- common/autotest_common.sh@940 -- # kill -0 67893 00:12:32.800 23:46:03 -- common/autotest_common.sh@941 -- # uname 00:12:32.800 23:46:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:32.800 23:46:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67893 00:12:32.800 23:46:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:32.800 23:46:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:32.800 killing process with pid 67893 00:12:32.800 23:46:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67893' 00:12:32.800 23:46:03 -- common/autotest_common.sh@955 -- # kill 67893 00:12:32.800 23:46:03 -- common/autotest_common.sh@960 -- # wait 67893 00:12:33.736 23:46:04 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:12:33.736 00:12:33.736 real 0m10.028s 00:12:33.736 user 0m13.536s 00:12:33.736 sys 0m3.384s 00:12:33.736 23:46:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:33.736 ************************************ 00:12:33.736 END TEST bdev_nbd 00:12:33.736 ************************************ 00:12:33.736 23:46:04 -- common/autotest_common.sh@10 -- # set +x 00:12:33.736 23:46:04 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:12:33.736 23:46:04 -- 
bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:12:33.736 23:46:04 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:12:33.736 23:46:04 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:12:33.736 23:46:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:33.736 23:46:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:33.736 23:46:04 -- common/autotest_common.sh@10 -- # set +x 00:12:33.736 ************************************ 00:12:33.736 START TEST bdev_fio 00:12:33.736 ************************************ 00:12:33.736 23:46:04 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:12:33.736 23:46:04 -- bdev/blockdev.sh@329 -- # local env_context 00:12:33.736 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:33.736 23:46:04 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:33.736 23:46:04 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:33.736 23:46:04 -- bdev/blockdev.sh@337 -- # echo '' 00:12:33.736 23:46:04 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:12:33.736 23:46:04 -- bdev/blockdev.sh@337 -- # env_context= 00:12:33.736 23:46:04 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:33.736 23:46:04 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:33.736 23:46:04 -- common/autotest_common.sh@1270 -- # local workload=verify 00:12:33.736 23:46:04 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:12:33.736 23:46:04 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:33.736 23:46:04 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:33.736 23:46:04 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:33.736 23:46:04 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:12:33.736 23:46:04 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:33.736 23:46:04 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:33.736 23:46:04 -- common/autotest_common.sh@1290 -- # cat 00:12:33.736 23:46:04 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:12:33.736 23:46:04 -- common/autotest_common.sh@1303 -- # cat 00:12:33.736 23:46:04 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:12:33.736 23:46:04 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:12:33.736 23:46:04 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:33.736 23:46:04 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:12:33.736 23:46:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:33.736 23:46:04 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:12:33.736 23:46:04 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:12:33.736 23:46:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:33.736 23:46:04 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:12:33.736 23:46:04 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:12:33.736 23:46:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:33.736 23:46:04 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:12:33.736 23:46:04 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:12:33.736 23:46:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:33.736 23:46:04 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 
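[Annotation] The blockdev.sh@339-341 loop traced around this point builds the fio job file one bdev at a time. A minimal sketch of that pattern, assuming the bdev names and the bdev.fio path shown in this run (the authoritative logic lives in test/bdev/blockdev.sh):

```bash
# Sketch of the job-file assembly traced above; bdev names taken from this run.
fio_config=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
for b in nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1; do
    echo "[job_$b]"    >> "$fio_config"   # one fio job section per bdev
    echo "filename=$b" >> "$fio_config"   # the spdk_bdev ioengine resolves this name to the bdev
done
```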
00:12:33.736 23:46:04 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:12:33.736 23:46:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:33.736 23:46:04 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:12:33.736 23:46:04 -- bdev/blockdev.sh@341 -- # echo filename=nvme2n1 00:12:33.736 23:46:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:33.736 23:46:04 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:12:33.736 23:46:04 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:12:33.736 23:46:04 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:33.736 23:46:04 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:33.736 23:46:04 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:33.737 23:46:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:33.737 23:46:04 -- common/autotest_common.sh@10 -- # set +x 00:12:33.737 ************************************ 00:12:33.737 START TEST bdev_fio_rw_verify 00:12:33.737 ************************************ 00:12:33.737 23:46:04 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:33.737 23:46:04 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:33.737 23:46:04 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:12:33.737 23:46:04 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:33.737 23:46:04 -- common/autotest_common.sh@1328 -- # local sanitizers 00:12:33.737 23:46:04 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:33.737 23:46:04 -- common/autotest_common.sh@1330 -- # shift 00:12:33.737 23:46:04 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:12:33.737 23:46:04 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:12:33.737 23:46:04 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:33.737 23:46:04 -- common/autotest_common.sh@1334 -- # grep libasan 00:12:33.737 23:46:04 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:12:33.737 23:46:04 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:33.737 23:46:04 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:33.737 23:46:04 -- common/autotest_common.sh@1336 -- # break 00:12:33.737 23:46:04 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:33.737 23:46:04 -- common/autotest_common.sh@1341 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:33.737 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:33.737 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:33.737 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:33.737 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:33.737 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:33.737 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:33.737 fio-3.35 00:12:33.737 Starting 6 threads 00:12:45.966 00:12:45.966 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=68292: Fri Dec 13 23:46:15 2024 00:12:45.966 read: IOPS=14.5k, BW=56.5MiB/s (59.3MB/s)(565MiB/10002msec) 00:12:45.966 slat (usec): min=2, max=1402, avg= 6.91, stdev=15.08 00:12:45.966 clat (usec): min=89, max=770418, avg=1347.31, stdev=5780.15 00:12:45.966 lat (usec): min=93, max=770425, avg=1354.22, stdev=5780.25 00:12:45.966 clat percentiles (usec): 00:12:45.966 | 50.000th=[ 1205], 99.000th=[ 3720], 99.900th=[ 5276], 00:12:45.966 | 99.990th=[ 44303], 99.999th=[767558] 00:12:45.966 write: IOPS=14.9k, BW=58.3MiB/s (61.1MB/s)(583MiB/10002msec); 0 zone resets 00:12:45.966 slat (usec): min=12, max=4343, avg=43.82, stdev=146.06 00:12:45.966 clat (usec): min=94, max=8369, avg=1575.64, stdev=858.52 00:12:45.966 lat (usec): min=109, max=8749, avg=1619.47, stdev=871.61 00:12:45.966 clat percentiles (usec): 00:12:45.966 | 50.000th=[ 1434], 99.000th=[ 4228], 99.900th=[ 5604], 99.990th=[ 7767], 00:12:45.966 | 99.999th=[ 8356] 00:12:45.966 bw ( KiB/s): min=44453, max=96575, per=100.00%, avg=60500.06, stdev=2395.16, samples=113 00:12:45.966 iops : min=11110, max=24140, avg=15123.70, stdev=598.72, samples=113 00:12:45.966 lat (usec) : 100=0.01%, 250=2.81%, 500=7.92%, 750=10.24%, 1000=12.05% 00:12:45.966 lat (msec) : 2=45.45%, 4=20.47%, 10=1.06%, 50=0.01%, 1000=0.01% 00:12:45.966 cpu : usr=43.96%, sys=32.01%, ctx=5941, majf=0, minf=16566 00:12:45.966 IO depths : 1=11.1%, 2=23.5%, 4=51.4%, 8=13.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:45.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.966 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.966 issued rwts: total=144739,149153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:45.966 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:45.966 00:12:45.966 Run status group 0 (all jobs): 00:12:45.966 READ: bw=56.5MiB/s (59.3MB/s), 56.5MiB/s-56.5MiB/s (59.3MB/s-59.3MB/s), io=565MiB (593MB), run=10002-10002msec 00:12:45.966 WRITE: bw=58.3MiB/s (61.1MB/s), 58.3MiB/s-58.3MiB/s (61.1MB/s-61.1MB/s), io=583MiB (611MB), run=10002-10002msec 00:12:45.966 ----------------------------------------------------- 00:12:45.966 Suppressions used: 00:12:45.966 count bytes template 00:12:45.966 6 48 /usr/src/fio/parse.c 00:12:45.966 4332 415872 /usr/src/fio/iolog.c 00:12:45.966 1 8 libtcmalloc_minimal.so 00:12:45.966 1 904 libcrypto.so 00:12:45.966 
----------------------------------------------------- 00:12:45.966 00:12:45.966 00:12:45.966 real 0m11.878s 00:12:45.966 user 0m27.940s 00:12:45.966 sys 0m19.562s 00:12:45.966 ************************************ 00:12:45.966 END TEST bdev_fio_rw_verify 00:12:45.966 ************************************ 00:12:45.966 23:46:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:45.966 23:46:16 -- common/autotest_common.sh@10 -- # set +x 00:12:45.966 23:46:16 -- bdev/blockdev.sh@348 -- # rm -f 00:12:45.966 23:46:16 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:45.966 23:46:16 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:45.966 23:46:16 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:45.966 23:46:16 -- common/autotest_common.sh@1270 -- # local workload=trim 00:12:45.966 23:46:16 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:12:45.966 23:46:16 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:45.966 23:46:16 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:45.966 23:46:16 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:45.966 23:46:16 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:12:45.966 23:46:16 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:45.966 23:46:16 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:45.966 23:46:16 -- common/autotest_common.sh@1290 -- # cat 00:12:45.966 23:46:16 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:12:45.966 23:46:16 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:12:45.966 23:46:16 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:12:45.966 23:46:16 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:45.966 23:46:16 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "32bcb412-bfbe-4bd4-8be0-d502d684af85"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "32bcb412-bfbe-4bd4-8be0-d502d684af85",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "dc6bf602-ec31-4c30-a312-840cea28a7b0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dc6bf602-ec31-4c30-a312-840cea28a7b0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "5b8c72c4-875a-4950-9cc7-80da6e7a8c27"' ' ],' ' "product_name": 
"xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5b8c72c4-875a-4950-9cc7-80da6e7a8c27",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "cee50af2-f98b-443f-a0e8-615cf83269d2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cee50af2-f98b-443f-a0e8-615cf83269d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "4c52e80d-25c3-49a7-8c80-bf679d96930a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4c52e80d-25c3-49a7-8c80-bf679d96930a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "610979cd-b6a7-430a-8d6c-39f4d12e0ad8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "610979cd-b6a7-430a-8d6c-39f4d12e0ad8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:12:45.966 23:46:16 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:12:45.966 23:46:16 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:45.966 /home/vagrant/spdk_repo/spdk 00:12:45.966 23:46:16 -- bdev/blockdev.sh@360 -- # popd 00:12:45.966 23:46:16 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:12:45.966 23:46:16 -- bdev/blockdev.sh@362 -- # return 0 00:12:45.966 00:12:45.966 real 0m12.055s 00:12:45.966 user 0m28.021s 00:12:45.966 sys 0m19.636s 00:12:45.966 23:46:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:45.966 ************************************ 00:12:45.967 END TEST bdev_fio 00:12:45.967 ************************************ 00:12:45.967 23:46:16 -- common/autotest_common.sh@10 -- # set +x 00:12:45.967 23:46:16 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:45.967 23:46:16 -- 
bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:45.967 23:46:16 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:45.967 23:46:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:45.967 23:46:16 -- common/autotest_common.sh@10 -- # set +x 00:12:45.967 ************************************ 00:12:45.967 START TEST bdev_verify 00:12:45.967 ************************************ 00:12:45.967 23:46:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:45.967 [2024-12-13 23:46:16.390504] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:45.967 [2024-12-13 23:46:16.390639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68464 ] 00:12:45.967 [2024-12-13 23:46:16.545054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:46.228 [2024-12-13 23:46:16.812943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.228 [2024-12-13 23:46:16.813062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.800 Running I/O for 5 seconds... 00:12:52.099 00:12:52.099 Latency(us) 00:12:52.099 [2024-12-13T23:46:22.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:52.099 [2024-12-13T23:46:22.831Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:52.099 Verification LBA range: start 0x0 length 0x20000 00:12:52.099 nvme0n1 : 5.09 2270.92 8.87 0.00 0.00 55859.33 18249.26 74610.22 00:12:52.099 [2024-12-13T23:46:22.831Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:52.099 Verification LBA range: start 0x20000 length 0x20000 00:12:52.099 nvme0n1 : 5.08 2138.98 8.36 0.00 0.00 59584.46 18854.20 77433.30 00:12:52.099 [2024-12-13T23:46:22.831Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:52.099 Verification LBA range: start 0x0 length 0x80000 00:12:52.099 nvme1n1 : 5.08 2200.99 8.60 0.00 0.00 57743.00 7259.37 77433.30 00:12:52.099 [2024-12-13T23:46:22.831Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:52.099 Verification LBA range: start 0x80000 length 0x80000 00:12:52.099 nvme1n1 : 5.09 2245.73 8.77 0.00 0.00 56758.02 3478.45 66947.54 00:12:52.099 [2024-12-13T23:46:22.831Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:52.099 Verification LBA range: start 0x0 length 0x80000 00:12:52.099 nvme1n2 : 5.10 2235.38 8.73 0.00 0.00 56951.12 10989.88 69367.34 00:12:52.099 [2024-12-13T23:46:22.831Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:52.099 Verification LBA range: start 0x80000 length 0x80000 00:12:52.099 nvme1n2 : 5.09 2282.48 8.92 0.00 0.00 55835.46 6251.13 71383.83 00:12:52.099 [2024-12-13T23:46:22.831Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:52.099 Verification LBA range: start 0x0 length 0x80000 00:12:52.099 nvme1n3 : 5.09 2136.82 8.35 0.00 0.00 59481.31 5520.15 79853.10 00:12:52.099 [2024-12-13T23:46:22.831Z] Job: nvme1n3 (Core Mask 0x2, workload: 
verify, depth: 128, IO size: 4096) 00:12:52.099 Verification LBA range: start 0x80000 length 0x80000 00:12:52.099 nvme1n3 : 5.10 2281.08 8.91 0.00 0.00 55779.16 3453.24 76626.71 00:12:52.099 [2024-12-13T23:46:22.831Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:52.099 Verification LBA range: start 0x0 length 0xbd0bd 00:12:52.099 nvme2n1 : 5.10 2066.56 8.07 0.00 0.00 61344.20 2835.69 121796.14 00:12:52.099 [2024-12-13T23:46:22.831Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:52.099 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:52.099 nvme2n1 : 5.10 1906.96 7.45 0.00 0.00 66463.68 2356.78 120182.94 00:12:52.099 [2024-12-13T23:46:22.831Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:52.099 Verification LBA range: start 0x0 length 0xa0000 00:12:52.099 nvme3n1 : 5.09 2255.11 8.81 0.00 0.00 56152.09 10485.76 79853.10 00:12:52.099 [2024-12-13T23:46:22.831Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:52.099 Verification LBA range: start 0xa0000 length 0xa0000 00:12:52.099 nvme3n1 : 5.10 2006.82 7.84 0.00 0.00 63165.06 8620.50 76223.41 00:12:52.099 [2024-12-13T23:46:22.831Z] =================================================================================================================== 00:12:52.099 [2024-12-13T23:46:22.831Z] Total : 26027.83 101.67 0.00 0.00 58587.78 2356.78 121796.14 00:12:52.669 00:12:52.669 real 0m6.981s 00:12:52.669 user 0m8.729s 00:12:52.669 sys 0m3.221s 00:12:52.669 ************************************ 00:12:52.669 END TEST bdev_verify 00:12:52.669 ************************************ 00:12:52.669 23:46:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:52.669 23:46:23 -- common/autotest_common.sh@10 -- # set +x 00:12:52.669 23:46:23 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:52.669 23:46:23 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:52.669 23:46:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:52.669 23:46:23 -- common/autotest_common.sh@10 -- # set +x 00:12:52.669 ************************************ 00:12:52.669 START TEST bdev_verify_big_io 00:12:52.669 ************************************ 00:12:52.669 23:46:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:52.929 [2024-12-13 23:46:23.439760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:52.930 [2024-12-13 23:46:23.439900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68569 ] 00:12:52.930 [2024-12-13 23:46:23.592248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:53.191 [2024-12-13 23:46:23.814635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.191 [2024-12-13 23:46:23.814759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.763 Running I/O for 5 seconds... 
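[Annotation] While the 5-second verify workload runs, note the bdevperf invocation that drives it, reconstructed from the run_test bdev_verify line above. The -q/-o/-w/-t flags are standard bdevperf options; -C and the trailing '' are copied verbatim from the trace:

```bash
# Invocation as traced above: -q 128 (queue depth), -o 4096 (4 KiB I/O size),
# -w verify (write, read back, compare), -t 5 (seconds), -m 0x3 (core mask:
# cores 0 and 1, matching the two reactors started in the log).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
```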
00:13:00.352 00:13:00.352 Latency(us) 00:13:00.352 [2024-12-13T23:46:31.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:00.352 [2024-12-13T23:46:31.084Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:00.352 Verification LBA range: start 0x0 length 0x2000 00:13:00.352 nvme0n1 : 5.49 269.17 16.82 0.00 0.00 465430.16 103244.41 512995.64 00:13:00.352 [2024-12-13T23:46:31.084Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:00.352 Verification LBA range: start 0x2000 length 0x2000 00:13:00.352 nvme0n1 : 5.57 298.11 18.63 0.00 0.00 423962.65 35893.56 774333.05 00:13:00.352 [2024-12-13T23:46:31.084Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:00.352 Verification LBA range: start 0x0 length 0x8000 00:13:00.352 nvme1n1 : 5.49 269.03 16.81 0.00 0.00 460247.45 98404.82 561391.46 00:13:00.352 [2024-12-13T23:46:31.084Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:00.352 Verification LBA range: start 0x8000 length 0x8000 00:13:00.352 nvme1n1 : 5.56 265.92 16.62 0.00 0.00 460236.81 80659.69 522674.81 00:13:00.352 [2024-12-13T23:46:31.084Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:00.352 Verification LBA range: start 0x0 length 0x8000 00:13:00.352 nvme1n2 : 5.50 253.09 15.82 0.00 0.00 482595.03 75416.81 529127.58 00:13:00.352 [2024-12-13T23:46:31.084Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:00.352 Verification LBA range: start 0x8000 length 0x8000 00:13:00.352 nvme1n2 : 5.56 250.16 15.64 0.00 0.00 482561.40 76626.71 538806.74 00:13:00.352 [2024-12-13T23:46:31.084Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:00.352 Verification LBA range: start 0x0 length 0x8000 00:13:00.352 nvme1n3 : 5.48 236.75 14.80 0.00 0.00 509282.31 72593.72 590428.95 00:13:00.352 [2024-12-13T23:46:31.084Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:00.352 Verification LBA range: start 0x8000 length 0x8000 00:13:00.352 nvme1n3 : 5.56 250.78 15.67 0.00 0.00 474431.35 60898.07 635598.38 00:13:00.352 [2024-12-13T23:46:31.084Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:00.352 Verification LBA range: start 0x0 length 0xbd0b 00:13:00.352 nvme2n1 : 5.50 290.28 18.14 0.00 0.00 408449.11 74610.22 561391.46 00:13:00.352 [2024-12-13T23:46:31.084Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:00.352 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:00.352 nvme2n1 : 5.57 328.02 20.50 0.00 0.00 355448.99 29440.79 500090.09 00:13:00.352 [2024-12-13T23:46:31.084Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:00.352 Verification LBA range: start 0x0 length 0xa000 00:13:00.352 nvme3n1 : 5.50 317.73 19.86 0.00 0.00 372270.16 12300.60 538806.74 00:13:00.352 [2024-12-13T23:46:31.084Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:00.352 Verification LBA range: start 0xa000 length 0xa000 00:13:00.352 nvme3n1 : 5.58 265.42 16.59 0.00 0.00 429784.65 2923.91 609787.27 00:13:00.352 [2024-12-13T23:46:31.084Z] =================================================================================================================== 00:13:00.352 [2024-12-13T23:46:31.084Z] Total : 3294.46 205.90 0.00 0.00 439386.25 2923.91 774333.05 00:13:00.613 00:13:00.613 real 0m7.744s 00:13:00.613 user 
0m13.629s 00:13:00.613 sys 0m0.643s 00:13:00.613 23:46:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:00.613 ************************************ 00:13:00.613 END TEST bdev_verify_big_io 00:13:00.613 ************************************ 00:13:00.613 23:46:31 -- common/autotest_common.sh@10 -- # set +x 00:13:00.613 23:46:31 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:00.613 23:46:31 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:00.613 23:46:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:00.613 23:46:31 -- common/autotest_common.sh@10 -- # set +x 00:13:00.613 ************************************ 00:13:00.613 START TEST bdev_write_zeroes 00:13:00.613 ************************************ 00:13:00.613 23:46:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:00.613 [2024-12-13 23:46:31.271369] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:00.613 [2024-12-13 23:46:31.271552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68673 ] 00:13:00.875 [2024-12-13 23:46:31.423748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.136 [2024-12-13 23:46:31.698382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.708 Running I/O for 1 seconds... 00:13:02.652 00:13:02.652 Latency(us) 00:13:02.652 [2024-12-13T23:46:33.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.652 [2024-12-13T23:46:33.384Z] Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:02.652 nvme0n1 : 1.02 11710.88 45.75 0.00 0.00 10919.04 8217.21 22786.36 00:13:02.652 [2024-12-13T23:46:33.384Z] Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:02.652 nvme1n1 : 1.01 11696.32 45.69 0.00 0.00 10920.85 8267.62 24601.21 00:13:02.652 [2024-12-13T23:46:33.384Z] Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:02.652 nvme1n2 : 1.01 11682.47 45.63 0.00 0.00 10924.37 8217.21 24601.21 00:13:02.652 [2024-12-13T23:46:33.384Z] Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:02.652 nvme1n3 : 1.01 11669.00 45.58 0.00 0.00 10927.44 8217.21 24399.56 00:13:02.652 [2024-12-13T23:46:33.384Z] Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:02.652 nvme2n1 : 1.02 12670.52 49.49 0.00 0.00 10049.30 6251.13 19862.45 00:13:02.652 [2024-12-13T23:46:33.384Z] Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:02.652 nvme3n1 : 1.02 11682.16 45.63 0.00 0.00 10891.05 7108.14 22887.19 00:13:02.652 [2024-12-13T23:46:33.384Z] =================================================================================================================== 00:13:02.652 [2024-12-13T23:46:33.384Z] Total : 71111.34 277.78 0.00 0.00 10761.24 6251.13 24601.21 00:13:03.597 00:13:03.597 real 0m2.879s 00:13:03.597 user 0m2.135s 00:13:03.597 sys 0m0.569s 00:13:03.597 ************************************ 00:13:03.597 END TEST 
bdev_write_zeroes 00:13:03.597 ************************************ 00:13:03.598 23:46:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:03.598 23:46:34 -- common/autotest_common.sh@10 -- # set +x 00:13:03.598 23:46:34 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:03.598 23:46:34 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:03.598 23:46:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:03.598 23:46:34 -- common/autotest_common.sh@10 -- # set +x 00:13:03.598 ************************************ 00:13:03.598 START TEST bdev_json_nonenclosed 00:13:03.598 ************************************ 00:13:03.598 23:46:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:03.598 [2024-12-13 23:46:34.217256] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:03.598 [2024-12-13 23:46:34.217633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68732 ] 00:13:03.859 [2024-12-13 23:46:34.371560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.121 [2024-12-13 23:46:34.592254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.121 [2024-12-13 23:46:34.592444] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:04.121 [2024-12-13 23:46:34.592465] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:04.382 ************************************ 00:13:04.382 00:13:04.382 real 0m0.756s 00:13:04.382 user 0m0.522s 00:13:04.382 sys 0m0.126s 00:13:04.382 23:46:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:04.382 23:46:34 -- common/autotest_common.sh@10 -- # set +x 00:13:04.382 END TEST bdev_json_nonenclosed 00:13:04.382 ************************************ 00:13:04.382 23:46:34 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:04.382 23:46:34 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:04.382 23:46:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:04.382 23:46:34 -- common/autotest_common.sh@10 -- # set +x 00:13:04.382 ************************************ 00:13:04.382 START TEST bdev_json_nonarray 00:13:04.382 ************************************ 00:13:04.382 23:46:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:04.382 [2024-12-13 23:46:35.037659] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
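[Annotation] The two JSON tests here are deliberate failure cases: bdev_json_nonenclosed (finished above) feeds bdevperf a config that is not wrapped in a top-level {} object, and bdev_json_nonarray (starting here) one whose "subsystems" key is not an array; each run is expected to stop with the parser error seen in the trace. A hypothetical stand-in for the first case, since neither json file's actual content appears in this log:

```bash
# Hypothetical input shape only: plausible keys, but no enclosing {} object,
# so spdk_subsystem_init_from_json_config rejects it as seen above.
printf '"subsystems": []\n' > /tmp/nonenclosed.json
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' \
    || echo 'failed as expected: not enclosed in {}'
```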
00:13:04.382 [2024-12-13 23:46:35.037796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68762 ] 00:13:04.643 [2024-12-13 23:46:35.186995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.904 [2024-12-13 23:46:35.406066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.904 [2024-12-13 23:46:35.406281] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:04.904 [2024-12-13 23:46:35.406302] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:05.166 00:13:05.166 real 0m0.747s 00:13:05.166 user 0m0.511s 00:13:05.166 sys 0m0.129s 00:13:05.166 23:46:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:05.166 ************************************ 00:13:05.166 END TEST bdev_json_nonarray 00:13:05.166 ************************************ 00:13:05.166 23:46:35 -- common/autotest_common.sh@10 -- # set +x 00:13:05.166 23:46:35 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:13:05.166 23:46:35 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:13:05.166 23:46:35 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:13:05.166 23:46:35 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:13:05.166 23:46:35 -- bdev/blockdev.sh@809 -- # cleanup 00:13:05.166 23:46:35 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:05.166 23:46:35 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:05.166 23:46:35 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:13:05.166 23:46:35 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:13:05.166 23:46:35 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:13:05.166 23:46:35 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:13:05.166 23:46:35 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:06.108 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:09.448 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:13:09.448 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:13:14.745 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:13:14.745 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:13:14.745 00:13:14.745 real 1m3.142s 00:13:14.745 user 1m22.783s 00:13:14.745 sys 0m48.423s 00:13:14.745 23:46:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:14.745 23:46:44 -- common/autotest_common.sh@10 -- # set +x 00:13:14.745 ************************************ 00:13:14.745 END TEST blockdev_xnvme 00:13:14.745 ************************************ 00:13:14.745 23:46:44 -- spdk/autotest.sh@246 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:14.745 23:46:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:14.745 23:46:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:14.745 23:46:44 -- common/autotest_common.sh@10 -- # set +x 00:13:14.745 ************************************ 00:13:14.745 START TEST ublk 00:13:14.745 ************************************ 00:13:14.745 23:46:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:14.745 * Looking for test storage... 
00:13:14.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:14.745 23:46:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:14.745 23:46:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:14.745 23:46:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:14.745 23:46:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:14.745 23:46:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:14.745 23:46:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:14.745 23:46:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:14.745 23:46:45 -- scripts/common.sh@335 -- # IFS=.-: 00:13:14.745 23:46:45 -- scripts/common.sh@335 -- # read -ra ver1 00:13:14.745 23:46:45 -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.745 23:46:45 -- scripts/common.sh@336 -- # read -ra ver2 00:13:14.745 23:46:45 -- scripts/common.sh@337 -- # local 'op=<' 00:13:14.745 23:46:45 -- scripts/common.sh@339 -- # ver1_l=2 00:13:14.745 23:46:45 -- scripts/common.sh@340 -- # ver2_l=1 00:13:14.745 23:46:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:14.745 23:46:45 -- scripts/common.sh@343 -- # case "$op" in 00:13:14.745 23:46:45 -- scripts/common.sh@344 -- # : 1 00:13:14.745 23:46:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:14.745 23:46:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:14.745 23:46:45 -- scripts/common.sh@364 -- # decimal 1 00:13:14.745 23:46:45 -- scripts/common.sh@352 -- # local d=1 00:13:14.745 23:46:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.745 23:46:45 -- scripts/common.sh@354 -- # echo 1 00:13:14.745 23:46:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:14.745 23:46:45 -- scripts/common.sh@365 -- # decimal 2 00:13:14.745 23:46:45 -- scripts/common.sh@352 -- # local d=2 00:13:14.745 23:46:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.745 23:46:45 -- scripts/common.sh@354 -- # echo 2 00:13:14.745 23:46:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:14.745 23:46:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:14.745 23:46:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:14.745 23:46:45 -- scripts/common.sh@367 -- # return 0 00:13:14.745 23:46:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.745 23:46:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:14.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.745 --rc genhtml_branch_coverage=1 00:13:14.745 --rc genhtml_function_coverage=1 00:13:14.746 --rc genhtml_legend=1 00:13:14.746 --rc geninfo_all_blocks=1 00:13:14.746 --rc geninfo_unexecuted_blocks=1 00:13:14.746 00:13:14.746 ' 00:13:14.746 23:46:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:14.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.746 --rc genhtml_branch_coverage=1 00:13:14.746 --rc genhtml_function_coverage=1 00:13:14.746 --rc genhtml_legend=1 00:13:14.746 --rc geninfo_all_blocks=1 00:13:14.746 --rc geninfo_unexecuted_blocks=1 00:13:14.746 00:13:14.746 ' 00:13:14.746 23:46:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:14.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.746 --rc genhtml_branch_coverage=1 00:13:14.746 --rc genhtml_function_coverage=1 00:13:14.746 --rc genhtml_legend=1 00:13:14.746 --rc geninfo_all_blocks=1 00:13:14.746 --rc geninfo_unexecuted_blocks=1 00:13:14.746 00:13:14.746 ' 00:13:14.746 23:46:45 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:14.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.746 --rc genhtml_branch_coverage=1 00:13:14.746 --rc genhtml_function_coverage=1 00:13:14.746 --rc genhtml_legend=1 00:13:14.746 --rc geninfo_all_blocks=1 00:13:14.746 --rc geninfo_unexecuted_blocks=1 00:13:14.746 00:13:14.746 ' 00:13:14.746 23:46:45 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:14.746 23:46:45 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:14.746 23:46:45 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:14.746 23:46:45 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:14.746 23:46:45 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:14.746 23:46:45 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:14.746 23:46:45 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:14.746 23:46:45 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:14.746 23:46:45 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:14.746 23:46:45 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:14.746 23:46:45 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:14.746 23:46:45 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:14.746 23:46:45 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:14.746 23:46:45 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:14.746 23:46:45 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:14.746 23:46:45 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:14.746 23:46:45 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:14.746 23:46:45 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:14.746 23:46:45 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:14.746 23:46:45 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:14.746 23:46:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:14.746 23:46:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:14.746 23:46:45 -- common/autotest_common.sh@10 -- # set +x 00:13:14.746 ************************************ 00:13:14.746 START TEST test_save_ublk_config 00:13:14.746 ************************************ 00:13:14.746 23:46:45 -- common/autotest_common.sh@1114 -- # test_save_config 00:13:14.746 23:46:45 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:14.746 23:46:45 -- ublk/ublk.sh@103 -- # tgtpid=69122 00:13:14.746 23:46:45 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:14.746 23:46:45 -- ublk/ublk.sh@106 -- # waitforlisten 69122 00:13:14.746 23:46:45 -- common/autotest_common.sh@829 -- # '[' -z 69122 ']' 00:13:14.746 23:46:45 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:14.746 23:46:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.746 23:46:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:14.746 23:46:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.746 23:46:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:14.746 23:46:45 -- common/autotest_common.sh@10 -- # set +x 00:13:14.746 [2024-12-13 23:46:45.227579] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
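The cmp_versions trace above boils down to the sketch below. It assumes purely numeric, dot/dash/colon-separated version components; the real scripts/common.sh additionally sanitizes each field through its decimal helper before comparing.

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v max
    IFS='.-:' read -ra ver1 <<< "$1"        # split "1.15" -> (1 15)
    IFS='.-:' read -ra ver2 <<< "$3"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # missing components compare as 0, so "1.15" vs "2" is well-defined
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]                       # every component matched
}

lt 1.15 2 && echo "lcov 1.x detected"

Since lt 1.15 2 succeeds here, the harness exports the lcov 1.x-style LCOV_OPTS (branch/function coverage flags) seen in the trace.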
00:13:14.746 [2024-12-13 23:46:45.227681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69122 ] 00:13:14.746 [2024-12-13 23:46:45.372859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.006 [2024-12-13 23:46:45.599648] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:15.006 [2024-12-13 23:46:45.599892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.394 23:46:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.394 23:46:46 -- common/autotest_common.sh@862 -- # return 0 00:13:16.394 23:46:46 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:16.394 23:46:46 -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:16.394 23:46:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.394 23:46:46 -- common/autotest_common.sh@10 -- # set +x 00:13:16.394 [2024-12-13 23:46:46.767359] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:16.394 malloc0 00:13:16.394 [2024-12-13 23:46:46.838628] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:16.394 [2024-12-13 23:46:46.838724] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:16.394 [2024-12-13 23:46:46.838735] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:16.394 [2024-12-13 23:46:46.838745] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:16.394 [2024-12-13 23:46:46.847604] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:16.394 [2024-12-13 23:46:46.847638] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:16.394 [2024-12-13 23:46:46.854511] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:16.394 [2024-12-13 23:46:46.854632] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:16.394 [2024-12-13 23:46:46.871506] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:16.394 0 00:13:16.394 23:46:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.394 23:46:46 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:16.394 23:46:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.394 23:46:46 -- common/autotest_common.sh@10 -- # set +x 00:13:16.655 23:46:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.655 23:46:47 -- ublk/ublk.sh@115 -- # config='{ 00:13:16.655 "subsystems": [ 00:13:16.655 { 00:13:16.655 "subsystem": "iobuf", 00:13:16.655 "config": [ 00:13:16.655 { 00:13:16.655 "method": "iobuf_set_options", 00:13:16.655 "params": { 00:13:16.655 "small_pool_count": 8192, 00:13:16.655 "large_pool_count": 1024, 00:13:16.655 "small_bufsize": 8192, 00:13:16.656 "large_bufsize": 135168 00:13:16.656 } 00:13:16.656 } 00:13:16.656 ] 00:13:16.656 }, 00:13:16.656 { 00:13:16.656 "subsystem": "sock", 00:13:16.656 "config": [ 00:13:16.656 { 00:13:16.656 "method": "sock_impl_set_options", 00:13:16.656 "params": { 00:13:16.656 "impl_name": "posix", 00:13:16.656 "recv_buf_size": 2097152, 00:13:16.656 "send_buf_size": 2097152, 00:13:16.656 "enable_recv_pipe": true, 00:13:16.656 "enable_quickack": false, 00:13:16.656 "enable_placement_id": 0, 00:13:16.656 
"enable_zerocopy_send_server": true, 00:13:16.656 "enable_zerocopy_send_client": false, 00:13:16.656 "zerocopy_threshold": 0, 00:13:16.656 "tls_version": 0, 00:13:16.656 "enable_ktls": false 00:13:16.656 } 00:13:16.656 }, 00:13:16.656 { 00:13:16.656 "method": "sock_impl_set_options", 00:13:16.656 "params": { 00:13:16.656 "impl_name": "ssl", 00:13:16.656 "recv_buf_size": 4096, 00:13:16.656 "send_buf_size": 4096, 00:13:16.656 "enable_recv_pipe": true, 00:13:16.656 "enable_quickack": false, 00:13:16.656 "enable_placement_id": 0, 00:13:16.656 "enable_zerocopy_send_server": true, 00:13:16.656 "enable_zerocopy_send_client": false, 00:13:16.656 "zerocopy_threshold": 0, 00:13:16.656 "tls_version": 0, 00:13:16.656 "enable_ktls": false 00:13:16.656 } 00:13:16.656 } 00:13:16.656 ] 00:13:16.656 }, 00:13:16.656 { 00:13:16.656 "subsystem": "vmd", 00:13:16.656 "config": [] 00:13:16.656 }, 00:13:16.656 { 00:13:16.656 "subsystem": "accel", 00:13:16.656 "config": [ 00:13:16.656 { 00:13:16.656 "method": "accel_set_options", 00:13:16.656 "params": { 00:13:16.656 "small_cache_size": 128, 00:13:16.656 "large_cache_size": 16, 00:13:16.656 "task_count": 2048, 00:13:16.656 "sequence_count": 2048, 00:13:16.656 "buf_count": 2048 00:13:16.656 } 00:13:16.656 } 00:13:16.656 ] 00:13:16.656 }, 00:13:16.656 { 00:13:16.656 "subsystem": "bdev", 00:13:16.656 "config": [ 00:13:16.656 { 00:13:16.656 "method": "bdev_set_options", 00:13:16.656 "params": { 00:13:16.656 "bdev_io_pool_size": 65535, 00:13:16.656 "bdev_io_cache_size": 256, 00:13:16.656 "bdev_auto_examine": true, 00:13:16.656 "iobuf_small_cache_size": 128, 00:13:16.656 "iobuf_large_cache_size": 16 00:13:16.656 } 00:13:16.656 }, 00:13:16.656 { 00:13:16.656 "method": "bdev_raid_set_options", 00:13:16.656 "params": { 00:13:16.656 "process_window_size_kb": 1024 00:13:16.656 } 00:13:16.656 }, 00:13:16.656 { 00:13:16.656 "method": "bdev_iscsi_set_options", 00:13:16.656 "params": { 00:13:16.656 "timeout_sec": 30 00:13:16.656 } 00:13:16.656 }, 00:13:16.656 { 00:13:16.656 "method": "bdev_nvme_set_options", 00:13:16.656 "params": { 00:13:16.656 "action_on_timeout": "none", 00:13:16.656 "timeout_us": 0, 00:13:16.656 "timeout_admin_us": 0, 00:13:16.656 "keep_alive_timeout_ms": 10000, 00:13:16.656 "transport_retry_count": 4, 00:13:16.656 "arbitration_burst": 0, 00:13:16.656 "low_priority_weight": 0, 00:13:16.656 "medium_priority_weight": 0, 00:13:16.656 "high_priority_weight": 0, 00:13:16.656 "nvme_adminq_poll_period_us": 10000, 00:13:16.656 "nvme_ioq_poll_period_us": 0, 00:13:16.656 "io_queue_requests": 0, 00:13:16.656 "delay_cmd_submit": true, 00:13:16.656 "bdev_retry_count": 3, 00:13:16.656 "transport_ack_timeout": 0, 00:13:16.656 "ctrlr_loss_timeout_sec": 0, 00:13:16.656 "reconnect_delay_sec": 0, 00:13:16.656 "fast_io_fail_timeout_sec": 0, 00:13:16.656 "generate_uuids": false, 00:13:16.656 "transport_tos": 0, 00:13:16.656 "io_path_stat": false, 00:13:16.656 "allow_accel_sequence": false 00:13:16.656 } 00:13:16.656 }, 00:13:16.656 { 00:13:16.656 "method": "bdev_nvme_set_hotplug", 00:13:16.656 "params": { 00:13:16.656 "period_us": 100000, 00:13:16.656 "enable": false 00:13:16.656 } 00:13:16.656 }, 00:13:16.656 { 00:13:16.656 "method": "bdev_malloc_create", 00:13:16.656 "params": { 00:13:16.656 "name": "malloc0", 00:13:16.656 "num_blocks": 8192, 00:13:16.656 "block_size": 4096, 00:13:16.656 "physical_block_size": 4096, 00:13:16.656 "uuid": "90dce04f-9f67-4139-8f21-3fbb890b3082", 00:13:16.656 "optimal_io_boundary": 0 00:13:16.656 } 00:13:16.656 }, 00:13:16.656 { 00:13:16.656 
"method": "bdev_wait_for_examine" 00:13:16.656 } 00:13:16.656 ] 00:13:16.656 }, 00:13:16.656 { 00:13:16.656 "subsystem": "scsi", 00:13:16.656 "config": null 00:13:16.656 }, 00:13:16.656 { 00:13:16.656 "subsystem": "scheduler", 00:13:16.656 "config": [ 00:13:16.656 { 00:13:16.656 "method": "framework_set_scheduler", 00:13:16.656 "params": { 00:13:16.656 "name": "static" 00:13:16.656 } 00:13:16.656 } 00:13:16.656 ] 00:13:16.656 }, 00:13:16.656 { 00:13:16.656 "subsystem": "vhost_scsi", 00:13:16.656 "config": [] 00:13:16.656 }, 00:13:16.656 { 00:13:16.656 "subsystem": "vhost_blk", 00:13:16.656 "config": [] 00:13:16.656 }, 00:13:16.656 { 00:13:16.656 "subsystem": "ublk", 00:13:16.656 "config": [ 00:13:16.656 { 00:13:16.656 "method": "ublk_create_target", 00:13:16.656 "params": { 00:13:16.656 "cpumask": "1" 00:13:16.656 } 00:13:16.656 }, 00:13:16.656 { 00:13:16.656 "method": "ublk_start_disk", 00:13:16.656 "params": { 00:13:16.656 "bdev_name": "malloc0", 00:13:16.656 "ublk_id": 0, 00:13:16.656 "num_queues": 1, 00:13:16.656 "queue_depth": 128 00:13:16.656 } 00:13:16.656 } 00:13:16.656 ] 00:13:16.656 }, 00:13:16.656 { 00:13:16.656 "subsystem": "nbd", 00:13:16.656 "config": [] 00:13:16.656 }, 00:13:16.656 { 00:13:16.656 "subsystem": "nvmf", 00:13:16.656 "config": [ 00:13:16.656 { 00:13:16.656 "method": "nvmf_set_config", 00:13:16.656 "params": { 00:13:16.656 "discovery_filter": "match_any", 00:13:16.656 "admin_cmd_passthru": { 00:13:16.656 "identify_ctrlr": false 00:13:16.656 } 00:13:16.656 } 00:13:16.656 }, 00:13:16.656 { 00:13:16.656 "method": "nvmf_set_max_subsystems", 00:13:16.656 "params": { 00:13:16.656 "max_subsystems": 1024 00:13:16.656 } 00:13:16.656 }, 00:13:16.656 { 00:13:16.656 "method": "nvmf_set_crdt", 00:13:16.656 "params": { 00:13:16.656 "crdt1": 0, 00:13:16.656 "crdt2": 0, 00:13:16.656 "crdt3": 0 00:13:16.656 } 00:13:16.656 } 00:13:16.656 ] 00:13:16.656 }, 00:13:16.656 { 00:13:16.656 "subsystem": "iscsi", 00:13:16.656 "config": [ 00:13:16.656 { 00:13:16.656 "method": "iscsi_set_options", 00:13:16.656 "params": { 00:13:16.656 "node_base": "iqn.2016-06.io.spdk", 00:13:16.656 "max_sessions": 128, 00:13:16.656 "max_connections_per_session": 2, 00:13:16.656 "max_queue_depth": 64, 00:13:16.656 "default_time2wait": 2, 00:13:16.656 "default_time2retain": 20, 00:13:16.656 "first_burst_length": 8192, 00:13:16.656 "immediate_data": true, 00:13:16.656 "allow_duplicated_isid": false, 00:13:16.656 "error_recovery_level": 0, 00:13:16.656 "nop_timeout": 60, 00:13:16.656 "nop_in_interval": 30, 00:13:16.656 "disable_chap": false, 00:13:16.656 "require_chap": false, 00:13:16.656 "mutual_chap": false, 00:13:16.656 "chap_group": 0, 00:13:16.656 "max_large_datain_per_connection": 64, 00:13:16.656 "max_r2t_per_connection": 4, 00:13:16.656 "pdu_pool_size": 36864, 00:13:16.656 "immediate_data_pool_size": 16384, 00:13:16.656 "data_out_pool_size": 2048 00:13:16.656 } 00:13:16.656 } 00:13:16.656 ] 00:13:16.656 } 00:13:16.656 ] 00:13:16.656 }' 00:13:16.656 23:46:47 -- ublk/ublk.sh@116 -- # killprocess 69122 00:13:16.656 23:46:47 -- common/autotest_common.sh@936 -- # '[' -z 69122 ']' 00:13:16.656 23:46:47 -- common/autotest_common.sh@940 -- # kill -0 69122 00:13:16.656 23:46:47 -- common/autotest_common.sh@941 -- # uname 00:13:16.656 23:46:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:16.656 23:46:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69122 00:13:16.656 23:46:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:16.656 23:46:47 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:16.656 killing process with pid 69122 00:13:16.656 23:46:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69122' 00:13:16.656 23:46:47 -- common/autotest_common.sh@955 -- # kill 69122 00:13:16.656 23:46:47 -- common/autotest_common.sh@960 -- # wait 69122 00:13:17.598 [2024-12-13 23:46:48.257506] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:17.598 [2024-12-13 23:46:48.293568] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:17.598 [2024-12-13 23:46:48.293667] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:17.599 [2024-12-13 23:46:48.303494] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:17.599 [2024-12-13 23:46:48.303549] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:17.599 [2024-12-13 23:46:48.303560] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:17.599 [2024-12-13 23:46:48.303583] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:17.599 [2024-12-13 23:46:48.303692] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:18.974 23:46:49 -- ublk/ublk.sh@119 -- # tgtpid=69185 00:13:18.974 23:46:49 -- ublk/ublk.sh@121 -- # waitforlisten 69185 00:13:18.974 23:46:49 -- common/autotest_common.sh@829 -- # '[' -z 69185 ']' 00:13:18.974 23:46:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.974 23:46:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:18.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.974 23:46:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
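Condensed to its essentials, the save/restore round-trip this test exercises looks like the following, run from the SPDK repo root. The sleep calls stand in for the harness's waitforlisten helper, and /tmp/ublk.json is an illustrative path: as the next trace shows, the harness instead pipes the captured JSON straight into the new target via -c /dev/fd/63.

build/bin/spdk_tgt -L ublk & tgt_pid=$!
sleep 1                                        # harness: waitforlisten $tgt_pid
scripts/rpc.py ublk_create_target
scripts/rpc.py bdev_malloc_create -b malloc0 128 4096
scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128
scripts/rpc.py save_config > /tmp/ublk.json    # full subsystem config, incl. ublk
kill "$tgt_pid"; wait "$tgt_pid"

build/bin/spdk_tgt -L ublk -c /tmp/ublk.json & tgt_pid=$!
sleep 1
[[ -b /dev/ublkb0 ]]                           # device recreated from config alone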
00:13:18.974 23:46:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:18.974 23:46:49 -- common/autotest_common.sh@10 -- # set +x 00:13:18.974 23:46:49 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:18.974 23:46:49 -- ublk/ublk.sh@118 -- # echo '{ 00:13:18.974 "subsystems": [ 00:13:18.974 { 00:13:18.974 "subsystem": "iobuf", 00:13:18.974 "config": [ 00:13:18.974 { 00:13:18.974 "method": "iobuf_set_options", 00:13:18.974 "params": { 00:13:18.974 "small_pool_count": 8192, 00:13:18.974 "large_pool_count": 1024, 00:13:18.974 "small_bufsize": 8192, 00:13:18.974 "large_bufsize": 135168 00:13:18.974 } 00:13:18.974 } 00:13:18.974 ] 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "subsystem": "sock", 00:13:18.974 "config": [ 00:13:18.974 { 00:13:18.974 "method": "sock_impl_set_options", 00:13:18.974 "params": { 00:13:18.974 "impl_name": "posix", 00:13:18.974 "recv_buf_size": 2097152, 00:13:18.974 "send_buf_size": 2097152, 00:13:18.974 "enable_recv_pipe": true, 00:13:18.974 "enable_quickack": false, 00:13:18.974 "enable_placement_id": 0, 00:13:18.974 "enable_zerocopy_send_server": true, 00:13:18.974 "enable_zerocopy_send_client": false, 00:13:18.974 "zerocopy_threshold": 0, 00:13:18.974 "tls_version": 0, 00:13:18.974 "enable_ktls": false 00:13:18.974 } 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "method": "sock_impl_set_options", 00:13:18.974 "params": { 00:13:18.974 "impl_name": "ssl", 00:13:18.974 "recv_buf_size": 4096, 00:13:18.974 "send_buf_size": 4096, 00:13:18.974 "enable_recv_pipe": true, 00:13:18.974 "enable_quickack": false, 00:13:18.974 "enable_placement_id": 0, 00:13:18.974 "enable_zerocopy_send_server": true, 00:13:18.974 "enable_zerocopy_send_client": false, 00:13:18.974 "zerocopy_threshold": 0, 00:13:18.974 "tls_version": 0, 00:13:18.974 "enable_ktls": false 00:13:18.974 } 00:13:18.974 } 00:13:18.974 ] 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "subsystem": "vmd", 00:13:18.974 "config": [] 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "subsystem": "accel", 00:13:18.974 "config": [ 00:13:18.974 { 00:13:18.974 "method": "accel_set_options", 00:13:18.974 "params": { 00:13:18.974 "small_cache_size": 128, 00:13:18.974 "large_cache_size": 16, 00:13:18.974 "task_count": 2048, 00:13:18.974 "sequence_count": 2048, 00:13:18.974 "buf_count": 2048 00:13:18.974 } 00:13:18.974 } 00:13:18.974 ] 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "subsystem": "bdev", 00:13:18.974 "config": [ 00:13:18.974 { 00:13:18.974 "method": "bdev_set_options", 00:13:18.974 "params": { 00:13:18.974 "bdev_io_pool_size": 65535, 00:13:18.974 "bdev_io_cache_size": 256, 00:13:18.974 "bdev_auto_examine": true, 00:13:18.974 "iobuf_small_cache_size": 128, 00:13:18.974 "iobuf_large_cache_size": 16 00:13:18.974 } 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "method": "bdev_raid_set_options", 00:13:18.974 "params": { 00:13:18.974 "process_window_size_kb": 1024 00:13:18.974 } 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "method": "bdev_iscsi_set_options", 00:13:18.974 "params": { 00:13:18.974 "timeout_sec": 30 00:13:18.974 } 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "method": "bdev_nvme_set_options", 00:13:18.974 "params": { 00:13:18.974 "action_on_timeout": "none", 00:13:18.974 "timeout_us": 0, 00:13:18.974 "timeout_admin_us": 0, 00:13:18.974 "keep_alive_timeout_ms": 10000, 00:13:18.974 "transport_retry_count": 4, 00:13:18.974 "arbitration_burst": 0, 00:13:18.974 "low_priority_weight": 0, 00:13:18.974 "medium_priority_weight": 0, 00:13:18.974 "high_priority_weight": 0, 
00:13:18.974 "nvme_adminq_poll_period_us": 10000, 00:13:18.974 "nvme_ioq_poll_period_us": 0, 00:13:18.974 "io_queue_requests": 0, 00:13:18.974 "delay_cmd_submit": true, 00:13:18.974 "bdev_retry_count": 3, 00:13:18.974 "transport_ack_timeout": 0, 00:13:18.974 "ctrlr_loss_timeout_sec": 0, 00:13:18.974 "reconnect_delay_sec": 0, 00:13:18.974 "fast_io_fail_timeout_sec": 0, 00:13:18.974 "generate_uuids": false, 00:13:18.974 "transport_tos": 0, 00:13:18.974 "io_path_stat": false, 00:13:18.974 "allow_accel_sequence": false 00:13:18.974 } 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "method": "bdev_nvme_set_hotplug", 00:13:18.974 "params": { 00:13:18.974 "period_us": 100000, 00:13:18.974 "enable": false 00:13:18.974 } 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "method": "bdev_malloc_create", 00:13:18.974 "params": { 00:13:18.974 "name": "malloc0", 00:13:18.974 "num_blocks": 8192, 00:13:18.974 "block_size": 4096, 00:13:18.974 "physical_block_size": 4096, 00:13:18.974 "uuid": "90dce04f-9f67-4139-8f21-3fbb890b3082", 00:13:18.974 "optimal_io_boundary": 0 00:13:18.974 } 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "method": "bdev_wait_for_examine" 00:13:18.974 } 00:13:18.974 ] 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "subsystem": "scsi", 00:13:18.974 "config": null 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "subsystem": "scheduler", 00:13:18.974 "config": [ 00:13:18.974 { 00:13:18.974 "method": "framework_set_scheduler", 00:13:18.974 "params": { 00:13:18.974 "name": "static" 00:13:18.974 } 00:13:18.974 } 00:13:18.974 ] 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "subsystem": "vhost_scsi", 00:13:18.974 "config": [] 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "subsystem": "vhost_blk", 00:13:18.974 "config": [] 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "subsystem": "ublk", 00:13:18.974 "config": [ 00:13:18.974 { 00:13:18.974 "method": "ublk_create_target", 00:13:18.974 "params": { 00:13:18.974 "cpumask": "1" 00:13:18.974 } 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "method": "ublk_start_disk", 00:13:18.974 "params": { 00:13:18.974 "bdev_name": "malloc0", 00:13:18.974 "ublk_id": 0, 00:13:18.974 "num_queues": 1, 00:13:18.974 "queue_depth": 128 00:13:18.974 } 00:13:18.974 } 00:13:18.974 ] 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "subsystem": "nbd", 00:13:18.974 "config": [] 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "subsystem": "nvmf", 00:13:18.974 "config": [ 00:13:18.974 { 00:13:18.974 "method": "nvmf_set_config", 00:13:18.974 "params": { 00:13:18.974 "discovery_filter": "match_any", 00:13:18.974 "admin_cmd_passthru": { 00:13:18.974 "identify_ctrlr": false 00:13:18.974 } 00:13:18.974 } 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "method": "nvmf_set_max_subsystems", 00:13:18.974 "params": { 00:13:18.974 "max_subsystems": 1024 00:13:18.974 } 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "method": "nvmf_set_crdt", 00:13:18.974 "params": { 00:13:18.974 "crdt1": 0, 00:13:18.974 "crdt2": 0, 00:13:18.974 "crdt3": 0 00:13:18.974 } 00:13:18.974 } 00:13:18.974 ] 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "subsystem": "iscsi", 00:13:18.974 "config": [ 00:13:18.974 { 00:13:18.974 "method": "iscsi_set_options", 00:13:18.974 "params": { 00:13:18.974 "node_base": "iqn.2016-06.io.spdk", 00:13:18.974 "max_sessions": 128, 00:13:18.974 "max_connections_per_session": 2, 00:13:18.974 "max_queue_depth": 64, 00:13:18.974 "default_time2wait": 2, 00:13:18.974 "default_time2retain": 20, 00:13:18.974 "first_burst_length": 8192, 00:13:18.974 "immediate_data": true, 00:13:18.974 "allow_duplicated_isid": false, 00:13:18.974 
"error_recovery_level": 0, 00:13:18.974 "nop_timeout": 60, 00:13:18.974 "nop_in_interval": 30, 00:13:18.974 "disable_chap": false, 00:13:18.974 "require_chap": false, 00:13:18.974 "mutual_chap": false, 00:13:18.974 "chap_group": 0, 00:13:18.974 "max_large_datain_per_connection": 64, 00:13:18.974 "max_r2t_per_connection": 4, 00:13:18.975 "pdu_pool_size": 36864, 00:13:18.975 "immediate_data_pool_size": 16384, 00:13:18.975 "data_out_pool_size": 2048 00:13:18.975 } 00:13:18.975 } 00:13:18.975 ] 00:13:18.975 } 00:13:18.975 ] 00:13:18.975 }' 00:13:18.975 [2024-12-13 23:46:49.552150] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:18.975 [2024-12-13 23:46:49.552273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69185 ] 00:13:18.975 [2024-12-13 23:46:49.702273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.233 [2024-12-13 23:46:49.870944] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:19.233 [2024-12-13 23:46:49.871094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.798 [2024-12-13 23:46:50.452080] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:19.798 [2024-12-13 23:46:50.459576] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:19.798 [2024-12-13 23:46:50.459655] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:19.798 [2024-12-13 23:46:50.459661] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:19.798 [2024-12-13 23:46:50.459667] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:19.798 [2024-12-13 23:46:50.468557] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:19.798 [2024-12-13 23:46:50.468571] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:19.798 [2024-12-13 23:46:50.475499] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:19.798 [2024-12-13 23:46:50.475569] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:19.798 [2024-12-13 23:46:50.492503] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:20.364 23:46:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:20.364 23:46:51 -- common/autotest_common.sh@862 -- # return 0 00:13:20.364 23:46:51 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:20.364 23:46:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.364 23:46:51 -- common/autotest_common.sh@10 -- # set +x 00:13:20.364 23:46:51 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:20.364 23:46:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.364 23:46:51 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:20.364 23:46:51 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:20.364 23:46:51 -- ublk/ublk.sh@125 -- # killprocess 69185 00:13:20.364 23:46:51 -- common/autotest_common.sh@936 -- # '[' -z 69185 ']' 00:13:20.364 23:46:51 -- common/autotest_common.sh@940 -- # kill -0 69185 00:13:20.364 23:46:51 -- common/autotest_common.sh@941 -- # uname 00:13:20.364 23:46:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux 
']' 00:13:20.364 23:46:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69185 00:13:20.623 killing process with pid 69185 00:13:20.623 23:46:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:20.623 23:46:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:20.623 23:46:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69185' 00:13:20.623 23:46:51 -- common/autotest_common.sh@955 -- # kill 69185 00:13:20.623 23:46:51 -- common/autotest_common.sh@960 -- # wait 69185 00:13:21.189 [2024-12-13 23:46:51.834941] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:21.189 [2024-12-13 23:46:51.870571] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:21.189 [2024-12-13 23:46:51.870662] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:21.189 [2024-12-13 23:46:51.880503] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:21.189 [2024-12-13 23:46:51.880541] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:21.189 [2024-12-13 23:46:51.880547] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:21.190 [2024-12-13 23:46:51.880566] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:21.190 [2024-12-13 23:46:51.880673] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:22.568 23:46:53 -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:22.568 00:13:22.568 real 0m7.898s 00:13:22.568 user 0m5.928s 00:13:22.568 sys 0m2.931s 00:13:22.568 23:46:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:22.568 ************************************ 00:13:22.568 END TEST test_save_ublk_config 00:13:22.568 ************************************ 00:13:22.568 23:46:53 -- common/autotest_common.sh@10 -- # set +x 00:13:22.568 23:46:53 -- ublk/ublk.sh@139 -- # spdk_pid=69260 00:13:22.568 23:46:53 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:22.568 23:46:53 -- ublk/ublk.sh@141 -- # waitforlisten 69260 00:13:22.568 23:46:53 -- common/autotest_common.sh@829 -- # '[' -z 69260 ']' 00:13:22.568 23:46:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.568 23:46:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.568 23:46:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.568 23:46:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.568 23:46:53 -- common/autotest_common.sh@10 -- # set +x 00:13:22.568 23:46:53 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:22.568 [2024-12-13 23:46:53.169263] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
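The check just performed (ublk.sh@122-123 above) distills to two lines: query the restored target for its ublk devices, then confirm the kernel actually exposes the matching block node.

blkpath=$(scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device')
[[ $blkpath == /dev/ublkb0 ]]   # RPC view matches the expected name
[[ -b $blkpath ]]               # and the device node really exists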
00:13:22.568 [2024-12-13 23:46:53.169377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69260 ] 00:13:22.827 [2024-12-13 23:46:53.314984] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:22.827 [2024-12-13 23:46:53.460908] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:22.827 [2024-12-13 23:46:53.461291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.827 [2024-12-13 23:46:53.461377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.394 23:46:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.394 23:46:53 -- common/autotest_common.sh@862 -- # return 0 00:13:23.394 23:46:53 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:23.394 23:46:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:23.394 23:46:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:23.394 23:46:53 -- common/autotest_common.sh@10 -- # set +x 00:13:23.394 ************************************ 00:13:23.394 START TEST test_create_ublk 00:13:23.394 ************************************ 00:13:23.394 23:46:53 -- common/autotest_common.sh@1114 -- # test_create_ublk 00:13:23.394 23:46:53 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:23.394 23:46:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.394 23:46:53 -- common/autotest_common.sh@10 -- # set +x 00:13:23.394 [2024-12-13 23:46:53.998996] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:23.394 23:46:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.394 23:46:53 -- ublk/ublk.sh@33 -- # ublk_target= 00:13:23.394 23:46:53 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:23.394 23:46:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.394 23:46:54 -- common/autotest_common.sh@10 -- # set +x 00:13:23.652 23:46:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.652 23:46:54 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:23.652 23:46:54 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:23.652 23:46:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.652 23:46:54 -- common/autotest_common.sh@10 -- # set +x 00:13:23.652 [2024-12-13 23:46:54.158590] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:23.652 [2024-12-13 23:46:54.158886] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:23.652 [2024-12-13 23:46:54.158894] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:23.652 [2024-12-13 23:46:54.158900] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:23.652 [2024-12-13 23:46:54.167661] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:23.652 [2024-12-13 23:46:54.167680] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:23.652 [2024-12-13 23:46:54.174497] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:23.652 [2024-12-13 23:46:54.181659] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:23.652 [2024-12-13 23:46:54.205501] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:13:23.652 23:46:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.652 23:46:54 -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:23.652 23:46:54 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:23.652 23:46:54 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:23.652 23:46:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.652 23:46:54 -- common/autotest_common.sh@10 -- # set +x 00:13:23.652 23:46:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.652 23:46:54 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:23.652 { 00:13:23.652 "ublk_device": "/dev/ublkb0", 00:13:23.652 "id": 0, 00:13:23.652 "queue_depth": 512, 00:13:23.652 "num_queues": 4, 00:13:23.652 "bdev_name": "Malloc0" 00:13:23.652 } 00:13:23.652 ]' 00:13:23.652 23:46:54 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:23.652 23:46:54 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:23.652 23:46:54 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:23.652 23:46:54 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:23.652 23:46:54 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:23.652 23:46:54 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:23.652 23:46:54 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:23.652 23:46:54 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:23.652 23:46:54 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:23.910 23:46:54 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:23.910 23:46:54 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:23.910 23:46:54 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:23.910 23:46:54 -- lvol/common.sh@41 -- # local offset=0 00:13:23.910 23:46:54 -- lvol/common.sh@42 -- # local size=134217728 00:13:23.910 23:46:54 -- lvol/common.sh@43 -- # local rw=write 00:13:23.910 23:46:54 -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:23.910 23:46:54 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:23.910 23:46:54 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:23.910 23:46:54 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:23.910 23:46:54 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:23.910 23:46:54 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:23.910 23:46:54 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:23.910 fio: verification read phase will never start because write phase uses all of runtime 00:13:23.910 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:23.910 fio-3.35 00:13:23.910 Starting 1 process 00:13:36.105 00:13:36.105 fio_test: (groupid=0, jobs=1): err= 0: pid=69300: Fri Dec 13 23:47:04 2024 00:13:36.105 write: IOPS=14.3k, BW=55.8MiB/s (58.5MB/s)(558MiB/10001msec); 0 zone resets 00:13:36.105 clat (usec): min=32, max=12181, avg=69.39, stdev=155.25 00:13:36.105 lat (usec): min=32, max=12181, avg=69.74, stdev=155.27 00:13:36.105 clat percentiles (usec): 00:13:36.105 | 1.00th=[ 47], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 59], 00:13:36.105 | 
30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 63], 00:13:36.105 | 70.00th=[ 65], 80.00th=[ 67], 90.00th=[ 69], 95.00th=[ 72], 00:13:36.105 | 99.00th=[ 81], 99.50th=[ 88], 99.90th=[ 3294], 99.95th=[ 3884], 00:13:36.105 | 99.99th=[ 4080] 00:13:36.105 bw ( KiB/s): min=31120, max=61480, per=99.73%, avg=56979.16, stdev=9740.64, samples=19 00:13:36.105 iops : min= 7780, max=15370, avg=14244.79, stdev=2435.16, samples=19 00:13:36.105 lat (usec) : 50=2.06%, 100=97.55%, 250=0.11%, 500=0.02%, 750=0.01% 00:13:36.105 lat (usec) : 1000=0.01% 00:13:36.105 lat (msec) : 2=0.05%, 4=0.17%, 10=0.02%, 20=0.01% 00:13:36.105 cpu : usr=2.13%, sys=9.43%, ctx=142845, majf=0, minf=796 00:13:36.105 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:36.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.105 issued rwts: total=0,142845,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:36.105 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:36.105 00:13:36.105 Run status group 0 (all jobs): 00:13:36.106 WRITE: bw=55.8MiB/s (58.5MB/s), 55.8MiB/s-55.8MiB/s (58.5MB/s-58.5MB/s), io=558MiB (585MB), run=10001-10001msec 00:13:36.106 00:13:36.106 Disk stats (read/write): 00:13:36.106 ublkb0: ios=0/141248, merge=0/0, ticks=0/8835, in_queue=8836, util=99.06% 00:13:36.106 23:47:04 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:36.106 23:47:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.106 23:47:04 -- common/autotest_common.sh@10 -- # set +x 00:13:36.106 [2024-12-13 23:47:04.651158] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:36.106 [2024-12-13 23:47:04.693540] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:36.106 [2024-12-13 23:47:04.694217] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:36.106 [2024-12-13 23:47:04.701526] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:36.106 [2024-12-13 23:47:04.701780] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:36.106 [2024-12-13 23:47:04.701788] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:36.106 23:47:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.106 23:47:04 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:13:36.106 23:47:04 -- common/autotest_common.sh@650 -- # local es=0 00:13:36.106 23:47:04 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:36.106 23:47:04 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:36.106 23:47:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.106 23:47:04 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:36.106 23:47:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.106 23:47:04 -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:13:36.106 23:47:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.106 23:47:04 -- common/autotest_common.sh@10 -- # set +x 00:13:36.106 [2024-12-13 23:47:04.717584] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:36.106 request: 00:13:36.106 { 00:13:36.106 "ublk_id": 0, 00:13:36.106 "method": "ublk_stop_disk", 00:13:36.106 "req_id": 1 00:13:36.106 } 00:13:36.106 Got JSON-RPC error response 00:13:36.106 response: 00:13:36.106 { 00:13:36.106 "code": -19, 
00:13:36.106 "message": "No such device" 00:13:36.106 } 00:13:36.106 23:47:04 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:36.106 23:47:04 -- common/autotest_common.sh@653 -- # es=1 00:13:36.106 23:47:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:36.106 23:47:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:36.106 23:47:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:36.106 23:47:04 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:36.106 23:47:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.106 23:47:04 -- common/autotest_common.sh@10 -- # set +x 00:13:36.106 [2024-12-13 23:47:04.733549] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:36.106 [2024-12-13 23:47:04.741500] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:36.106 [2024-12-13 23:47:04.741527] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:36.106 23:47:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.106 23:47:04 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:36.106 23:47:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.106 23:47:04 -- common/autotest_common.sh@10 -- # set +x 00:13:36.106 23:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.106 23:47:05 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:36.106 23:47:05 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:36.106 23:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.106 23:47:05 -- common/autotest_common.sh@10 -- # set +x 00:13:36.106 23:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.106 23:47:05 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:36.106 23:47:05 -- lvol/common.sh@26 -- # jq length 00:13:36.106 23:47:05 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:36.106 23:47:05 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:36.106 23:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.106 23:47:05 -- common/autotest_common.sh@10 -- # set +x 00:13:36.106 23:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.106 23:47:05 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:36.106 23:47:05 -- lvol/common.sh@28 -- # jq length 00:13:36.106 23:47:05 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:36.106 00:13:36.106 real 0m11.194s 00:13:36.106 user 0m0.530s 00:13:36.106 sys 0m1.021s 00:13:36.106 23:47:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:36.106 ************************************ 00:13:36.106 END TEST test_create_ublk 00:13:36.106 ************************************ 00:13:36.106 23:47:05 -- common/autotest_common.sh@10 -- # set +x 00:13:36.106 23:47:05 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:36.106 23:47:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:36.106 23:47:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:36.106 23:47:05 -- common/autotest_common.sh@10 -- # set +x 00:13:36.106 ************************************ 00:13:36.106 START TEST test_create_multi_ublk 00:13:36.106 ************************************ 00:13:36.106 23:47:05 -- common/autotest_common.sh@1114 -- # test_create_multi_ublk 00:13:36.106 23:47:05 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:36.106 23:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.106 23:47:05 -- common/autotest_common.sh@10 -- # set +x 00:13:36.106 [2024-12-13 23:47:05.234015] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target 
created successfully 00:13:36.106 23:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.106 23:47:05 -- ublk/ublk.sh@62 -- # ublk_target= 00:13:36.106 23:47:05 -- ublk/ublk.sh@64 -- # seq 0 3 00:13:36.106 23:47:05 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:36.106 23:47:05 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:36.106 23:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.106 23:47:05 -- common/autotest_common.sh@10 -- # set +x 00:13:36.106 23:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.106 23:47:05 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:36.106 23:47:05 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:36.106 23:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.106 23:47:05 -- common/autotest_common.sh@10 -- # set +x 00:13:36.106 [2024-12-13 23:47:05.448592] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:36.106 [2024-12-13 23:47:05.448919] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:36.106 [2024-12-13 23:47:05.448925] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:36.106 [2024-12-13 23:47:05.448931] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:36.106 [2024-12-13 23:47:05.472507] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:36.106 [2024-12-13 23:47:05.472529] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:36.106 [2024-12-13 23:47:05.484499] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:36.106 [2024-12-13 23:47:05.484985] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:36.106 [2024-12-13 23:47:05.520502] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:36.106 23:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.106 23:47:05 -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:36.106 23:47:05 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:36.106 23:47:05 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:36.106 23:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.106 23:47:05 -- common/autotest_common.sh@10 -- # set +x 00:13:36.106 23:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.106 23:47:05 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:36.106 23:47:05 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:36.106 23:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.106 23:47:05 -- common/autotest_common.sh@10 -- # set +x 00:13:36.106 [2024-12-13 23:47:05.748595] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:36.106 [2024-12-13 23:47:05.748894] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:36.106 [2024-12-13 23:47:05.748902] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:36.106 [2024-12-13 23:47:05.748907] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:36.106 [2024-12-13 23:47:05.756512] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:36.106 [2024-12-13 23:47:05.756528] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 
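All four devices in this test come from the same loop over the device IDs; a sketch with the traced sizes and flags (NUM_QUEUE=4, QUEUE_DEPTH=512, MAX_DEV_ID=3 from the setup above):

for i in $(seq 0 "$MAX_DEV_ID"); do
    scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096      # 128 MiB, 4 KiB blocks
    scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512    # exposes /dev/ublkb$i
done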
00:13:36.106 [2024-12-13 23:47:05.764512] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:36.106 [2024-12-13 23:47:05.765000] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:36.106 [2024-12-13 23:47:05.781517] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:36.106 23:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.106 23:47:05 -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:36.106 23:47:05 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:36.106 23:47:05 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:36.106 23:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.106 23:47:05 -- common/autotest_common.sh@10 -- # set +x 00:13:36.106 23:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.106 23:47:05 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:36.106 23:47:05 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:36.106 23:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.106 23:47:05 -- common/autotest_common.sh@10 -- # set +x 00:13:36.106 [2024-12-13 23:47:05.940608] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:36.106 [2024-12-13 23:47:05.940901] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:36.106 [2024-12-13 23:47:05.940912] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:36.106 [2024-12-13 23:47:05.940920] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:36.106 [2024-12-13 23:47:05.948515] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:36.107 [2024-12-13 23:47:05.948533] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:36.107 [2024-12-13 23:47:05.956503] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:36.107 [2024-12-13 23:47:05.957002] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:36.107 [2024-12-13 23:47:05.965538] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:36.107 23:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.107 23:47:05 -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:36.107 23:47:05 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:36.107 23:47:05 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:36.107 23:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.107 23:47:05 -- common/autotest_common.sh@10 -- # set +x 00:13:36.107 23:47:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:36.107 23:47:06 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:36.107 23:47:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.107 23:47:06 -- common/autotest_common.sh@10 -- # set +x 00:13:36.107 [2024-12-13 23:47:06.132597] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:36.107 [2024-12-13 23:47:06.132889] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:36.107 [2024-12-13 23:47:06.132898] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:36.107 [2024-12-13 23:47:06.132902] ublk.c: 
433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:36.107 [2024-12-13 23:47:06.140526] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:36.107 [2024-12-13 23:47:06.140542] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:36.107 [2024-12-13 23:47:06.148504] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:36.107 [2024-12-13 23:47:06.148987] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:36.107 [2024-12-13 23:47:06.152274] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:36.107 23:47:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:36.107 23:47:06 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:36.107 23:47:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.107 23:47:06 -- common/autotest_common.sh@10 -- # set +x 00:13:36.107 23:47:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:36.107 { 00:13:36.107 "ublk_device": "/dev/ublkb0", 00:13:36.107 "id": 0, 00:13:36.107 "queue_depth": 512, 00:13:36.107 "num_queues": 4, 00:13:36.107 "bdev_name": "Malloc0" 00:13:36.107 }, 00:13:36.107 { 00:13:36.107 "ublk_device": "/dev/ublkb1", 00:13:36.107 "id": 1, 00:13:36.107 "queue_depth": 512, 00:13:36.107 "num_queues": 4, 00:13:36.107 "bdev_name": "Malloc1" 00:13:36.107 }, 00:13:36.107 { 00:13:36.107 "ublk_device": "/dev/ublkb2", 00:13:36.107 "id": 2, 00:13:36.107 "queue_depth": 512, 00:13:36.107 "num_queues": 4, 00:13:36.107 "bdev_name": "Malloc2" 00:13:36.107 }, 00:13:36.107 { 00:13:36.107 "ublk_device": "/dev/ublkb3", 00:13:36.107 "id": 3, 00:13:36.107 "queue_depth": 512, 00:13:36.107 "num_queues": 4, 00:13:36.107 "bdev_name": "Malloc3" 00:13:36.107 } 00:13:36.107 ]' 00:13:36.107 23:47:06 -- ublk/ublk.sh@72 -- # seq 0 3 00:13:36.107 23:47:06 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:36.107 23:47:06 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:36.107 23:47:06 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:36.107 23:47:06 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:36.107 23:47:06 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:36.107 23:47:06 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:36.107 23:47:06 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:36.107 23:47:06 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:36.107 23:47:06 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:36.107 23:47:06 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:36.107 23:47:06 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:36.107 23:47:06 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:36.107 23:47:06 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 
]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:36.107 23:47:06 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:36.107 23:47:06 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:36.107 23:47:06 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:36.107 23:47:06 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:36.107 23:47:06 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:36.107 23:47:06 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:36.107 23:47:06 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:36.107 23:47:06 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:36.107 23:47:06 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:36.107 23:47:06 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:36.107 23:47:06 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:36.107 23:47:06 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@85 -- # seq 0 3 00:13:36.107 23:47:06 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:36.107 23:47:06 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:36.107 23:47:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.107 23:47:06 -- common/autotest_common.sh@10 -- # set +x 00:13:36.107 [2024-12-13 23:47:06.770570] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:36.107 [2024-12-13 23:47:06.815544] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:36.107 [2024-12-13 23:47:06.816342] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:36.107 [2024-12-13 23:47:06.816733] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:36.107 [2024-12-13 23:47:06.816982] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:36.107 [2024-12-13 23:47:06.816990] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:36.107 23:47:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.107 23:47:06 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:36.107 23:47:06 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:36.107 23:47:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.107 23:47:06 -- common/autotest_common.sh@10 -- # set +x 00:13:36.107 [2024-12-13 23:47:06.827560] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:36.366 [2024-12-13 23:47:06.875546] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:36.366 [2024-12-13 23:47:06.876283] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:36.366 [2024-12-13 23:47:06.883512] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:36.366 [2024-12-13 23:47:06.883757] ublk.c: 
947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:36.366 [2024-12-13 23:47:06.883768] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:36.366 23:47:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.366 23:47:06 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:36.366 23:47:06 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:36.366 23:47:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.366 23:47:06 -- common/autotest_common.sh@10 -- # set +x 00:13:36.366 [2024-12-13 23:47:06.899550] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:36.366 [2024-12-13 23:47:06.928899] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:36.366 [2024-12-13 23:47:06.930037] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:36.366 [2024-12-13 23:47:06.939508] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:36.366 [2024-12-13 23:47:06.939734] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:36.366 [2024-12-13 23:47:06.939743] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:36.366 23:47:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.366 23:47:06 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:36.366 23:47:06 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:36.366 23:47:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.366 23:47:06 -- common/autotest_common.sh@10 -- # set +x 00:13:36.366 [2024-12-13 23:47:06.955558] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:36.366 [2024-12-13 23:47:06.992993] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:36.366 [2024-12-13 23:47:06.993982] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:36.366 [2024-12-13 23:47:06.999726] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:36.366 [2024-12-13 23:47:06.999948] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:36.366 [2024-12-13 23:47:06.999955] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:36.366 23:47:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.366 23:47:07 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:36.625 [2024-12-13 23:47:07.186567] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:36.625 [2024-12-13 23:47:07.194496] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:36.625 [2024-12-13 23:47:07.194523] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:36.625 23:47:07 -- ublk/ublk.sh@93 -- # seq 0 3 00:13:36.625 23:47:07 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:36.625 23:47:07 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:36.625 23:47:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.625 23:47:07 -- common/autotest_common.sh@10 -- # set +x 00:13:36.883 23:47:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.883 23:47:07 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:36.883 23:47:07 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:36.883 23:47:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.883 23:47:07 -- common/autotest_common.sh@10 -- # set +x 00:13:37.450 23:47:07 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.450 23:47:07 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:37.450 23:47:07 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:37.450 23:47:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.450 23:47:07 -- common/autotest_common.sh@10 -- # set +x 00:13:37.450 23:47:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.450 23:47:08 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:37.450 23:47:08 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:37.450 23:47:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.450 23:47:08 -- common/autotest_common.sh@10 -- # set +x 00:13:37.708 23:47:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.708 23:47:08 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:37.708 23:47:08 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:37.708 23:47:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.708 23:47:08 -- common/autotest_common.sh@10 -- # set +x 00:13:37.708 23:47:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.708 23:47:08 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:37.708 23:47:08 -- lvol/common.sh@26 -- # jq length 00:13:37.708 23:47:08 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:37.708 23:47:08 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:37.708 23:47:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.708 23:47:08 -- common/autotest_common.sh@10 -- # set +x 00:13:37.708 23:47:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.708 23:47:08 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:37.708 23:47:08 -- lvol/common.sh@28 -- # jq length 00:13:37.708 23:47:08 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:37.708 00:13:37.709 real 0m3.158s 00:13:37.709 user 0m0.765s 00:13:37.709 sys 0m0.144s 00:13:37.709 ************************************ 00:13:37.709 END TEST test_create_multi_ublk 00:13:37.709 ************************************ 00:13:37.709 23:47:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:37.709 23:47:08 -- common/autotest_common.sh@10 -- # set +x 00:13:37.709 23:47:08 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:37.709 23:47:08 -- ublk/ublk.sh@147 -- # cleanup 00:13:37.709 23:47:08 -- ublk/ublk.sh@130 -- # killprocess 69260 00:13:37.709 23:47:08 -- common/autotest_common.sh@936 -- # '[' -z 69260 ']' 00:13:37.709 23:47:08 -- common/autotest_common.sh@940 -- # kill -0 69260 00:13:37.709 23:47:08 -- common/autotest_common.sh@941 -- # uname 00:13:37.709 23:47:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:37.709 23:47:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69260 00:13:37.709 23:47:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:37.709 23:47:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:37.709 killing process with pid 69260 00:13:37.709 23:47:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69260' 00:13:37.709 23:47:08 -- common/autotest_common.sh@955 -- # kill 69260 00:13:37.709 23:47:08 -- common/autotest_common.sh@960 -- # wait 69260 00:13:38.276 [2024-12-13 23:47:08.945883] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:38.276 [2024-12-13 23:47:08.945933] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:39.212 00:13:39.212 real 0m24.626s 00:13:39.212 user 0m35.331s 00:13:39.212 sys 0m8.787s 00:13:39.212 ************************************ 
00:13:39.212 END TEST ublk 00:13:39.212 ************************************ 00:13:39.212 23:47:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:39.212 23:47:09 -- common/autotest_common.sh@10 -- # set +x 00:13:39.212 23:47:09 -- spdk/autotest.sh@247 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:39.212 23:47:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:39.212 23:47:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:39.212 23:47:09 -- common/autotest_common.sh@10 -- # set +x 00:13:39.212 ************************************ 00:13:39.212 START TEST ublk_recovery 00:13:39.212 ************************************ 00:13:39.212 23:47:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:39.212 * Looking for test storage... 00:13:39.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:39.212 23:47:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:39.212 23:47:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:39.212 23:47:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:39.212 23:47:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:39.212 23:47:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:39.212 23:47:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:39.212 23:47:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:39.212 23:47:09 -- scripts/common.sh@335 -- # IFS=.-: 00:13:39.212 23:47:09 -- scripts/common.sh@335 -- # read -ra ver1 00:13:39.212 23:47:09 -- scripts/common.sh@336 -- # IFS=.-: 00:13:39.212 23:47:09 -- scripts/common.sh@336 -- # read -ra ver2 00:13:39.212 23:47:09 -- scripts/common.sh@337 -- # local 'op=<' 00:13:39.212 23:47:09 -- scripts/common.sh@339 -- # ver1_l=2 00:13:39.212 23:47:09 -- scripts/common.sh@340 -- # ver2_l=1 00:13:39.212 23:47:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:39.212 23:47:09 -- scripts/common.sh@343 -- # case "$op" in 00:13:39.212 23:47:09 -- scripts/common.sh@344 -- # : 1 00:13:39.212 23:47:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:39.212 23:47:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:39.212 23:47:09 -- scripts/common.sh@364 -- # decimal 1 00:13:39.212 23:47:09 -- scripts/common.sh@352 -- # local d=1 00:13:39.212 23:47:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:39.212 23:47:09 -- scripts/common.sh@354 -- # echo 1 00:13:39.212 23:47:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:39.212 23:47:09 -- scripts/common.sh@365 -- # decimal 2 00:13:39.212 23:47:09 -- scripts/common.sh@352 -- # local d=2 00:13:39.212 23:47:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:39.212 23:47:09 -- scripts/common.sh@354 -- # echo 2 00:13:39.212 23:47:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:39.212 23:47:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:39.212 23:47:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:39.212 23:47:09 -- scripts/common.sh@367 -- # return 0 00:13:39.212 23:47:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:39.212 23:47:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:39.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.212 --rc genhtml_branch_coverage=1 00:13:39.212 --rc genhtml_function_coverage=1 00:13:39.212 --rc genhtml_legend=1 00:13:39.212 --rc geninfo_all_blocks=1 00:13:39.212 --rc geninfo_unexecuted_blocks=1 00:13:39.212 00:13:39.212 ' 00:13:39.212 23:47:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:39.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.212 --rc genhtml_branch_coverage=1 00:13:39.212 --rc genhtml_function_coverage=1 00:13:39.212 --rc genhtml_legend=1 00:13:39.212 --rc geninfo_all_blocks=1 00:13:39.212 --rc geninfo_unexecuted_blocks=1 00:13:39.212 00:13:39.212 ' 00:13:39.212 23:47:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:39.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.212 --rc genhtml_branch_coverage=1 00:13:39.212 --rc genhtml_function_coverage=1 00:13:39.212 --rc genhtml_legend=1 00:13:39.212 --rc geninfo_all_blocks=1 00:13:39.212 --rc geninfo_unexecuted_blocks=1 00:13:39.212 00:13:39.212 ' 00:13:39.212 23:47:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:39.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.212 --rc genhtml_branch_coverage=1 00:13:39.213 --rc genhtml_function_coverage=1 00:13:39.213 --rc genhtml_legend=1 00:13:39.213 --rc geninfo_all_blocks=1 00:13:39.213 --rc geninfo_unexecuted_blocks=1 00:13:39.213 00:13:39.213 ' 00:13:39.213 23:47:09 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:39.213 23:47:09 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:39.213 23:47:09 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:39.213 23:47:09 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:39.213 23:47:09 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:39.213 23:47:09 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:39.213 23:47:09 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:39.213 23:47:09 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:39.213 23:47:09 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:39.213 23:47:09 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:39.213 23:47:09 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=69644 00:13:39.213 23:47:09 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:39.213 23:47:09 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 69644 00:13:39.213 23:47:09 -- 
ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:39.213 23:47:09 -- common/autotest_common.sh@829 -- # '[' -z 69644 ']' 00:13:39.213 23:47:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.213 23:47:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:39.213 23:47:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.213 23:47:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:39.213 23:47:09 -- common/autotest_common.sh@10 -- # set +x 00:13:39.213 [2024-12-13 23:47:09.891405] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:39.213 [2024-12-13 23:47:09.891530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69644 ] 00:13:39.473 [2024-12-13 23:47:10.041059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:39.734 [2024-12-13 23:47:10.237771] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:39.734 [2024-12-13 23:47:10.238245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.734 [2024-12-13 23:47:10.238343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.675 23:47:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:40.675 23:47:11 -- common/autotest_common.sh@862 -- # return 0 00:13:40.675 23:47:11 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:40.675 23:47:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.675 23:47:11 -- common/autotest_common.sh@10 -- # set +x 00:13:40.675 [2024-12-13 23:47:11.386020] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:40.675 23:47:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.675 23:47:11 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:40.675 23:47:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.675 23:47:11 -- common/autotest_common.sh@10 -- # set +x 00:13:40.959 malloc0 00:13:40.959 23:47:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.959 23:47:11 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:40.959 23:47:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.959 23:47:11 -- common/autotest_common.sh@10 -- # set +x 00:13:40.959 [2024-12-13 23:47:11.472607] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:13:40.959 [2024-12-13 23:47:11.472691] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:40.959 [2024-12-13 23:47:11.472698] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:40.959 [2024-12-13 23:47:11.472705] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:40.959 [2024-12-13 23:47:11.481582] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:40.959 [2024-12-13 23:47:11.481602] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:40.959 [2024-12-13 23:47:11.488512] ublk.c: 327:ublk_ctrl_process_cqe: 
*DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:40.959 [2024-12-13 23:47:11.488627] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:40.959 [2024-12-13 23:47:11.504516] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:40.959 1 00:13:40.959 23:47:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.959 23:47:11 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:41.894 23:47:12 -- ublk/ublk_recovery.sh@31 -- # fio_proc=69692 00:13:41.894 23:47:12 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:41.894 23:47:12 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:41.894 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:41.894 fio-3.35 00:13:41.894 Starting 1 process 00:13:47.161 23:47:17 -- ublk/ublk_recovery.sh@36 -- # kill -9 69644 00:13:47.161 23:47:17 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:13:52.448 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 69644 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:13:52.448 23:47:22 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=69803 00:13:52.448 23:47:22 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:52.448 23:47:22 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 69803 00:13:52.448 23:47:22 -- common/autotest_common.sh@829 -- # '[' -z 69803 ']' 00:13:52.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.448 23:47:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.448 23:47:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:52.448 23:47:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.448 23:47:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:52.448 23:47:22 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:52.448 23:47:22 -- common/autotest_common.sh@10 -- # set +x 00:13:52.448 [2024-12-13 23:47:22.582826] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
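The kill/restart sequence above is the heart of the recovery test: fio keeps randrw I/O in flight against /dev/ublkb1 while the SPDK target backing it is killed with SIGKILL and relaunched, and the new process re-attaches the surviving kernel-side ublk device instead of recreating it. A minimal sketch of that flow, using the paths and IDs from this run (waitforlisten and SPDK_BIN_DIR are a helper and a variable from test/common/autotest_common.sh; the authoritative logic is test/ublk/ublk_recovery.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # I/O load against the ublk block device, left running across the crash
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &
    fio_pid=$!

    kill -9 "$spdk_pid"                         # simulate a target crash mid-I/O
    sleep 5
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &   # relaunch the target
    spdk_pid=$!
    waitforlisten "$spdk_pid"

    "$rpc" ublk_create_target
    "$rpc" bdev_malloc_create -b malloc0 64 4096
    "$rpc" ublk_recover_disk malloc0 1          # drives UBLK_CMD_START_USER_RECOVERY
    wait "$fio_pid"                             # fio must finish without I/O errors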
00:13:52.448 [2024-12-13 23:47:22.582916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69803 ] 00:13:52.448 [2024-12-13 23:47:22.725973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:52.448 [2024-12-13 23:47:22.865694] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:52.448 [2024-12-13 23:47:22.866202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.448 [2024-12-13 23:47:22.866252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.707 23:47:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.707 23:47:23 -- common/autotest_common.sh@862 -- # return 0 00:13:52.707 23:47:23 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:13:52.707 23:47:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.707 23:47:23 -- common/autotest_common.sh@10 -- # set +x 00:13:52.707 [2024-12-13 23:47:23.404027] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:52.707 23:47:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.707 23:47:23 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:52.707 23:47:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.707 23:47:23 -- common/autotest_common.sh@10 -- # set +x 00:13:52.965 malloc0 00:13:52.965 23:47:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.965 23:47:23 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:13:52.965 23:47:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.965 23:47:23 -- common/autotest_common.sh@10 -- # set +x 00:13:52.965 [2024-12-13 23:47:23.495595] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:13:52.965 [2024-12-13 23:47:23.495629] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:52.965 [2024-12-13 23:47:23.495636] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:52.965 [2024-12-13 23:47:23.503532] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:52.965 [2024-12-13 23:47:23.503549] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:13:52.965 [2024-12-13 23:47:23.503606] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:13:52.965 1 00:13:52.965 23:47:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.965 23:47:23 -- ublk/ublk_recovery.sh@52 -- # wait 69692 00:14:19.503 [2024-12-13 23:47:47.943515] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:19.503 [2024-12-13 23:47:47.950999] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:19.503 [2024-12-13 23:47:47.957703] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:19.503 [2024-12-13 23:47:47.957727] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:14:46.040 00:14:46.040 fio_test: (groupid=0, jobs=1): err= 0: pid=69695: Fri Dec 13 23:48:12 2024 00:14:46.040 read: IOPS=13.7k, BW=53.5MiB/s (56.1MB/s)(3208MiB/60002msec) 00:14:46.040 slat (nsec): min=1154, max=3282.4k, 
avg=5435.04, stdev=3926.42 00:14:46.040 clat (usec): min=1043, max=30450k, avg=4479.22, stdev=260215.78 00:14:46.040 lat (usec): min=1052, max=30450k, avg=4484.65, stdev=260215.78 00:14:46.040 clat percentiles (usec): 00:14:46.040 | 1.00th=[ 1827], 5.00th=[ 1909], 10.00th=[ 1958], 20.00th=[ 2024], 00:14:46.040 | 30.00th=[ 2073], 40.00th=[ 2114], 50.00th=[ 2114], 60.00th=[ 2147], 00:14:46.040 | 70.00th=[ 2147], 80.00th=[ 2180], 90.00th=[ 2245], 95.00th=[ 3392], 00:14:46.040 | 99.00th=[ 5604], 99.50th=[ 5866], 99.90th=[ 7898], 99.95th=[ 8848], 00:14:46.040 | 99.99th=[13042] 00:14:46.040 bw ( KiB/s): min=31504, max=124696, per=100.00%, avg=109665.29, stdev=16468.77, samples=59 00:14:46.040 iops : min= 7876, max=31174, avg=27416.32, stdev=4117.19, samples=59 00:14:46.040 write: IOPS=13.7k, BW=53.4MiB/s (56.0MB/s)(3205MiB/60002msec); 0 zone resets 00:14:46.040 slat (nsec): min=1212, max=193844, avg=5638.84, stdev=1529.04 00:14:46.040 clat (usec): min=1187, max=30450k, avg=4865.16, stdev=277183.45 00:14:46.040 lat (usec): min=1197, max=30450k, avg=4870.80, stdev=277183.44 00:14:46.040 clat percentiles (usec): 00:14:46.040 | 1.00th=[ 1893], 5.00th=[ 2008], 10.00th=[ 2040], 20.00th=[ 2114], 00:14:46.040 | 30.00th=[ 2180], 40.00th=[ 2212], 50.00th=[ 2212], 60.00th=[ 2245], 00:14:46.040 | 70.00th=[ 2278], 80.00th=[ 2278], 90.00th=[ 2343], 95.00th=[ 3326], 00:14:46.040 | 99.00th=[ 5669], 99.50th=[ 5997], 99.90th=[ 8029], 99.95th=[ 9110], 00:14:46.040 | 99.99th=[13173] 00:14:46.040 bw ( KiB/s): min=31496, max=124496, per=100.00%, avg=109518.64, stdev=16643.09, samples=59 00:14:46.040 iops : min= 7874, max=31124, avg=27379.66, stdev=4160.77, samples=59 00:14:46.040 lat (msec) : 2=10.90%, 4=85.63%, 10=3.43%, 20=0.04%, >=2000=0.01% 00:14:46.040 cpu : usr=3.13%, sys=15.46%, ctx=53784, majf=0, minf=14 00:14:46.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:14:46.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:46.040 issued rwts: total=821347,820383,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:46.040 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:46.040 00:14:46.040 Run status group 0 (all jobs): 00:14:46.040 READ: bw=53.5MiB/s (56.1MB/s), 53.5MiB/s-53.5MiB/s (56.1MB/s-56.1MB/s), io=3208MiB (3364MB), run=60002-60002msec 00:14:46.040 WRITE: bw=53.4MiB/s (56.0MB/s), 53.4MiB/s-53.4MiB/s (56.0MB/s-56.0MB/s), io=3205MiB (3360MB), run=60002-60002msec 00:14:46.040 00:14:46.040 Disk stats (read/write): 00:14:46.040 ublkb1: ios=818370/817407, merge=0/0, ticks=3626504/3868528, in_queue=7495033, util=99.91% 00:14:46.040 23:48:12 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:14:46.040 23:48:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.040 23:48:12 -- common/autotest_common.sh@10 -- # set +x 00:14:46.040 [2024-12-13 23:48:12.769055] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:46.040 [2024-12-13 23:48:12.801619] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:46.040 [2024-12-13 23:48:12.801765] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:46.040 [2024-12-13 23:48:12.811544] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:46.040 [2024-12-13 23:48:12.811639] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:46.040 [2024-12-13 
23:48:12.811648] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:46.040 23:48:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.040 23:48:12 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:14:46.040 23:48:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.040 23:48:12 -- common/autotest_common.sh@10 -- # set +x 00:14:46.040 [2024-12-13 23:48:12.826567] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:46.040 [2024-12-13 23:48:12.834499] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:46.040 [2024-12-13 23:48:12.834528] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:46.040 23:48:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.040 23:48:12 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:14:46.040 23:48:12 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:14:46.040 23:48:12 -- ublk/ublk_recovery.sh@14 -- # killprocess 69803 00:14:46.040 23:48:12 -- common/autotest_common.sh@936 -- # '[' -z 69803 ']' 00:14:46.040 23:48:12 -- common/autotest_common.sh@940 -- # kill -0 69803 00:14:46.040 23:48:12 -- common/autotest_common.sh@941 -- # uname 00:14:46.040 23:48:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:46.040 23:48:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69803 00:14:46.040 23:48:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:46.040 23:48:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:46.040 23:48:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69803' 00:14:46.040 killing process with pid 69803 00:14:46.040 23:48:12 -- common/autotest_common.sh@955 -- # kill 69803 00:14:46.040 23:48:12 -- common/autotest_common.sh@960 -- # wait 69803 00:14:46.040 [2024-12-13 23:48:13.925727] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:46.040 [2024-12-13 23:48:13.925775] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:46.040 00:14:46.040 real 1m5.017s 00:14:46.040 user 1m47.928s 00:14:46.040 sys 0m22.466s 00:14:46.040 23:48:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:46.040 ************************************ 00:14:46.040 23:48:14 -- common/autotest_common.sh@10 -- # set +x 00:14:46.040 END TEST ublk_recovery 00:14:46.040 ************************************ 00:14:46.040 23:48:14 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:14:46.040 23:48:14 -- spdk/autotest.sh@255 -- # timing_exit lib 00:14:46.040 23:48:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:46.040 23:48:14 -- common/autotest_common.sh@10 -- # set +x 00:14:46.040 23:48:14 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:14:46.040 23:48:14 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:14:46.040 23:48:14 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:14:46.040 23:48:14 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:14:46.040 23:48:14 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:14:46.040 23:48:14 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:14:46.040 23:48:14 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:14:46.040 23:48:14 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:14:46.040 23:48:14 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:14:46.040 23:48:14 -- spdk/autotest.sh@329 -- # '[' 1 -eq 1 ']' 00:14:46.040 23:48:14 -- spdk/autotest.sh@330 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:46.040 23:48:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:46.040 23:48:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 
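As a sanity check on the fio summary above: 821,347 reads completed in 60 s is ~13.7k IOPS, matching the reported read figure, and 821,347 blocks of 4 KiB is ~3208 MiB, matching the io= total, so the per-job counts and the ublkb1 disk stats are consistent.

The run of '[' 0 -eq 1 ']' checks above is spdk/autotest.sh walking its SPDK_TEST_* feature flags: every suite is wrapped in a flag test, and only the suite whose flag is 1 (here, presumably SPDK_TEST_FTL) gets its run_test invocation. Schematically (a sketch of the gating pattern; run_test and the script path are taken from this log, the flag name is inferred):

    # Each suite in spdk/autotest.sh is gated on a flag exported by the CI job:
    if [ "${SPDK_TEST_FTL:-0}" -eq 1 ]; then
        run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
    fi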
00:14:46.040 23:48:14 -- common/autotest_common.sh@10 -- # set +x 00:14:46.040 ************************************ 00:14:46.040 START TEST ftl 00:14:46.040 ************************************ 00:14:46.040 23:48:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:46.040 * Looking for test storage... 00:14:46.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:46.040 23:48:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:46.040 23:48:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:46.040 23:48:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:46.040 23:48:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:46.040 23:48:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:46.040 23:48:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:46.040 23:48:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:46.040 23:48:14 -- scripts/common.sh@335 -- # IFS=.-: 00:14:46.040 23:48:14 -- scripts/common.sh@335 -- # read -ra ver1 00:14:46.040 23:48:14 -- scripts/common.sh@336 -- # IFS=.-: 00:14:46.040 23:48:14 -- scripts/common.sh@336 -- # read -ra ver2 00:14:46.040 23:48:14 -- scripts/common.sh@337 -- # local 'op=<' 00:14:46.040 23:48:14 -- scripts/common.sh@339 -- # ver1_l=2 00:14:46.040 23:48:14 -- scripts/common.sh@340 -- # ver2_l=1 00:14:46.040 23:48:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:46.040 23:48:14 -- scripts/common.sh@343 -- # case "$op" in 00:14:46.040 23:48:14 -- scripts/common.sh@344 -- # : 1 00:14:46.040 23:48:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:46.040 23:48:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:46.040 23:48:14 -- scripts/common.sh@364 -- # decimal 1 00:14:46.040 23:48:14 -- scripts/common.sh@352 -- # local d=1 00:14:46.040 23:48:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:46.040 23:48:14 -- scripts/common.sh@354 -- # echo 1 00:14:46.040 23:48:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:46.040 23:48:14 -- scripts/common.sh@365 -- # decimal 2 00:14:46.040 23:48:14 -- scripts/common.sh@352 -- # local d=2 00:14:46.040 23:48:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:46.040 23:48:14 -- scripts/common.sh@354 -- # echo 2 00:14:46.040 23:48:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:46.040 23:48:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:46.040 23:48:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:46.040 23:48:14 -- scripts/common.sh@367 -- # return 0 00:14:46.040 23:48:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:46.040 23:48:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:46.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.040 --rc genhtml_branch_coverage=1 00:14:46.040 --rc genhtml_function_coverage=1 00:14:46.040 --rc genhtml_legend=1 00:14:46.040 --rc geninfo_all_blocks=1 00:14:46.041 --rc geninfo_unexecuted_blocks=1 00:14:46.041 00:14:46.041 ' 00:14:46.041 23:48:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:46.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.041 --rc genhtml_branch_coverage=1 00:14:46.041 --rc genhtml_function_coverage=1 00:14:46.041 --rc genhtml_legend=1 00:14:46.041 --rc geninfo_all_blocks=1 00:14:46.041 --rc geninfo_unexecuted_blocks=1 00:14:46.041 00:14:46.041 ' 00:14:46.041 23:48:14 -- common/autotest_common.sh@1704 -- # export 
'LCOV=lcov 00:14:46.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.041 --rc genhtml_branch_coverage=1 00:14:46.041 --rc genhtml_function_coverage=1 00:14:46.041 --rc genhtml_legend=1 00:14:46.041 --rc geninfo_all_blocks=1 00:14:46.041 --rc geninfo_unexecuted_blocks=1 00:14:46.041 00:14:46.041 ' 00:14:46.041 23:48:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:46.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.041 --rc genhtml_branch_coverage=1 00:14:46.041 --rc genhtml_function_coverage=1 00:14:46.041 --rc genhtml_legend=1 00:14:46.041 --rc geninfo_all_blocks=1 00:14:46.041 --rc geninfo_unexecuted_blocks=1 00:14:46.041 00:14:46.041 ' 00:14:46.041 23:48:14 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:46.041 23:48:14 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:46.041 23:48:14 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:46.041 23:48:14 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:46.041 23:48:14 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:46.041 23:48:14 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:46.041 23:48:14 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:46.041 23:48:14 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:46.041 23:48:14 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:46.041 23:48:14 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:46.041 23:48:14 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:46.041 23:48:14 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:46.041 23:48:14 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:46.041 23:48:14 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:46.041 23:48:14 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:46.041 23:48:14 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:46.041 23:48:14 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:46.041 23:48:14 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:46.041 23:48:14 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:46.041 23:48:14 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:46.041 23:48:14 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:46.041 23:48:14 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:46.041 23:48:14 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:46.041 23:48:14 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:46.041 23:48:14 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:46.041 23:48:14 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:46.041 23:48:14 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:46.041 23:48:14 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:46.041 23:48:14 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:46.041 23:48:14 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:46.041 23:48:14 -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM 
EXIT 00:14:46.041 23:48:14 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:14:46.041 23:48:14 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:14:46.041 23:48:14 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:14:46.041 23:48:14 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:46.041 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:46.041 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:46.041 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:46.041 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:46.041 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:46.041 23:48:15 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=70616 00:14:46.041 23:48:15 -- ftl/ftl.sh@38 -- # waitforlisten 70616 00:14:46.041 23:48:15 -- common/autotest_common.sh@829 -- # '[' -z 70616 ']' 00:14:46.041 23:48:15 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:46.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.041 23:48:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.041 23:48:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:46.041 23:48:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.041 23:48:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:46.041 23:48:15 -- common/autotest_common.sh@10 -- # set +x 00:14:46.041 [2024-12-13 23:48:15.585643] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:46.041 [2024-12-13 23:48:15.585802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70616 ] 00:14:46.041 [2024-12-13 23:48:15.743930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.041 [2024-12-13 23:48:16.015586] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:46.041 [2024-12-13 23:48:16.015850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.041 23:48:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:46.041 23:48:16 -- common/autotest_common.sh@862 -- # return 0 00:14:46.041 23:48:16 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:14:46.041 23:48:16 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:46.982 23:48:17 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:14:46.982 23:48:17 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:47.242 23:48:17 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:14:47.242 23:48:17 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:47.242 23:48:17 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:47.503 23:48:18 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:14:47.503 23:48:18 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:14:47.503 23:48:18 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:14:47.503 23:48:18 -- ftl/ftl.sh@50 -- # break 00:14:47.503 23:48:18 -- ftl/ftl.sh@53 -- 
# '[' -z 0000:00:06.0 ']' 00:14:47.503 23:48:18 -- ftl/ftl.sh@59 -- # base_size=1310720 00:14:47.503 23:48:18 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:47.503 23:48:18 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:47.503 23:48:18 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:14:47.503 23:48:18 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:14:47.503 23:48:18 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:14:47.503 23:48:18 -- ftl/ftl.sh@63 -- # break 00:14:47.503 23:48:18 -- ftl/ftl.sh@66 -- # killprocess 70616 00:14:47.503 23:48:18 -- common/autotest_common.sh@936 -- # '[' -z 70616 ']' 00:14:47.504 23:48:18 -- common/autotest_common.sh@940 -- # kill -0 70616 00:14:47.504 23:48:18 -- common/autotest_common.sh@941 -- # uname 00:14:47.504 23:48:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:47.504 23:48:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70616 00:14:47.504 23:48:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:47.504 killing process with pid 70616 00:14:47.504 23:48:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:47.504 23:48:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70616' 00:14:47.504 23:48:18 -- common/autotest_common.sh@955 -- # kill 70616 00:14:47.504 23:48:18 -- common/autotest_common.sh@960 -- # wait 70616 00:14:48.890 23:48:19 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:14:48.890 23:48:19 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:14:48.890 23:48:19 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:48.890 23:48:19 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:48.890 23:48:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:48.890 23:48:19 -- common/autotest_common.sh@10 -- # set +x 00:14:48.890 ************************************ 00:14:48.890 START TEST ftl_fio_basic 00:14:48.890 ************************************ 00:14:48.890 23:48:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:48.890 * Looking for test storage... 
00:14:48.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:48.890 23:48:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:48.890 23:48:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:48.890 23:48:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:49.152 23:48:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:49.152 23:48:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:49.152 23:48:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:49.152 23:48:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:49.152 23:48:19 -- scripts/common.sh@335 -- # IFS=.-: 00:14:49.152 23:48:19 -- scripts/common.sh@335 -- # read -ra ver1 00:14:49.152 23:48:19 -- scripts/common.sh@336 -- # IFS=.-: 00:14:49.152 23:48:19 -- scripts/common.sh@336 -- # read -ra ver2 00:14:49.152 23:48:19 -- scripts/common.sh@337 -- # local 'op=<' 00:14:49.152 23:48:19 -- scripts/common.sh@339 -- # ver1_l=2 00:14:49.152 23:48:19 -- scripts/common.sh@340 -- # ver2_l=1 00:14:49.152 23:48:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:49.152 23:48:19 -- scripts/common.sh@343 -- # case "$op" in 00:14:49.152 23:48:19 -- scripts/common.sh@344 -- # : 1 00:14:49.152 23:48:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:49.152 23:48:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:49.152 23:48:19 -- scripts/common.sh@364 -- # decimal 1 00:14:49.152 23:48:19 -- scripts/common.sh@352 -- # local d=1 00:14:49.152 23:48:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:49.152 23:48:19 -- scripts/common.sh@354 -- # echo 1 00:14:49.152 23:48:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:49.152 23:48:19 -- scripts/common.sh@365 -- # decimal 2 00:14:49.152 23:48:19 -- scripts/common.sh@352 -- # local d=2 00:14:49.152 23:48:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:49.152 23:48:19 -- scripts/common.sh@354 -- # echo 2 00:14:49.152 23:48:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:49.152 23:48:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:49.152 23:48:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:49.152 23:48:19 -- scripts/common.sh@367 -- # return 0 00:14:49.152 23:48:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:49.152 23:48:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:49.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.152 --rc genhtml_branch_coverage=1 00:14:49.152 --rc genhtml_function_coverage=1 00:14:49.152 --rc genhtml_legend=1 00:14:49.152 --rc geninfo_all_blocks=1 00:14:49.152 --rc geninfo_unexecuted_blocks=1 00:14:49.152 00:14:49.152 ' 00:14:49.152 23:48:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:49.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.152 --rc genhtml_branch_coverage=1 00:14:49.152 --rc genhtml_function_coverage=1 00:14:49.152 --rc genhtml_legend=1 00:14:49.152 --rc geninfo_all_blocks=1 00:14:49.152 --rc geninfo_unexecuted_blocks=1 00:14:49.152 00:14:49.152 ' 00:14:49.152 23:48:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:49.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.152 --rc genhtml_branch_coverage=1 00:14:49.152 --rc genhtml_function_coverage=1 00:14:49.152 --rc genhtml_legend=1 00:14:49.152 --rc geninfo_all_blocks=1 00:14:49.152 --rc geninfo_unexecuted_blocks=1 00:14:49.152 00:14:49.152 ' 00:14:49.152 23:48:19 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:49.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.152 --rc genhtml_branch_coverage=1 00:14:49.152 --rc genhtml_function_coverage=1 00:14:49.152 --rc genhtml_legend=1 00:14:49.152 --rc geninfo_all_blocks=1 00:14:49.152 --rc geninfo_unexecuted_blocks=1 00:14:49.152 00:14:49.152 ' 00:14:49.152 23:48:19 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:49.152 23:48:19 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:14:49.152 23:48:19 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:49.152 23:48:19 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:49.152 23:48:19 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:49.152 23:48:19 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:49.152 23:48:19 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:49.152 23:48:19 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:49.152 23:48:19 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:49.152 23:48:19 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:49.152 23:48:19 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:49.152 23:48:19 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:49.152 23:48:19 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:49.152 23:48:19 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:49.152 23:48:19 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:49.152 23:48:19 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:49.152 23:48:19 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:49.152 23:48:19 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:49.152 23:48:19 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:49.152 23:48:19 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:49.152 23:48:19 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:49.152 23:48:19 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:49.152 23:48:19 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:49.152 23:48:19 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:49.152 23:48:19 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:49.152 23:48:19 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:49.152 23:48:19 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:49.152 23:48:19 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:49.152 23:48:19 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:49.152 23:48:19 -- ftl/fio.sh@11 -- # declare -A suite 00:14:49.152 23:48:19 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:49.152 23:48:19 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:14:49.152 23:48:19 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:14:49.152 23:48:19 -- ftl/fio.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:49.152 23:48:19 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:14:49.152 23:48:19 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:14:49.152 23:48:19 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:49.152 23:48:19 -- ftl/fio.sh@26 -- # uuid= 00:14:49.152 23:48:19 -- ftl/fio.sh@27 -- # timeout=240 00:14:49.152 23:48:19 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:14:49.152 23:48:19 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:14:49.152 23:48:19 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:14:49.152 23:48:19 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:14:49.152 23:48:19 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:49.152 23:48:19 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:49.152 23:48:19 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:14:49.152 23:48:19 -- ftl/fio.sh@45 -- # svcpid=70747 00:14:49.152 23:48:19 -- ftl/fio.sh@46 -- # waitforlisten 70747 00:14:49.152 23:48:19 -- common/autotest_common.sh@829 -- # '[' -z 70747 ']' 00:14:49.152 23:48:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.152 23:48:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:49.152 23:48:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.152 23:48:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:49.152 23:48:19 -- common/autotest_common.sh@10 -- # set +x 00:14:49.152 23:48:19 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:14:49.152 [2024-12-13 23:48:19.752392] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
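fio.sh keys its workloads off the suite map declared above: the 'basic' entry expands to three job names, each meant to run against the FTL bdev named by FTL_BDEV_NAME on the stack described by FTL_JSON_CONF. Roughly (a sketch only; the per-test job files under test/ftl/config/fio/ and the fio_bdev wrapper are assumptions about fio.sh's internals, not lifted from this log):

    tests='randw-verify randw-verify-j2 randw-verify-depth128'   # suite[basic]
    export FTL_BDEV_NAME=ftl0
    export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

    for t in $tests; do
        # assumed layout: one .fio job file per test name
        fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/"$t".fio
    done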
00:14:49.152 [2024-12-13 23:48:19.752521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70747 ] 00:14:49.414 [2024-12-13 23:48:19.900337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:49.414 [2024-12-13 23:48:20.073880] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:49.414 [2024-12-13 23:48:20.074287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.414 [2024-12-13 23:48:20.074726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.414 [2024-12-13 23:48:20.074749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.882 23:48:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:50.883 23:48:21 -- common/autotest_common.sh@862 -- # return 0 00:14:50.883 23:48:21 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:14:50.883 23:48:21 -- ftl/common.sh@54 -- # local name=nvme0 00:14:50.883 23:48:21 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:14:50.883 23:48:21 -- ftl/common.sh@56 -- # local size=103424 00:14:50.883 23:48:21 -- ftl/common.sh@59 -- # local base_bdev 00:14:50.883 23:48:21 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:14:50.883 23:48:21 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:50.883 23:48:21 -- ftl/common.sh@62 -- # local base_size 00:14:50.883 23:48:21 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:50.883 23:48:21 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:14:50.883 23:48:21 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:50.883 23:48:21 -- common/autotest_common.sh@1369 -- # local bs 00:14:50.883 23:48:21 -- common/autotest_common.sh@1370 -- # local nb 00:14:50.883 23:48:21 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:14:51.141 23:48:21 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:51.141 { 00:14:51.141 "name": "nvme0n1", 00:14:51.141 "aliases": [ 00:14:51.141 "b07c02aa-6f18-4023-a75a-f9d935801b83" 00:14:51.141 ], 00:14:51.141 "product_name": "NVMe disk", 00:14:51.141 "block_size": 4096, 00:14:51.141 "num_blocks": 1310720, 00:14:51.141 "uuid": "b07c02aa-6f18-4023-a75a-f9d935801b83", 00:14:51.141 "assigned_rate_limits": { 00:14:51.141 "rw_ios_per_sec": 0, 00:14:51.141 "rw_mbytes_per_sec": 0, 00:14:51.141 "r_mbytes_per_sec": 0, 00:14:51.141 "w_mbytes_per_sec": 0 00:14:51.141 }, 00:14:51.141 "claimed": false, 00:14:51.141 "zoned": false, 00:14:51.141 "supported_io_types": { 00:14:51.141 "read": true, 00:14:51.141 "write": true, 00:14:51.141 "unmap": true, 00:14:51.141 "write_zeroes": true, 00:14:51.141 "flush": true, 00:14:51.141 "reset": true, 00:14:51.141 "compare": true, 00:14:51.141 "compare_and_write": false, 00:14:51.141 "abort": true, 00:14:51.141 "nvme_admin": true, 00:14:51.141 "nvme_io": true 00:14:51.141 }, 00:14:51.141 "driver_specific": { 00:14:51.141 "nvme": [ 00:14:51.141 { 00:14:51.141 "pci_address": "0000:00:07.0", 00:14:51.141 "trid": { 00:14:51.141 "trtype": "PCIe", 00:14:51.141 "traddr": "0000:00:07.0" 00:14:51.141 }, 00:14:51.141 "ctrlr_data": { 00:14:51.141 "cntlid": 0, 00:14:51.141 "vendor_id": "0x1b36", 00:14:51.141 "model_number": "QEMU NVMe Ctrl", 00:14:51.141 "serial_number": 
"12341", 00:14:51.141 "firmware_revision": "8.0.0", 00:14:51.141 "subnqn": "nqn.2019-08.org.qemu:12341", 00:14:51.141 "oacs": { 00:14:51.141 "security": 0, 00:14:51.141 "format": 1, 00:14:51.141 "firmware": 0, 00:14:51.141 "ns_manage": 1 00:14:51.141 }, 00:14:51.141 "multi_ctrlr": false, 00:14:51.141 "ana_reporting": false 00:14:51.141 }, 00:14:51.141 "vs": { 00:14:51.141 "nvme_version": "1.4" 00:14:51.141 }, 00:14:51.141 "ns_data": { 00:14:51.141 "id": 1, 00:14:51.141 "can_share": false 00:14:51.141 } 00:14:51.141 } 00:14:51.141 ], 00:14:51.141 "mp_policy": "active_passive" 00:14:51.141 } 00:14:51.141 } 00:14:51.141 ]' 00:14:51.141 23:48:21 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:51.141 23:48:21 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:51.141 23:48:21 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:51.141 23:48:21 -- common/autotest_common.sh@1373 -- # nb=1310720 00:14:51.141 23:48:21 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:14:51.141 23:48:21 -- common/autotest_common.sh@1377 -- # echo 5120 00:14:51.141 23:48:21 -- ftl/common.sh@63 -- # base_size=5120 00:14:51.141 23:48:21 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:14:51.141 23:48:21 -- ftl/common.sh@67 -- # clear_lvols 00:14:51.141 23:48:21 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:51.141 23:48:21 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:51.399 23:48:21 -- ftl/common.sh@28 -- # stores= 00:14:51.399 23:48:21 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:51.658 23:48:22 -- ftl/common.sh@68 -- # lvs=04857767-d753-439c-9ab1-e3bf8bf25b28 00:14:51.658 23:48:22 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 04857767-d753-439c-9ab1-e3bf8bf25b28 00:14:51.658 23:48:22 -- ftl/fio.sh@48 -- # split_bdev=8bf24825-f6bf-4bd9-8343-2f1b3973470d 00:14:51.658 23:48:22 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 8bf24825-f6bf-4bd9-8343-2f1b3973470d 00:14:51.658 23:48:22 -- ftl/common.sh@35 -- # local name=nvc0 00:14:51.658 23:48:22 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:14:51.658 23:48:22 -- ftl/common.sh@37 -- # local base_bdev=8bf24825-f6bf-4bd9-8343-2f1b3973470d 00:14:51.658 23:48:22 -- ftl/common.sh@38 -- # local cache_size= 00:14:51.658 23:48:22 -- ftl/common.sh@41 -- # get_bdev_size 8bf24825-f6bf-4bd9-8343-2f1b3973470d 00:14:51.658 23:48:22 -- common/autotest_common.sh@1367 -- # local bdev_name=8bf24825-f6bf-4bd9-8343-2f1b3973470d 00:14:51.658 23:48:22 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:51.658 23:48:22 -- common/autotest_common.sh@1369 -- # local bs 00:14:51.658 23:48:22 -- common/autotest_common.sh@1370 -- # local nb 00:14:51.658 23:48:22 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8bf24825-f6bf-4bd9-8343-2f1b3973470d 00:14:51.916 23:48:22 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:51.916 { 00:14:51.916 "name": "8bf24825-f6bf-4bd9-8343-2f1b3973470d", 00:14:51.916 "aliases": [ 00:14:51.916 "lvs/nvme0n1p0" 00:14:51.916 ], 00:14:51.916 "product_name": "Logical Volume", 00:14:51.916 "block_size": 4096, 00:14:51.916 "num_blocks": 26476544, 00:14:51.916 "uuid": "8bf24825-f6bf-4bd9-8343-2f1b3973470d", 00:14:51.916 "assigned_rate_limits": { 00:14:51.916 "rw_ios_per_sec": 0, 00:14:51.916 "rw_mbytes_per_sec": 0, 00:14:51.916 "r_mbytes_per_sec": 0, 00:14:51.916 
"w_mbytes_per_sec": 0 00:14:51.916 }, 00:14:51.916 "claimed": false, 00:14:51.916 "zoned": false, 00:14:51.916 "supported_io_types": { 00:14:51.916 "read": true, 00:14:51.916 "write": true, 00:14:51.916 "unmap": true, 00:14:51.916 "write_zeroes": true, 00:14:51.916 "flush": false, 00:14:51.916 "reset": true, 00:14:51.916 "compare": false, 00:14:51.916 "compare_and_write": false, 00:14:51.916 "abort": false, 00:14:51.916 "nvme_admin": false, 00:14:51.916 "nvme_io": false 00:14:51.916 }, 00:14:51.916 "driver_specific": { 00:14:51.916 "lvol": { 00:14:51.916 "lvol_store_uuid": "04857767-d753-439c-9ab1-e3bf8bf25b28", 00:14:51.916 "base_bdev": "nvme0n1", 00:14:51.916 "thin_provision": true, 00:14:51.916 "snapshot": false, 00:14:51.916 "clone": false, 00:14:51.916 "esnap_clone": false 00:14:51.916 } 00:14:51.916 } 00:14:51.916 } 00:14:51.916 ]' 00:14:51.916 23:48:22 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:51.916 23:48:22 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:51.916 23:48:22 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:51.916 23:48:22 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:51.916 23:48:22 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:51.916 23:48:22 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:51.916 23:48:22 -- ftl/common.sh@41 -- # local base_size=5171 00:14:51.916 23:48:22 -- ftl/common.sh@44 -- # local nvc_bdev 00:14:51.916 23:48:22 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:14:52.174 23:48:22 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:14:52.174 23:48:22 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:14:52.174 23:48:22 -- ftl/common.sh@48 -- # get_bdev_size 8bf24825-f6bf-4bd9-8343-2f1b3973470d 00:14:52.174 23:48:22 -- common/autotest_common.sh@1367 -- # local bdev_name=8bf24825-f6bf-4bd9-8343-2f1b3973470d 00:14:52.174 23:48:22 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:52.174 23:48:22 -- common/autotest_common.sh@1369 -- # local bs 00:14:52.174 23:48:22 -- common/autotest_common.sh@1370 -- # local nb 00:14:52.174 23:48:22 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8bf24825-f6bf-4bd9-8343-2f1b3973470d 00:14:52.431 23:48:23 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:52.431 { 00:14:52.431 "name": "8bf24825-f6bf-4bd9-8343-2f1b3973470d", 00:14:52.431 "aliases": [ 00:14:52.431 "lvs/nvme0n1p0" 00:14:52.431 ], 00:14:52.431 "product_name": "Logical Volume", 00:14:52.431 "block_size": 4096, 00:14:52.431 "num_blocks": 26476544, 00:14:52.431 "uuid": "8bf24825-f6bf-4bd9-8343-2f1b3973470d", 00:14:52.431 "assigned_rate_limits": { 00:14:52.431 "rw_ios_per_sec": 0, 00:14:52.431 "rw_mbytes_per_sec": 0, 00:14:52.431 "r_mbytes_per_sec": 0, 00:14:52.431 "w_mbytes_per_sec": 0 00:14:52.431 }, 00:14:52.431 "claimed": false, 00:14:52.431 "zoned": false, 00:14:52.431 "supported_io_types": { 00:14:52.431 "read": true, 00:14:52.431 "write": true, 00:14:52.431 "unmap": true, 00:14:52.431 "write_zeroes": true, 00:14:52.431 "flush": false, 00:14:52.431 "reset": true, 00:14:52.431 "compare": false, 00:14:52.432 "compare_and_write": false, 00:14:52.432 "abort": false, 00:14:52.432 "nvme_admin": false, 00:14:52.432 "nvme_io": false 00:14:52.432 }, 00:14:52.432 "driver_specific": { 00:14:52.432 "lvol": { 00:14:52.432 "lvol_store_uuid": "04857767-d753-439c-9ab1-e3bf8bf25b28", 00:14:52.432 "base_bdev": "nvme0n1", 00:14:52.432 "thin_provision": true, 
00:14:52.432 "snapshot": false, 00:14:52.432 "clone": false, 00:14:52.432 "esnap_clone": false 00:14:52.432 } 00:14:52.432 } 00:14:52.432 } 00:14:52.432 ]' 00:14:52.432 23:48:23 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:52.432 23:48:23 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:52.432 23:48:23 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:52.432 23:48:23 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:52.432 23:48:23 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:52.432 23:48:23 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:52.432 23:48:23 -- ftl/common.sh@48 -- # cache_size=5171 00:14:52.432 23:48:23 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:14:52.689 23:48:23 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:14:52.689 23:48:23 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:14:52.689 23:48:23 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:14:52.689 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:14:52.689 23:48:23 -- ftl/fio.sh@56 -- # get_bdev_size 8bf24825-f6bf-4bd9-8343-2f1b3973470d 00:14:52.689 23:48:23 -- common/autotest_common.sh@1367 -- # local bdev_name=8bf24825-f6bf-4bd9-8343-2f1b3973470d 00:14:52.689 23:48:23 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:52.689 23:48:23 -- common/autotest_common.sh@1369 -- # local bs 00:14:52.689 23:48:23 -- common/autotest_common.sh@1370 -- # local nb 00:14:52.689 23:48:23 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8bf24825-f6bf-4bd9-8343-2f1b3973470d 00:14:52.947 23:48:23 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:52.947 { 00:14:52.947 "name": "8bf24825-f6bf-4bd9-8343-2f1b3973470d", 00:14:52.947 "aliases": [ 00:14:52.947 "lvs/nvme0n1p0" 00:14:52.947 ], 00:14:52.947 "product_name": "Logical Volume", 00:14:52.947 "block_size": 4096, 00:14:52.947 "num_blocks": 26476544, 00:14:52.947 "uuid": "8bf24825-f6bf-4bd9-8343-2f1b3973470d", 00:14:52.947 "assigned_rate_limits": { 00:14:52.947 "rw_ios_per_sec": 0, 00:14:52.947 "rw_mbytes_per_sec": 0, 00:14:52.947 "r_mbytes_per_sec": 0, 00:14:52.947 "w_mbytes_per_sec": 0 00:14:52.947 }, 00:14:52.947 "claimed": false, 00:14:52.947 "zoned": false, 00:14:52.947 "supported_io_types": { 00:14:52.947 "read": true, 00:14:52.947 "write": true, 00:14:52.947 "unmap": true, 00:14:52.947 "write_zeroes": true, 00:14:52.947 "flush": false, 00:14:52.947 "reset": true, 00:14:52.947 "compare": false, 00:14:52.947 "compare_and_write": false, 00:14:52.947 "abort": false, 00:14:52.947 "nvme_admin": false, 00:14:52.947 "nvme_io": false 00:14:52.947 }, 00:14:52.947 "driver_specific": { 00:14:52.947 "lvol": { 00:14:52.947 "lvol_store_uuid": "04857767-d753-439c-9ab1-e3bf8bf25b28", 00:14:52.947 "base_bdev": "nvme0n1", 00:14:52.947 "thin_provision": true, 00:14:52.947 "snapshot": false, 00:14:52.947 "clone": false, 00:14:52.947 "esnap_clone": false 00:14:52.947 } 00:14:52.947 } 00:14:52.947 } 00:14:52.947 ]' 00:14:52.947 23:48:23 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:52.947 23:48:23 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:52.947 23:48:23 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:52.947 23:48:23 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:52.947 23:48:23 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:52.947 23:48:23 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:52.947 
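(Editor's note, not part of the captured trace: the shell-trace entries above and the bdev_ftl_create call that follows amount to a setup recipe for the FTL device under test. The sketch below condenses that RPC sequence as it appears in this run; the sizes, PCI addresses, and UUID variables are taken from this particular log and are illustrative only, not a definitive recipe.)

```bash
# Condensed sketch of the RPC sequence visible in this trace (values from this run).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base NVMe device that will hold the FTL data region.
$RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0

# Carve a thin-provisioned logical volume out of it (103424 MiB in this run).
lvs_uuid=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs)
lvol_uuid=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid")

# Second NVMe device used as the non-volatile write-buffer cache;
# split off a 5171 MiB partition (nvc0n1p0) for that purpose.
$RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0
$RPC bdev_split_create nvc0n1 -s 5171 1

# Finally create the FTL bdev on top of the lvol, with nvc0n1p0 as NV cache
# (this is the call traced immediately below).
$RPC -t 240 bdev_ftl_create -b ftl0 -d "$lvol_uuid" -c nvc0n1p0 --l2p_dram_limit 60
```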
23:48:23 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:14:52.947 23:48:23 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:14:52.947 23:48:23 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8bf24825-f6bf-4bd9-8343-2f1b3973470d -c nvc0n1p0 --l2p_dram_limit 60 00:14:53.206 [2024-12-13 23:48:23.678632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.206 [2024-12-13 23:48:23.678930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:53.206 [2024-12-13 23:48:23.678960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:14:53.206 [2024-12-13 23:48:23.678968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.206 [2024-12-13 23:48:23.679048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.206 [2024-12-13 23:48:23.679056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:53.206 [2024-12-13 23:48:23.679065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:14:53.206 [2024-12-13 23:48:23.679071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.206 [2024-12-13 23:48:23.679098] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:53.206 [2024-12-13 23:48:23.679739] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:53.206 [2024-12-13 23:48:23.679763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.206 [2024-12-13 23:48:23.679770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:53.206 [2024-12-13 23:48:23.679778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:14:53.206 [2024-12-13 23:48:23.679784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.206 [2024-12-13 23:48:23.679844] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a51b54a8-b7bb-423a-825c-ff6abeed776b 00:14:53.206 [2024-12-13 23:48:23.680888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.206 [2024-12-13 23:48:23.680919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:14:53.206 [2024-12-13 23:48:23.680927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:14:53.206 [2024-12-13 23:48:23.680935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.206 [2024-12-13 23:48:23.686219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.206 [2024-12-13 23:48:23.686250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:53.206 [2024-12-13 23:48:23.686258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.227 ms 00:14:53.206 [2024-12-13 23:48:23.686265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.206 [2024-12-13 23:48:23.686336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.206 [2024-12-13 23:48:23.686345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:53.206 [2024-12-13 23:48:23.686352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:14:53.206 [2024-12-13 23:48:23.686360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.206 [2024-12-13 23:48:23.686405] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:14:53.206 [2024-12-13 23:48:23.686414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:14:53.206 [2024-12-13 23:48:23.686420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:14:53.206 [2024-12-13 23:48:23.686429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.206 [2024-12-13 23:48:23.686458] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:14:53.206 [2024-12-13 23:48:23.689501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.206 [2024-12-13 23:48:23.689523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:53.206 [2024-12-13 23:48:23.689532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.048 ms 00:14:53.206 [2024-12-13 23:48:23.689538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.206 [2024-12-13 23:48:23.689573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.206 [2024-12-13 23:48:23.689579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:14:53.207 [2024-12-13 23:48:23.689588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:14:53.207 [2024-12-13 23:48:23.689594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.207 [2024-12-13 23:48:23.689620] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:14:53.207 [2024-12-13 23:48:23.689718] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:14:53.207 [2024-12-13 23:48:23.689738] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:14:53.207 [2024-12-13 23:48:23.689746] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:14:53.207 [2024-12-13 23:48:23.689757] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:14:53.207 [2024-12-13 23:48:23.689764] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:14:53.207 [2024-12-13 23:48:23.689773] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:14:53.207 [2024-12-13 23:48:23.689778] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:14:53.207 [2024-12-13 23:48:23.689787] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:14:53.207 [2024-12-13 23:48:23.689793] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:14:53.207 [2024-12-13 23:48:23.689802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.207 [2024-12-13 23:48:23.689808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:14:53.207 [2024-12-13 23:48:23.689815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:14:53.207 [2024-12-13 23:48:23.689821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.207 [2024-12-13 23:48:23.689877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.207 [2024-12-13 23:48:23.689883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:14:53.207 [2024-12-13 23:48:23.689891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.035 ms 00:14:53.207 [2024-12-13 23:48:23.689897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.207 [2024-12-13 23:48:23.689981] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:14:53.207 [2024-12-13 23:48:23.689989] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:14:53.207 [2024-12-13 23:48:23.689997] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:53.207 [2024-12-13 23:48:23.690004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:53.207 [2024-12-13 23:48:23.690011] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:14:53.207 [2024-12-13 23:48:23.690017] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:14:53.207 [2024-12-13 23:48:23.690023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:14:53.207 [2024-12-13 23:48:23.690029] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:14:53.207 [2024-12-13 23:48:23.690035] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:14:53.207 [2024-12-13 23:48:23.690040] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:53.207 [2024-12-13 23:48:23.690048] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:14:53.207 [2024-12-13 23:48:23.690053] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:14:53.207 [2024-12-13 23:48:23.690060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:53.207 [2024-12-13 23:48:23.690065] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:14:53.207 [2024-12-13 23:48:23.690077] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:14:53.207 [2024-12-13 23:48:23.690082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:53.207 [2024-12-13 23:48:23.690090] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:14:53.207 [2024-12-13 23:48:23.690096] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:14:53.207 [2024-12-13 23:48:23.690102] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:53.207 [2024-12-13 23:48:23.690107] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:14:53.207 [2024-12-13 23:48:23.690114] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:14:53.207 [2024-12-13 23:48:23.690119] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:14:53.207 [2024-12-13 23:48:23.690126] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:14:53.207 [2024-12-13 23:48:23.690131] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:14:53.207 [2024-12-13 23:48:23.690137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:53.207 [2024-12-13 23:48:23.690141] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:14:53.207 [2024-12-13 23:48:23.690147] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:14:53.207 [2024-12-13 23:48:23.690152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:53.207 [2024-12-13 23:48:23.690158] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:14:53.207 [2024-12-13 23:48:23.690164] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:14:53.207 [2024-12-13 23:48:23.690170] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:53.207 [2024-12-13 23:48:23.690175] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:14:53.207 [2024-12-13 23:48:23.690182] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:14:53.207 [2024-12-13 23:48:23.690199] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:53.207 [2024-12-13 23:48:23.690206] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:14:53.207 [2024-12-13 23:48:23.690211] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:14:53.207 [2024-12-13 23:48:23.690218] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:53.207 [2024-12-13 23:48:23.690223] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:14:53.207 [2024-12-13 23:48:23.690229] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:14:53.207 [2024-12-13 23:48:23.690234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:53.207 [2024-12-13 23:48:23.690240] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:14:53.207 [2024-12-13 23:48:23.690245] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:14:53.207 [2024-12-13 23:48:23.690252] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:53.207 [2024-12-13 23:48:23.690257] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:53.207 [2024-12-13 23:48:23.690264] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:14:53.207 [2024-12-13 23:48:23.690270] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:14:53.207 [2024-12-13 23:48:23.690278] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:14:53.207 [2024-12-13 23:48:23.690284] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:14:53.207 [2024-12-13 23:48:23.690291] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:14:53.207 [2024-12-13 23:48:23.690297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:14:53.207 [2024-12-13 23:48:23.690305] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:14:53.207 [2024-12-13 23:48:23.690312] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:53.207 [2024-12-13 23:48:23.690322] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:14:53.207 [2024-12-13 23:48:23.690327] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:14:53.207 [2024-12-13 23:48:23.690336] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:14:53.207 [2024-12-13 23:48:23.690342] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:14:53.207 [2024-12-13 23:48:23.690348] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:14:53.207 [2024-12-13 23:48:23.690354] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:14:53.207 
[2024-12-13 23:48:23.690360] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:14:53.207 [2024-12-13 23:48:23.690365] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:14:53.207 [2024-12-13 23:48:23.690373] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:14:53.207 [2024-12-13 23:48:23.690378] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:14:53.207 [2024-12-13 23:48:23.690385] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:14:53.207 [2024-12-13 23:48:23.690391] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:14:53.207 [2024-12-13 23:48:23.690400] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:14:53.207 [2024-12-13 23:48:23.690405] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:14:53.207 [2024-12-13 23:48:23.690413] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:53.207 [2024-12-13 23:48:23.690421] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:14:53.207 [2024-12-13 23:48:23.690428] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:14:53.207 [2024-12-13 23:48:23.690433] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:14:53.207 [2024-12-13 23:48:23.690440] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:14:53.207 [2024-12-13 23:48:23.690447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.208 [2024-12-13 23:48:23.690454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:14:53.208 [2024-12-13 23:48:23.690460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:14:53.208 [2024-12-13 23:48:23.690466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.208 [2024-12-13 23:48:23.702964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.208 [2024-12-13 23:48:23.702997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:53.208 [2024-12-13 23:48:23.703007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.407 ms 00:14:53.208 [2024-12-13 23:48:23.703014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.208 [2024-12-13 23:48:23.703095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.208 [2024-12-13 23:48:23.703104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:53.208 [2024-12-13 23:48:23.703110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:14:53.208 [2024-12-13 23:48:23.703117] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.208 [2024-12-13 23:48:23.728972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.208 [2024-12-13 23:48:23.729001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:53.208 [2024-12-13 23:48:23.729010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.810 ms 00:14:53.208 [2024-12-13 23:48:23.729019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.208 [2024-12-13 23:48:23.729049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.208 [2024-12-13 23:48:23.729058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:53.208 [2024-12-13 23:48:23.729066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:14:53.208 [2024-12-13 23:48:23.729073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.208 [2024-12-13 23:48:23.729399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.208 [2024-12-13 23:48:23.729427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:53.208 [2024-12-13 23:48:23.729436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:14:53.208 [2024-12-13 23:48:23.729443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.208 [2024-12-13 23:48:23.729555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.208 [2024-12-13 23:48:23.729566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:53.208 [2024-12-13 23:48:23.729572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:14:53.208 [2024-12-13 23:48:23.729579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.208 [2024-12-13 23:48:23.753460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.208 [2024-12-13 23:48:23.753525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:53.208 [2024-12-13 23:48:23.753543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.856 ms 00:14:53.208 [2024-12-13 23:48:23.753558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.208 [2024-12-13 23:48:23.763400] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:14:53.208 [2024-12-13 23:48:23.776861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.208 [2024-12-13 23:48:23.777018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:53.208 [2024-12-13 23:48:23.777034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.153 ms 00:14:53.208 [2024-12-13 23:48:23.777041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.208 [2024-12-13 23:48:23.827881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.208 [2024-12-13 23:48:23.827993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:14:53.208 [2024-12-13 23:48:23.828011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.803 ms 00:14:53.208 [2024-12-13 23:48:23.828017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.208 [2024-12-13 23:48:23.828058] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
00:14:53.208 [2024-12-13 23:48:23.828067] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:14:56.488 [2024-12-13 23:48:26.720434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:56.488 [2024-12-13 23:48:26.720500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:14:56.488 [2024-12-13 23:48:26.720518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2892.361 ms 00:14:56.488 [2024-12-13 23:48:26.720527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:56.488 [2024-12-13 23:48:26.720719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:56.488 [2024-12-13 23:48:26.720732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:56.488 [2024-12-13 23:48:26.720744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:14:56.488 [2024-12-13 23:48:26.720754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:56.488 [2024-12-13 23:48:26.744065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:56.488 [2024-12-13 23:48:26.744097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:14:56.488 [2024-12-13 23:48:26.744110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.251 ms 00:14:56.488 [2024-12-13 23:48:26.744118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:56.488 [2024-12-13 23:48:26.766838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:56.488 [2024-12-13 23:48:26.766866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:14:56.488 [2024-12-13 23:48:26.766881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.673 ms 00:14:56.488 [2024-12-13 23:48:26.766888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:56.488 [2024-12-13 23:48:26.767202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:56.488 [2024-12-13 23:48:26.767213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:14:56.488 [2024-12-13 23:48:26.767223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:14:56.488 [2024-12-13 23:48:26.767230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:56.489 [2024-12-13 23:48:26.826217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:56.489 [2024-12-13 23:48:26.826387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:14:56.489 [2024-12-13 23:48:26.826409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.943 ms 00:14:56.489 [2024-12-13 23:48:26.826417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:56.489 [2024-12-13 23:48:26.850574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:56.489 [2024-12-13 23:48:26.850603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:14:56.489 [2024-12-13 23:48:26.850618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.114 ms 00:14:56.489 [2024-12-13 23:48:26.850626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:56.489 [2024-12-13 23:48:26.854860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:56.489 [2024-12-13 23:48:26.854890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
00:14:56.489 [2024-12-13 23:48:26.854903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.189 ms 00:14:56.489 [2024-12-13 23:48:26.854910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:56.489 [2024-12-13 23:48:26.878177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:56.489 [2024-12-13 23:48:26.878213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:14:56.489 [2024-12-13 23:48:26.878225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.218 ms 00:14:56.489 [2024-12-13 23:48:26.878232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:56.489 [2024-12-13 23:48:26.878290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:56.489 [2024-12-13 23:48:26.878300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:14:56.489 [2024-12-13 23:48:26.878311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:14:56.489 [2024-12-13 23:48:26.878318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:56.489 [2024-12-13 23:48:26.878408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:56.489 [2024-12-13 23:48:26.878420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:14:56.489 [2024-12-13 23:48:26.878429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:14:56.489 [2024-12-13 23:48:26.878436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:56.489 [2024-12-13 23:48:26.879311] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3200.255 ms, result 0 00:14:56.489 { 00:14:56.489 "name": "ftl0", 00:14:56.489 "uuid": "a51b54a8-b7bb-423a-825c-ff6abeed776b" 00:14:56.489 } 00:14:56.489 23:48:26 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:14:56.489 23:48:26 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:14:56.489 23:48:26 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:56.489 23:48:26 -- common/autotest_common.sh@899 -- # local i 00:14:56.489 23:48:26 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:56.489 23:48:26 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:56.489 23:48:26 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:56.489 23:48:27 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:14:56.747 [ 00:14:56.747 { 00:14:56.747 "name": "ftl0", 00:14:56.747 "aliases": [ 00:14:56.747 "a51b54a8-b7bb-423a-825c-ff6abeed776b" 00:14:56.747 ], 00:14:56.747 "product_name": "FTL disk", 00:14:56.747 "block_size": 4096, 00:14:56.747 "num_blocks": 20971520, 00:14:56.747 "uuid": "a51b54a8-b7bb-423a-825c-ff6abeed776b", 00:14:56.747 "assigned_rate_limits": { 00:14:56.747 "rw_ios_per_sec": 0, 00:14:56.747 "rw_mbytes_per_sec": 0, 00:14:56.747 "r_mbytes_per_sec": 0, 00:14:56.747 "w_mbytes_per_sec": 0 00:14:56.747 }, 00:14:56.747 "claimed": false, 00:14:56.747 "zoned": false, 00:14:56.747 "supported_io_types": { 00:14:56.747 "read": true, 00:14:56.747 "write": true, 00:14:56.747 "unmap": true, 00:14:56.747 "write_zeroes": true, 00:14:56.747 "flush": true, 00:14:56.747 "reset": false, 00:14:56.747 "compare": false, 00:14:56.747 "compare_and_write": false, 00:14:56.747 "abort": false, 00:14:56.747 "nvme_admin": false, 00:14:56.747 "nvme_io": false 00:14:56.747 }, 
00:14:56.747 "driver_specific": { 00:14:56.747 "ftl": { 00:14:56.747 "base_bdev": "8bf24825-f6bf-4bd9-8343-2f1b3973470d", 00:14:56.747 "cache": "nvc0n1p0" 00:14:56.747 } 00:14:56.747 } 00:14:56.747 } 00:14:56.747 ] 00:14:56.747 23:48:27 -- common/autotest_common.sh@905 -- # return 0 00:14:56.747 23:48:27 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:14:56.747 23:48:27 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:14:56.747 23:48:27 -- ftl/fio.sh@70 -- # echo ']}' 00:14:56.747 23:48:27 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:14:57.006 [2024-12-13 23:48:27.636437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.006 [2024-12-13 23:48:27.636478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:14:57.006 [2024-12-13 23:48:27.636506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:57.006 [2024-12-13 23:48:27.636516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.006 [2024-12-13 23:48:27.636566] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:14:57.006 [2024-12-13 23:48:27.638987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.006 [2024-12-13 23:48:27.639013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:14:57.006 [2024-12-13 23:48:27.639028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.403 ms 00:14:57.006 [2024-12-13 23:48:27.639035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.006 [2024-12-13 23:48:27.639548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.006 [2024-12-13 23:48:27.639565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:14:57.006 [2024-12-13 23:48:27.639575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:14:57.006 [2024-12-13 23:48:27.639583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.006 [2024-12-13 23:48:27.642838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.006 [2024-12-13 23:48:27.642859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:14:57.006 [2024-12-13 23:48:27.642869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.223 ms 00:14:57.006 [2024-12-13 23:48:27.642878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.006 [2024-12-13 23:48:27.649142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.006 [2024-12-13 23:48:27.649167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:14:57.006 [2024-12-13 23:48:27.649177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.227 ms 00:14:57.006 [2024-12-13 23:48:27.649184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.006 [2024-12-13 23:48:27.673159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.007 [2024-12-13 23:48:27.673187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:14:57.007 [2024-12-13 23:48:27.673199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.875 ms 00:14:57.007 [2024-12-13 23:48:27.673206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.007 [2024-12-13 23:48:27.687998] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.007 [2024-12-13 23:48:27.688129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:14:57.007 [2024-12-13 23:48:27.688162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.745 ms 00:14:57.007 [2024-12-13 23:48:27.688170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.007 [2024-12-13 23:48:27.688369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.007 [2024-12-13 23:48:27.688382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:14:57.007 [2024-12-13 23:48:27.688392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:14:57.007 [2024-12-13 23:48:27.688400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.007 [2024-12-13 23:48:27.711342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.007 [2024-12-13 23:48:27.711453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:14:57.007 [2024-12-13 23:48:27.711477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.913 ms 00:14:57.007 [2024-12-13 23:48:27.711499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.007 [2024-12-13 23:48:27.734579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.007 [2024-12-13 23:48:27.734607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:14:57.007 [2024-12-13 23:48:27.734619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.038 ms 00:14:57.007 [2024-12-13 23:48:27.734626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.266 [2024-12-13 23:48:27.757061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.266 [2024-12-13 23:48:27.757172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:14:57.266 [2024-12-13 23:48:27.757190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.388 ms 00:14:57.266 [2024-12-13 23:48:27.757197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.266 [2024-12-13 23:48:27.779854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.266 [2024-12-13 23:48:27.779953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:14:57.266 [2024-12-13 23:48:27.780004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.562 ms 00:14:57.266 [2024-12-13 23:48:27.780027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.266 [2024-12-13 23:48:27.780097] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:14:57.266 [2024-12-13 23:48:27.780127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:14:57.266 [2024-12-13 23:48:27.780160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:14:57.266 [2024-12-13 23:48:27.780189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:14:57.266 [2024-12-13 23:48:27.780266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:14:57.266 [2024-12-13 23:48:27.780297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:14:57.266 [2024-12-13 23:48:27.780328] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:14:57.266 [2024-12-13 23:48:27.780357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:14:57.266 [2024-12-13 23:48:27.780387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:14:57.266 [2024-12-13 23:48:27.780446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:14:57.266 [2024-12-13 23:48:27.780489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:14:57.266 [2024-12-13 23:48:27.780522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:14:57.266 [2024-12-13 23:48:27.780552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:14:57.266 [2024-12-13 23:48:27.780683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:14:57.266 [2024-12-13 23:48:27.780728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:14:57.266 [2024-12-13 23:48:27.780757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:14:57.266 [2024-12-13 23:48:27.780789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:14:57.266 [2024-12-13 23:48:27.780818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:14:57.266 [2024-12-13 23:48:27.780884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:14:57.266 [2024-12-13 23:48:27.780915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.780948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.781006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.781039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.781069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.781147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.781177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.781207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.781262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.781298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.781327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.781358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 
23:48:27.781416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.781458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.781508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.781591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.781622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.781654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.781682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.781714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.781743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.781773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.781843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.781876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.781904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.781986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.782016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.782047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.782108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.782143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.782172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.782230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.782262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.782293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.782322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.782407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.782439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:14:57.267 [2024-12-13 23:48:27.782507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.782541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.782571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.782660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.782718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.782748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.782807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.782844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.782876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.782932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.782972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.783978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.784009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.784040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.784069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.784137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.784168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.784201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.784258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.784298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.784327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.784357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:14:57.267 [2024-12-13 23:48:27.784432] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:14:57.267 [2024-12-13 23:48:27.784457] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a51b54a8-b7bb-423a-825c-ff6abeed776b 00:14:57.267 [2024-12-13 23:48:27.784496] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:14:57.267 [2024-12-13 23:48:27.784518] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:14:57.267 [2024-12-13 23:48:27.784573] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:14:57.267 [2024-12-13 23:48:27.784601] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:14:57.267 [2024-12-13 23:48:27.784621] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:14:57.267 [2024-12-13 23:48:27.784642] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:14:57.267 [2024-12-13 23:48:27.784705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:14:57.267 [2024-12-13 23:48:27.784729] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:14:57.267 [2024-12-13 23:48:27.784746] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:14:57.267 [2024-12-13 23:48:27.784769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.267 [2024-12-13 23:48:27.784817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:14:57.268 [2024-12-13 23:48:27.784831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.674 ms 00:14:57.268 [2024-12-13 23:48:27.784839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.268 [2024-12-13 23:48:27.797335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.268 [2024-12-13 23:48:27.797362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:14:57.268 [2024-12-13 23:48:27.797375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.453 ms 00:14:57.268 [2024-12-13 23:48:27.797383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.268 [2024-12-13 23:48:27.797631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.268 [2024-12-13 23:48:27.797642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:14:57.268 [2024-12-13 23:48:27.797652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:14:57.268 [2024-12-13 23:48:27.797659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.268 [2024-12-13 23:48:27.841760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:57.268 [2024-12-13 23:48:27.841791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:57.268 [2024-12-13 23:48:27.841803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:57.268 [2024-12-13 23:48:27.841813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.268 [2024-12-13 23:48:27.841878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:57.268 [2024-12-13 23:48:27.841886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:57.268 [2024-12-13 23:48:27.841895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:57.268 [2024-12-13 23:48:27.841902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.268 [2024-12-13 23:48:27.841981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:57.268 [2024-12-13 23:48:27.841991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:57.268 [2024-12-13 23:48:27.842000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:57.268 [2024-12-13 23:48:27.842007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.268 [2024-12-13 23:48:27.842040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:57.268 [2024-12-13 23:48:27.842048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:14:57.268 [2024-12-13 23:48:27.842057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:57.268 [2024-12-13 23:48:27.842064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.268 [2024-12-13 23:48:27.925339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:57.268 [2024-12-13 23:48:27.925509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:57.268 [2024-12-13 23:48:27.925528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:57.268 [2024-12-13 23:48:27.925536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.268 [2024-12-13 23:48:27.954127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:57.268 [2024-12-13 23:48:27.954155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:57.268 [2024-12-13 23:48:27.954166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:57.268 [2024-12-13 23:48:27.954174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.268 [2024-12-13 23:48:27.954236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:57.268 [2024-12-13 23:48:27.954245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:57.268 [2024-12-13 23:48:27.954254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:57.268 [2024-12-13 23:48:27.954261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.268 [2024-12-13 23:48:27.954330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:57.268 [2024-12-13 23:48:27.954342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:57.268 [2024-12-13 23:48:27.954351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:57.268 [2024-12-13 23:48:27.954357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.268 [2024-12-13 23:48:27.954460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:57.268 [2024-12-13 23:48:27.954471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:57.268 [2024-12-13 23:48:27.954502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:57.268 [2024-12-13 23:48:27.954510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.268 [2024-12-13 23:48:27.954560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:57.268 [2024-12-13 23:48:27.954569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:14:57.268 [2024-12-13 23:48:27.954580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:57.268 [2024-12-13 23:48:27.954587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.268 [2024-12-13 23:48:27.954633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:57.268 [2024-12-13 23:48:27.954642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:57.268 [2024-12-13 23:48:27.954651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:57.268 [2024-12-13 23:48:27.954659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.268 [2024-12-13 23:48:27.954716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:57.268 [2024-12-13 23:48:27.954728] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:14:57.268 [2024-12-13 23:48:27.954737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:14:57.268 [2024-12-13 23:48:27.954744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:57.268 [2024-12-13 23:48:27.954903] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 318.434 ms, result 0
00:14:57.268 true
00:14:57.268 23:48:27 -- ftl/fio.sh@75 -- # killprocess 70747
00:14:57.268 23:48:27 -- common/autotest_common.sh@936 -- # '[' -z 70747 ']'
00:14:57.268 23:48:27 -- common/autotest_common.sh@940 -- # kill -0 70747
00:14:57.268 23:48:27 -- common/autotest_common.sh@941 -- # uname
00:14:57.268 23:48:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:57.268 23:48:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70747
00:14:57.527 23:48:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:57.527 23:48:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:57.527 23:48:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70747'
00:14:57.527 killing process with pid 70747
00:14:57.527 23:48:28 -- common/autotest_common.sh@955 -- # kill 70747
00:14:57.527 23:48:28 -- common/autotest_common.sh@960 -- # wait 70747
00:15:04.105 23:48:33 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:15:04.105 23:48:33 -- ftl/fio.sh@78 -- # for test in ${tests}
00:15:04.105 23:48:33 -- ftl/fio.sh@79 -- # timing_enter randw-verify
00:15:04.105 23:48:33 -- common/autotest_common.sh@722 -- # xtrace_disable
00:15:04.105 23:48:33 -- common/autotest_common.sh@10 -- # set +x
00:15:04.105 23:48:33 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
23:48:33 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
23:48:33 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
23:48:33 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
23:48:33 -- common/autotest_common.sh@1328 -- # local sanitizers
23:48:33 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
23:48:33 -- common/autotest_common.sh@1330 -- # shift
23:48:33 -- common/autotest_common.sh@1332 -- # local asan_lib=
23:48:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
23:48:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
23:48:33 -- common/autotest_common.sh@1334 -- # grep libasan
23:48:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}'
23:48:33 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8
23:48:33 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]]
23:48:33 -- common/autotest_common.sh@1336 -- # break
23:48:33 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
23:48:33 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:15:04.105 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1
00:15:04.105 fio-3.35
00:15:04.105 Starting 1 thread
00:15:07.391
00:15:07.391 test: (groupid=0, jobs=1): err= 0: pid=70983: Fri Dec 13 23:48:37 2024
00:15:07.391 read: IOPS=1429, BW=94.9MiB/s (99.5MB/s)(255MiB/2682msec)
00:15:07.391 slat (nsec): min=2984, max=25000, avg=4942.08, stdev=1661.72
00:15:07.391 clat (usec): min=250, max=674, avg=316.18, stdev=28.77
00:15:07.391 lat (usec): min=255, max=681, avg=321.12, stdev=29.74
00:15:07.391 clat percentiles (usec):
00:15:07.391 | 1.00th=[ 269], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 302],
00:15:07.391 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 318],
00:15:07.391 | 70.00th=[ 318], 80.00th=[ 322], 90.00th=[ 334], 95.00th=[ 383],
00:15:07.391 | 99.00th=[ 429], 99.50th=[ 433], 99.90th=[ 490], 99.95th=[ 660],
00:15:07.391 | 99.99th=[ 676]
00:15:07.391 write: IOPS=1439, BW=95.6MiB/s (100MB/s)(256MiB/2679msec); 0 zone resets
00:15:07.391 slat (nsec): min=13458, max=72863, avg=17681.66, stdev=2641.61
00:15:07.391 clat (usec): min=269, max=1739, avg=345.18, stdev=46.80
00:15:07.391 lat (usec): min=284, max=1770, avg=362.86, stdev=47.40
00:15:07.391 clat percentiles (usec):
00:15:07.391 | 1.00th=[ 297], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 326],
00:15:07.391 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 343],
00:15:07.391 | 70.00th=[ 347], 80.00th=[ 355], 90.00th=[ 371], 95.00th=[ 400],
00:15:07.391 | 99.00th=[ 523], 99.50th=[ 578], 99.90th=[ 865], 99.95th=[ 1074],
00:15:07.391 | 99.99th=[ 1745]
00:15:07.391 bw ( KiB/s): min=96288, max=100504, per=99.68%, avg=97566.40, stdev=1823.62, samples=5
00:15:07.391 iops : min= 1416, max= 1478, avg=1434.80, stdev=26.82, samples=5
00:15:07.391 lat (usec) : 500=99.22%, 750=0.69%, 1000=0.05%
00:15:07.391 lat (msec) : 2=0.04%
00:15:07.391 cpu : usr=99.29%, sys=0.11%, ctx=4, majf=0, minf=1318
00:15:07.391 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:15:07.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:07.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:07.391 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:07.391 latency : target=0, window=0, percentile=100.00%, depth=1
00:15:07.391
00:15:07.391 Run status group 0 (all jobs):
00:15:07.391 READ: bw=94.9MiB/s (99.5MB/s), 94.9MiB/s-94.9MiB/s (99.5MB/s-99.5MB/s), io=255MiB (267MB), run=2682-2682msec
00:15:07.391 WRITE: bw=95.6MiB/s (100MB/s), 95.6MiB/s-95.6MiB/s (100MB/s-100MB/s), io=256MiB (269MB), run=2679-2679msec
00:15:08.776 -----------------------------------------------------
00:15:08.776 Suppressions used:
00:15:08.776 count bytes template
00:15:08.776 1 5 /usr/src/fio/parse.c
00:15:08.776 1 8 libtcmalloc_minimal.so
00:15:08.776 1 904 libcrypto.so
00:15:08.776 -----------------------------------------------------
00:15:08.776
00:15:08.776 23:48:39 -- ftl/fio.sh@81 -- # timing_exit randw-verify
00:15:08.776 23:48:39 -- common/autotest_common.sh@728 -- # xtrace_disable
00:15:08.776 23:48:39 -- common/autotest_common.sh@10 -- # set +x
00:15:08.776 23:48:39 -- ftl/fio.sh@78 -- # for test in ${tests}
00:15:08.776 23:48:39 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2
00:15:08.776 23:48:39 -- common/autotest_common.sh@722 -- # xtrace_disable
00:15:08.776 23:48:39 -- common/autotest_common.sh@10 -- # set +x
00:15:08.776 23:48:39 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
23:48:39 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
23:48:39 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
23:48:39 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
23:48:39 -- common/autotest_common.sh@1328 -- # local sanitizers
23:48:39 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
23:48:39 -- common/autotest_common.sh@1330 -- # shift
23:48:39 -- common/autotest_common.sh@1332 -- # local asan_lib=
23:48:39 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
23:48:39 -- common/autotest_common.sh@1334 -- # grep libasan
23:48:39 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
23:48:39 -- common/autotest_common.sh@1334 -- # awk '{print $3}'
23:48:39 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8
23:48:39 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]]
23:48:39 -- common/autotest_common.sh@1336 -- # break
23:48:39 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
23:48:39 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:15:09.037 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:15:09.037 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:15:09.037 fio-3.35
00:15:09.037 Starting 2 threads
00:15:30.978
00:15:30.978 first_half: (groupid=0, jobs=1): err= 0: pid=71068: Fri Dec 13 23:49:01 2024
00:15:30.978 read: IOPS=3106, BW=12.1MiB/s (12.7MB/s)(256MiB/21075msec)
00:15:30.978 slat (nsec): min=2989, max=36384, avg=5131.43, stdev=726.60
00:15:30.978 clat (usec): min=508, max=241217, avg=34407.84, stdev=22296.82
00:15:30.978 lat (usec): min=512, max=241221, avg=34412.97, stdev=22296.80
00:15:30.978 clat percentiles (msec):
00:15:30.978 | 1.00th=[ 7], 5.00th=[ 26], 10.00th=[ 27], 20.00th=[ 29],
00:15:30.978 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 30],
00:15:30.978 | 70.00th=[ 31], 80.00th=[ 35], 90.00th=[ 37], 95.00th=[ 68],
00:15:30.978 | 99.00th=[ 155], 99.50th=[ 165], 99.90th=[ 188], 99.95th=[ 218],
00:15:30.978 | 99.99th=[ 239]
00:15:30.978 write: IOPS=3115, BW=12.2MiB/s (12.8MB/s)(256MiB/21038msec); 0 zone resets
00:15:30.978 slat (usec): min=3, max=1793, avg= 6.18, stdev= 7.68
00:15:30.978 clat (usec): min=343, max=39434, avg=6755.39, stdev=7198.95
00:15:30.978 lat (usec): min=350, max=39440, avg=6761.58, stdev=7199.70
00:15:30.978 clat percentiles (usec):
00:15:30.978 | 1.00th=[ 701], 5.00th=[ 824], 10.00th=[ 1106], 20.00th=[ 2737],
00:15:30.978 | 30.00th=[ 3359], 40.00th=[ 3949], 50.00th=[ 4686], 60.00th=[ 5145],
00:15:30.978 | 70.00th=[ 5669], 80.00th=[ 6915], 90.00th=[20579], 95.00th=[24773],
00:15:30.978 | 99.00th=[31327], 99.50th=[33817], 99.90th=[37487], 99.95th=[38536],
00:15:30.978 | 99.99th=[39060]
00:15:30.978 bw ( KiB/s): min= 3288, max=48200, per=99.50%, avg=24796.19, stdev=13706.01, samples=21
00:15:30.978 iops : min= 822, max=12050, avg=6199.05, stdev=3426.50, samples=21
00:15:30.978 lat (usec) : 500=0.03%, 750=1.16%, 1000=3.31%
00:15:30.978 lat (msec) : 2=2.51%, 4=13.35%, 10=23.83%, 20=2.17%, 50=50.40%
00:15:30.978 lat (msec) : 100=1.54%, 250=1.70%
00:15:30.978 cpu : usr=99.41%, sys=0.14%, ctx=183, majf=0, minf=5532
00:15:30.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:15:30.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:30.978 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:30.978 issued rwts: total=65476,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:30.978 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:30.978 second_half: (groupid=0, jobs=1): err= 0: pid=71069: Fri Dec 13 23:49:01 2024
00:15:30.978 read: IOPS=3132, BW=12.2MiB/s (12.8MB/s)(256MiB/20909msec)
00:15:30.978 slat (nsec): min=3012, max=43768, avg=5153.79, stdev=816.77
00:15:30.978 clat (msec): min=7, max=223, avg=34.63, stdev=20.11
00:15:30.978 lat (msec): min=7, max=223, avg=34.64, stdev=20.11
00:15:30.978 clat percentiles (msec):
00:15:30.978 | 1.00th=[ 25], 5.00th=[ 26], 10.00th=[ 27], 20.00th=[ 29],
00:15:30.978 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 30],
00:15:30.978 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 37], 95.00th=[ 64],
00:15:30.978 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 165], 99.95th=[ 169],
00:15:30.978 | 99.99th=[ 213]
00:15:30.978 write: IOPS=3149, BW=12.3MiB/s (12.9MB/s)(256MiB/20807msec); 0 zone resets
00:15:30.978 slat (usec): min=3, max=1661, avg= 6.40, stdev=15.95
00:15:30.978 clat (usec): min=327, max=40897, avg=6211.83, stdev=5557.48
00:15:30.978 lat (usec): min=335, max=40907, avg=6218.22, stdev=5558.91
00:15:30.978 clat percentiles (usec):
00:15:30.978 | 1.00th=[ 840], 5.00th=[ 1663], 10.00th=[ 2376], 20.00th=[ 2999],
00:15:30.978 | 30.00th=[ 3621], 40.00th=[ 4228], 50.00th=[ 4817], 60.00th=[ 5276],
00:15:30.978 | 70.00th=[ 5604], 80.00th=[ 6259], 90.00th=[10683], 95.00th=[22152],
00:15:30.978 | 99.00th=[26084], 99.50th=[27919], 99.90th=[34341], 99.95th=[35390],
00:15:30.978 | 99.99th=[36963]
00:15:30.978 bw ( KiB/s): min= 936, max=46072, per=100.00%, avg=28921.33, stdev=15492.51, samples=18
00:15:30.978 iops : min= 234, max=11518, avg=7230.33, stdev=3873.13, samples=18
00:15:30.978 lat (usec) : 500=0.05%, 750=0.25%, 1000=0.63%
00:15:30.978 lat (msec) : 2=2.23%, 4=14.47%, 10=26.76%, 20=2.36%, 50=50.31%
00:15:30.978 lat (msec) : 100=1.30%, 250=1.66%
00:15:30.978 cpu : usr=99.32%, sys=0.20%, ctx=57, majf=0, minf=5587
00:15:30.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:15:30.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:30.978 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:30.978 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:30.978 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:30.978
00:15:30.978 Run status group 0 (all jobs):
00:15:30.978 READ: bw=24.3MiB/s (25.5MB/s), 12.1MiB/s-12.2MiB/s (12.7MB/s-12.8MB/s), io=512MiB (536MB), run=20909-21075msec
00:15:30.978 WRITE: bw=24.3MiB/s (25.5MB/s), 12.2MiB/s-12.3MiB/s (12.8MB/s-12.9MB/s), io=512MiB (537MB), run=20807-21038msec
00:15:33.523 -----------------------------------------------------
00:15:33.523 Suppressions used:
00:15:33.523 count bytes template
00:15:33.523 2 10 /usr/src/fio/parse.c
00:15:33.523 3 288 /usr/src/fio/iolog.c
00:15:33.523 1 8 libtcmalloc_minimal.so
00:15:33.523 1 904 libcrypto.so
00:15:33.523 -----------------------------------------------------
00:15:33.523
00:15:33.523 23:49:03 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2
00:15:33.523 23:49:03 -- common/autotest_common.sh@728 -- # xtrace_disable
00:15:33.523 23:49:03 -- common/autotest_common.sh@10 -- # set +x
00:15:33.523 23:49:03 -- ftl/fio.sh@78 -- # for test in ${tests}
00:15:33.523 23:49:03 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128
00:15:33.523 23:49:03 -- common/autotest_common.sh@722 -- # xtrace_disable
00:15:33.523 23:49:03 -- common/autotest_common.sh@10 -- # set +x
00:15:33.523 23:49:03 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
23:49:03 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
23:49:03 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
23:49:03 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
23:49:03 -- common/autotest_common.sh@1328 -- # local sanitizers
23:49:03 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
23:49:03 -- common/autotest_common.sh@1330 -- # shift
23:49:03 -- common/autotest_common.sh@1332 -- # local asan_lib=
23:49:03 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
23:49:03 -- common/autotest_common.sh@1334 -- # grep libasan
23:49:03 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
23:49:04 -- common/autotest_common.sh@1334 -- # awk '{print $3}'
23:49:04 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8
23:49:04 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]]
23:49:04 -- common/autotest_common.sh@1336 -- # break
23:49:04 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
23:49:04 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:15:33.523 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:15:33.523 fio-3.35
00:15:33.523 Starting 1 thread
00:15:48.523
00:15:48.523 test: (groupid=0, jobs=1): err= 0: pid=71361: Fri Dec 13 23:49:18 2024
00:15:48.523 read: IOPS=8110, BW=31.7MiB/s (33.2MB/s)(255MiB/8039msec)
00:15:48.523 slat (nsec): min=2943, max=45888, avg=4922.29, stdev=1177.09
00:15:48.523 clat (usec): min=499, max=38098, avg=15771.55, stdev=2496.63
00:15:48.523 lat (usec): min=503, max=38102, avg=15776.47, stdev=2496.79
00:15:48.523 clat percentiles (usec):
00:15:48.523 | 1.00th=[13173], 5.00th=[13435], 10.00th=[13698], 20.00th=[14615],
00:15:48.523 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15139], 60.00th=[15270],
00:15:48.523 | 70.00th=[15533], 80.00th=[16057], 90.00th=[19006], 95.00th=[21890],
00:15:48.523 | 99.00th=[24773], 99.50th=[26346], 99.90th=[31065], 99.95th=[34341],
00:15:48.523 | 99.99th=[37487]
00:15:48.523 write: IOPS=12.3k, BW=48.1MiB/s (50.5MB/s)(256MiB/5320msec); 0 zone resets
00:15:48.523 slat (usec): min=4, max=679, avg= 7.42, stdev= 5.02
00:15:48.523 clat (usec): min=451, max=42903, avg=10350.52, stdev=10155.39
00:15:48.523 lat (usec): min=459, max=42908, avg=10357.94, stdev=10156.04
00:15:48.523 clat percentiles (usec):
00:15:48.523 | 1.00th=[ 594], 5.00th=[ 717], 10.00th=[ 791], 20.00th=[ 898],
00:15:48.523 | 30.00th=[ 1045], 40.00th=[ 1401], 50.00th=[ 5211], 60.00th=[13435],
00:15:48.523 | 70.00th=[17171], 80.00th=[20317], 90.00th=[25297], 95.00th=[27132],
00:15:48.523 | 99.00th=[34866], 99.50th=[35390], 99.90th=[38536], 99.95th=[41157],
00:15:48.523 | 99.99th=[42730]
00:15:48.523 bw ( KiB/s): min=25756, max=82896, per=96.72%, avg=47657.82, stdev=20248.41, samples=11
00:15:48.523 iops : min= 6439, max=20724, avg=11914.45, stdev=5062.10, samples=11
00:15:48.523 lat (usec) : 500=0.03%, 750=3.48%, 1000=10.34%
00:15:48.523 lat (msec) : 2=6.91%, 4=1.06%, 10=5.31%, 20=58.36%, 50=14.51%
00:15:48.523 cpu : usr=99.17%, sys=0.25%, ctx=39, majf=0, minf=5567
00:15:48.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:15:48.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:48.523 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:48.523 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:48.523 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:48.523
00:15:48.524 Run status group 0 (all jobs):
00:15:48.524 READ: bw=31.7MiB/s (33.2MB/s), 31.7MiB/s-31.7MiB/s (33.2MB/s-33.2MB/s), io=255MiB (267MB), run=8039-8039msec
00:15:48.524 WRITE: bw=48.1MiB/s (50.5MB/s), 48.1MiB/s-48.1MiB/s (50.5MB/s-50.5MB/s), io=256MiB (268MB), run=5320-5320msec
00:15:49.910 -----------------------------------------------------
00:15:49.910 Suppressions used:
00:15:49.910 count bytes template
00:15:49.910 1 5 /usr/src/fio/parse.c
00:15:49.910 2 192 /usr/src/fio/iolog.c
00:15:49.910 1 8 libtcmalloc_minimal.so
00:15:49.910 1 904 libcrypto.so
00:15:49.910 -----------------------------------------------------
00:15:49.910
00:15:49.910 23:49:20 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128
23:49:20 -- common/autotest_common.sh@728 -- # xtrace_disable
23:49:20 -- common/autotest_common.sh@10 -- # set +x
00:15:50.172 23:49:20 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:15:50.172 Remove shared memory files
23:49:20 -- ftl/fio.sh@85 -- # remove_shm
23:49:20 -- ftl/common.sh@204 -- # echo Remove shared memory files
23:49:20 -- ftl/common.sh@205 -- # rm -f rm -f
23:49:20 -- ftl/common.sh@206 -- # rm -f rm -f
23:49:20 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid56188 /dev/shm/spdk_tgt_trace.pid69644
23:49:20 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
23:49:20 -- ftl/common.sh@209 -- # rm -f rm -f
00:15:50.172 ************************************
00:15:50.172 END TEST ftl_fio_basic
00:15:50.172 ************************************
00:15:50.172
00:15:50.172 real 1m1.193s
00:15:50.172 user 2m9.646s
00:15:50.172 sys 0m2.981s
23:49:20 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0
23:49:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
23:49:20 -- common/autotest_common.sh@1093 -- #
xtrace_disable 00:15:50.172 23:49:20 -- common/autotest_common.sh@10 -- # set +x 00:15:50.172 ************************************ 00:15:50.172 START TEST ftl_bdevperf 00:15:50.172 ************************************ 00:15:50.173 23:49:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:15:50.173 * Looking for test storage... 00:15:50.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:50.173 23:49:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:50.173 23:49:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:50.173 23:49:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:50.173 23:49:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:50.173 23:49:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:50.173 23:49:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:50.173 23:49:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:50.173 23:49:20 -- scripts/common.sh@335 -- # IFS=.-: 00:15:50.173 23:49:20 -- scripts/common.sh@335 -- # read -ra ver1 00:15:50.173 23:49:20 -- scripts/common.sh@336 -- # IFS=.-: 00:15:50.173 23:49:20 -- scripts/common.sh@336 -- # read -ra ver2 00:15:50.173 23:49:20 -- scripts/common.sh@337 -- # local 'op=<' 00:15:50.173 23:49:20 -- scripts/common.sh@339 -- # ver1_l=2 00:15:50.173 23:49:20 -- scripts/common.sh@340 -- # ver2_l=1 00:15:50.173 23:49:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:50.173 23:49:20 -- scripts/common.sh@343 -- # case "$op" in 00:15:50.173 23:49:20 -- scripts/common.sh@344 -- # : 1 00:15:50.173 23:49:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:50.173 23:49:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:50.173 23:49:20 -- scripts/common.sh@364 -- # decimal 1 00:15:50.173 23:49:20 -- scripts/common.sh@352 -- # local d=1 00:15:50.173 23:49:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:50.173 23:49:20 -- scripts/common.sh@354 -- # echo 1 00:15:50.173 23:49:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:50.173 23:49:20 -- scripts/common.sh@365 -- # decimal 2 00:15:50.173 23:49:20 -- scripts/common.sh@352 -- # local d=2 00:15:50.173 23:49:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:50.173 23:49:20 -- scripts/common.sh@354 -- # echo 2 00:15:50.173 23:49:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:50.173 23:49:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:50.173 23:49:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:50.173 23:49:20 -- scripts/common.sh@367 -- # return 0 00:15:50.173 23:49:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:50.173 23:49:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:50.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.173 --rc genhtml_branch_coverage=1 00:15:50.173 --rc genhtml_function_coverage=1 00:15:50.173 --rc genhtml_legend=1 00:15:50.173 --rc geninfo_all_blocks=1 00:15:50.173 --rc geninfo_unexecuted_blocks=1 00:15:50.173 00:15:50.173 ' 00:15:50.173 23:49:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:50.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.173 --rc genhtml_branch_coverage=1 00:15:50.173 --rc genhtml_function_coverage=1 00:15:50.173 --rc genhtml_legend=1 00:15:50.173 --rc geninfo_all_blocks=1 00:15:50.173 --rc geninfo_unexecuted_blocks=1 00:15:50.173 00:15:50.173 ' 00:15:50.173 
23:49:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:50.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.173 --rc genhtml_branch_coverage=1 00:15:50.173 --rc genhtml_function_coverage=1 00:15:50.173 --rc genhtml_legend=1 00:15:50.173 --rc geninfo_all_blocks=1 00:15:50.173 --rc geninfo_unexecuted_blocks=1 00:15:50.173 00:15:50.173 ' 00:15:50.173 23:49:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:50.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.173 --rc genhtml_branch_coverage=1 00:15:50.173 --rc genhtml_function_coverage=1 00:15:50.173 --rc genhtml_legend=1 00:15:50.173 --rc geninfo_all_blocks=1 00:15:50.173 --rc geninfo_unexecuted_blocks=1 00:15:50.173 00:15:50.173 ' 00:15:50.173 23:49:20 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:50.434 23:49:20 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:15:50.434 23:49:20 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:50.434 23:49:20 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:50.434 23:49:20 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:50.434 23:49:20 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:50.434 23:49:20 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:50.434 23:49:20 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:50.434 23:49:20 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:50.434 23:49:20 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:50.434 23:49:20 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:50.434 23:49:20 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:50.434 23:49:20 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:50.434 23:49:20 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:50.434 23:49:20 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:50.434 23:49:20 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:50.434 23:49:20 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:50.434 23:49:20 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:50.434 23:49:20 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:50.434 23:49:20 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:50.434 23:49:20 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:50.434 23:49:20 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:50.434 23:49:20 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:50.434 23:49:20 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:50.434 23:49:20 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:50.434 23:49:20 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:50.434 23:49:20 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:50.434 23:49:20 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:50.434 23:49:20 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:50.434 23:49:20 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:15:50.434 23:49:20 -- 
ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0 00:15:50.434 23:49:20 -- ftl/bdevperf.sh@13 -- # use_append= 00:15:50.434 23:49:20 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:50.434 23:49:20 -- ftl/bdevperf.sh@15 -- # timeout=240 00:15:50.434 23:49:20 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:15:50.434 23:49:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:50.434 23:49:20 -- common/autotest_common.sh@10 -- # set +x 00:15:50.434 23:49:20 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=71600 00:15:50.434 23:49:20 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:15:50.434 23:49:20 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:15:50.434 23:49:20 -- ftl/bdevperf.sh@22 -- # waitforlisten 71600 00:15:50.434 23:49:20 -- common/autotest_common.sh@829 -- # '[' -z 71600 ']' 00:15:50.434 23:49:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.434 23:49:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.434 23:49:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.434 23:49:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.434 23:49:20 -- common/autotest_common.sh@10 -- # set +x 00:15:50.434 [2024-12-13 23:49:20.998205] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:50.434 [2024-12-13 23:49:20.998571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71600 ] 00:15:50.434 [2024-12-13 23:49:21.153389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.696 [2024-12-13 23:49:21.379389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.268 23:49:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.268 23:49:21 -- common/autotest_common.sh@862 -- # return 0 00:15:51.268 23:49:21 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:51.268 23:49:21 -- ftl/common.sh@54 -- # local name=nvme0 00:15:51.268 23:49:21 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:51.268 23:49:21 -- ftl/common.sh@56 -- # local size=103424 00:15:51.268 23:49:21 -- ftl/common.sh@59 -- # local base_bdev 00:15:51.268 23:49:21 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:51.527 23:49:22 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:51.527 23:49:22 -- ftl/common.sh@62 -- # local base_size 00:15:51.527 23:49:22 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:51.527 23:49:22 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:15:51.527 23:49:22 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:51.527 23:49:22 -- common/autotest_common.sh@1369 -- # local bs 00:15:51.527 23:49:22 -- common/autotest_common.sh@1370 -- # local nb 00:15:51.527 23:49:22 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:51.788 23:49:22 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:51.788 { 00:15:51.788 
"name": "nvme0n1", 00:15:51.788 "aliases": [ 00:15:51.788 "a93a97f8-33c0-49d9-b52f-aff94d29f1b0" 00:15:51.788 ], 00:15:51.788 "product_name": "NVMe disk", 00:15:51.788 "block_size": 4096, 00:15:51.788 "num_blocks": 1310720, 00:15:51.788 "uuid": "a93a97f8-33c0-49d9-b52f-aff94d29f1b0", 00:15:51.788 "assigned_rate_limits": { 00:15:51.788 "rw_ios_per_sec": 0, 00:15:51.788 "rw_mbytes_per_sec": 0, 00:15:51.788 "r_mbytes_per_sec": 0, 00:15:51.788 "w_mbytes_per_sec": 0 00:15:51.788 }, 00:15:51.788 "claimed": true, 00:15:51.788 "claim_type": "read_many_write_one", 00:15:51.788 "zoned": false, 00:15:51.788 "supported_io_types": { 00:15:51.788 "read": true, 00:15:51.788 "write": true, 00:15:51.788 "unmap": true, 00:15:51.788 "write_zeroes": true, 00:15:51.788 "flush": true, 00:15:51.788 "reset": true, 00:15:51.788 "compare": true, 00:15:51.788 "compare_and_write": false, 00:15:51.788 "abort": true, 00:15:51.788 "nvme_admin": true, 00:15:51.788 "nvme_io": true 00:15:51.788 }, 00:15:51.788 "driver_specific": { 00:15:51.788 "nvme": [ 00:15:51.788 { 00:15:51.788 "pci_address": "0000:00:07.0", 00:15:51.788 "trid": { 00:15:51.788 "trtype": "PCIe", 00:15:51.788 "traddr": "0000:00:07.0" 00:15:51.788 }, 00:15:51.788 "ctrlr_data": { 00:15:51.788 "cntlid": 0, 00:15:51.788 "vendor_id": "0x1b36", 00:15:51.788 "model_number": "QEMU NVMe Ctrl", 00:15:51.788 "serial_number": "12341", 00:15:51.788 "firmware_revision": "8.0.0", 00:15:51.788 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:51.788 "oacs": { 00:15:51.788 "security": 0, 00:15:51.788 "format": 1, 00:15:51.788 "firmware": 0, 00:15:51.788 "ns_manage": 1 00:15:51.788 }, 00:15:51.788 "multi_ctrlr": false, 00:15:51.788 "ana_reporting": false 00:15:51.788 }, 00:15:51.788 "vs": { 00:15:51.788 "nvme_version": "1.4" 00:15:51.788 }, 00:15:51.788 "ns_data": { 00:15:51.788 "id": 1, 00:15:51.788 "can_share": false 00:15:51.788 } 00:15:51.788 } 00:15:51.788 ], 00:15:51.788 "mp_policy": "active_passive" 00:15:51.788 } 00:15:51.788 } 00:15:51.788 ]' 00:15:51.788 23:49:22 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:51.788 23:49:22 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:51.788 23:49:22 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:51.788 23:49:22 -- common/autotest_common.sh@1373 -- # nb=1310720 00:15:51.788 23:49:22 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:15:51.788 23:49:22 -- common/autotest_common.sh@1377 -- # echo 5120 00:15:51.788 23:49:22 -- ftl/common.sh@63 -- # base_size=5120 00:15:51.788 23:49:22 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:51.788 23:49:22 -- ftl/common.sh@67 -- # clear_lvols 00:15:51.788 23:49:22 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:51.788 23:49:22 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:51.788 23:49:22 -- ftl/common.sh@28 -- # stores=04857767-d753-439c-9ab1-e3bf8bf25b28 00:15:51.788 23:49:22 -- ftl/common.sh@29 -- # for lvs in $stores 00:15:51.788 23:49:22 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 04857767-d753-439c-9ab1-e3bf8bf25b28 00:15:52.049 23:49:22 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:52.310 23:49:22 -- ftl/common.sh@68 -- # lvs=7a63e27b-c04b-44aa-ac1b-868b5a3a9fed 00:15:52.310 23:49:22 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7a63e27b-c04b-44aa-ac1b-868b5a3a9fed 00:15:52.570 23:49:23 -- 
ftl/bdevperf.sh@23 -- # split_bdev=61fec6ff-6b2f-4bc1-bdae-e0f8fafc0bcb 00:15:52.570 23:49:23 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 61fec6ff-6b2f-4bc1-bdae-e0f8fafc0bcb 00:15:52.570 23:49:23 -- ftl/common.sh@35 -- # local name=nvc0 00:15:52.570 23:49:23 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:15:52.570 23:49:23 -- ftl/common.sh@37 -- # local base_bdev=61fec6ff-6b2f-4bc1-bdae-e0f8fafc0bcb 00:15:52.570 23:49:23 -- ftl/common.sh@38 -- # local cache_size= 00:15:52.570 23:49:23 -- ftl/common.sh@41 -- # get_bdev_size 61fec6ff-6b2f-4bc1-bdae-e0f8fafc0bcb 00:15:52.570 23:49:23 -- common/autotest_common.sh@1367 -- # local bdev_name=61fec6ff-6b2f-4bc1-bdae-e0f8fafc0bcb 00:15:52.570 23:49:23 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:52.570 23:49:23 -- common/autotest_common.sh@1369 -- # local bs 00:15:52.570 23:49:23 -- common/autotest_common.sh@1370 -- # local nb 00:15:52.570 23:49:23 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 61fec6ff-6b2f-4bc1-bdae-e0f8fafc0bcb 00:15:52.570 23:49:23 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:52.570 { 00:15:52.570 "name": "61fec6ff-6b2f-4bc1-bdae-e0f8fafc0bcb", 00:15:52.570 "aliases": [ 00:15:52.570 "lvs/nvme0n1p0" 00:15:52.570 ], 00:15:52.570 "product_name": "Logical Volume", 00:15:52.570 "block_size": 4096, 00:15:52.570 "num_blocks": 26476544, 00:15:52.570 "uuid": "61fec6ff-6b2f-4bc1-bdae-e0f8fafc0bcb", 00:15:52.570 "assigned_rate_limits": { 00:15:52.570 "rw_ios_per_sec": 0, 00:15:52.570 "rw_mbytes_per_sec": 0, 00:15:52.570 "r_mbytes_per_sec": 0, 00:15:52.570 "w_mbytes_per_sec": 0 00:15:52.570 }, 00:15:52.570 "claimed": false, 00:15:52.570 "zoned": false, 00:15:52.570 "supported_io_types": { 00:15:52.570 "read": true, 00:15:52.570 "write": true, 00:15:52.570 "unmap": true, 00:15:52.570 "write_zeroes": true, 00:15:52.570 "flush": false, 00:15:52.570 "reset": true, 00:15:52.570 "compare": false, 00:15:52.570 "compare_and_write": false, 00:15:52.570 "abort": false, 00:15:52.570 "nvme_admin": false, 00:15:52.570 "nvme_io": false 00:15:52.570 }, 00:15:52.570 "driver_specific": { 00:15:52.570 "lvol": { 00:15:52.570 "lvol_store_uuid": "7a63e27b-c04b-44aa-ac1b-868b5a3a9fed", 00:15:52.570 "base_bdev": "nvme0n1", 00:15:52.570 "thin_provision": true, 00:15:52.570 "snapshot": false, 00:15:52.570 "clone": false, 00:15:52.570 "esnap_clone": false 00:15:52.570 } 00:15:52.570 } 00:15:52.570 } 00:15:52.570 ]' 00:15:52.570 23:49:23 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:52.570 23:49:23 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:52.570 23:49:23 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:52.570 23:49:23 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:52.570 23:49:23 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:52.570 23:49:23 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:52.570 23:49:23 -- ftl/common.sh@41 -- # local base_size=5171 00:15:52.570 23:49:23 -- ftl/common.sh@44 -- # local nvc_bdev 00:15:52.570 23:49:23 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:15:52.831 23:49:23 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:52.831 23:49:23 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:52.831 23:49:23 -- ftl/common.sh@48 -- # get_bdev_size 61fec6ff-6b2f-4bc1-bdae-e0f8fafc0bcb 00:15:52.831 23:49:23 -- common/autotest_common.sh@1367 -- # local 
bdev_name=61fec6ff-6b2f-4bc1-bdae-e0f8fafc0bcb 00:15:52.831 23:49:23 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:52.831 23:49:23 -- common/autotest_common.sh@1369 -- # local bs 00:15:52.831 23:49:23 -- common/autotest_common.sh@1370 -- # local nb 00:15:52.831 23:49:23 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 61fec6ff-6b2f-4bc1-bdae-e0f8fafc0bcb 00:15:53.092 23:49:23 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:53.092 { 00:15:53.092 "name": "61fec6ff-6b2f-4bc1-bdae-e0f8fafc0bcb", 00:15:53.092 "aliases": [ 00:15:53.092 "lvs/nvme0n1p0" 00:15:53.092 ], 00:15:53.092 "product_name": "Logical Volume", 00:15:53.092 "block_size": 4096, 00:15:53.092 "num_blocks": 26476544, 00:15:53.092 "uuid": "61fec6ff-6b2f-4bc1-bdae-e0f8fafc0bcb", 00:15:53.092 "assigned_rate_limits": { 00:15:53.092 "rw_ios_per_sec": 0, 00:15:53.092 "rw_mbytes_per_sec": 0, 00:15:53.092 "r_mbytes_per_sec": 0, 00:15:53.092 "w_mbytes_per_sec": 0 00:15:53.092 }, 00:15:53.092 "claimed": false, 00:15:53.092 "zoned": false, 00:15:53.092 "supported_io_types": { 00:15:53.092 "read": true, 00:15:53.092 "write": true, 00:15:53.092 "unmap": true, 00:15:53.092 "write_zeroes": true, 00:15:53.092 "flush": false, 00:15:53.092 "reset": true, 00:15:53.092 "compare": false, 00:15:53.092 "compare_and_write": false, 00:15:53.092 "abort": false, 00:15:53.092 "nvme_admin": false, 00:15:53.092 "nvme_io": false 00:15:53.092 }, 00:15:53.092 "driver_specific": { 00:15:53.092 "lvol": { 00:15:53.092 "lvol_store_uuid": "7a63e27b-c04b-44aa-ac1b-868b5a3a9fed", 00:15:53.092 "base_bdev": "nvme0n1", 00:15:53.092 "thin_provision": true, 00:15:53.092 "snapshot": false, 00:15:53.092 "clone": false, 00:15:53.092 "esnap_clone": false 00:15:53.092 } 00:15:53.092 } 00:15:53.092 } 00:15:53.092 ]' 00:15:53.092 23:49:23 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:53.092 23:49:23 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:53.092 23:49:23 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:53.092 23:49:23 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:53.092 23:49:23 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:53.092 23:49:23 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:53.092 23:49:23 -- ftl/common.sh@48 -- # cache_size=5171 00:15:53.092 23:49:23 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:53.352 23:49:23 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:15:53.352 23:49:23 -- ftl/bdevperf.sh@26 -- # get_bdev_size 61fec6ff-6b2f-4bc1-bdae-e0f8fafc0bcb 00:15:53.352 23:49:23 -- common/autotest_common.sh@1367 -- # local bdev_name=61fec6ff-6b2f-4bc1-bdae-e0f8fafc0bcb 00:15:53.352 23:49:23 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:53.352 23:49:23 -- common/autotest_common.sh@1369 -- # local bs 00:15:53.352 23:49:23 -- common/autotest_common.sh@1370 -- # local nb 00:15:53.352 23:49:23 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 61fec6ff-6b2f-4bc1-bdae-e0f8fafc0bcb 00:15:53.613 23:49:24 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:53.614 { 00:15:53.614 "name": "61fec6ff-6b2f-4bc1-bdae-e0f8fafc0bcb", 00:15:53.614 "aliases": [ 00:15:53.614 "lvs/nvme0n1p0" 00:15:53.614 ], 00:15:53.614 "product_name": "Logical Volume", 00:15:53.614 "block_size": 4096, 00:15:53.614 "num_blocks": 26476544, 00:15:53.614 "uuid": "61fec6ff-6b2f-4bc1-bdae-e0f8fafc0bcb", 00:15:53.614 
"assigned_rate_limits": { 00:15:53.614 "rw_ios_per_sec": 0, 00:15:53.614 "rw_mbytes_per_sec": 0, 00:15:53.614 "r_mbytes_per_sec": 0, 00:15:53.614 "w_mbytes_per_sec": 0 00:15:53.614 }, 00:15:53.614 "claimed": false, 00:15:53.614 "zoned": false, 00:15:53.614 "supported_io_types": { 00:15:53.614 "read": true, 00:15:53.614 "write": true, 00:15:53.614 "unmap": true, 00:15:53.614 "write_zeroes": true, 00:15:53.614 "flush": false, 00:15:53.614 "reset": true, 00:15:53.614 "compare": false, 00:15:53.614 "compare_and_write": false, 00:15:53.614 "abort": false, 00:15:53.614 "nvme_admin": false, 00:15:53.614 "nvme_io": false 00:15:53.614 }, 00:15:53.614 "driver_specific": { 00:15:53.614 "lvol": { 00:15:53.614 "lvol_store_uuid": "7a63e27b-c04b-44aa-ac1b-868b5a3a9fed", 00:15:53.614 "base_bdev": "nvme0n1", 00:15:53.614 "thin_provision": true, 00:15:53.614 "snapshot": false, 00:15:53.614 "clone": false, 00:15:53.614 "esnap_clone": false 00:15:53.614 } 00:15:53.614 } 00:15:53.614 } 00:15:53.614 ]' 00:15:53.614 23:49:24 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:53.614 23:49:24 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:53.614 23:49:24 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:53.614 23:49:24 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:53.614 23:49:24 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:53.614 23:49:24 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:53.614 23:49:24 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:15:53.614 23:49:24 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 61fec6ff-6b2f-4bc1-bdae-e0f8fafc0bcb -c nvc0n1p0 --l2p_dram_limit 20 00:15:53.876 [2024-12-13 23:49:24.362234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.876 [2024-12-13 23:49:24.362275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:53.876 [2024-12-13 23:49:24.362288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:53.876 [2024-12-13 23:49:24.362295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.876 [2024-12-13 23:49:24.362337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.876 [2024-12-13 23:49:24.362345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:53.876 [2024-12-13 23:49:24.362353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:15:53.876 [2024-12-13 23:49:24.362359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.876 [2024-12-13 23:49:24.362374] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:53.876 [2024-12-13 23:49:24.363081] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:53.876 [2024-12-13 23:49:24.363169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.876 [2024-12-13 23:49:24.363206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:53.876 [2024-12-13 23:49:24.363226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:15:53.876 [2024-12-13 23:49:24.363242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.876 [2024-12-13 23:49:24.363604] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fa1b444c-5d17-48f1-ac05-5550c47c2071 00:15:53.876 [2024-12-13 
23:49:24.365193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.876 [2024-12-13 23:49:24.365272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:53.876 [2024-12-13 23:49:24.365312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:15:53.877 [2024-12-13 23:49:24.365332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.877 [2024-12-13 23:49:24.372158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.877 [2024-12-13 23:49:24.372243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:53.877 [2024-12-13 23:49:24.372282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.786 ms 00:15:53.877 [2024-12-13 23:49:24.372300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.877 [2024-12-13 23:49:24.372378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.877 [2024-12-13 23:49:24.372398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:53.877 [2024-12-13 23:49:24.372414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:15:53.877 [2024-12-13 23:49:24.372433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.877 [2024-12-13 23:49:24.372494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.877 [2024-12-13 23:49:24.372564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:53.877 [2024-12-13 23:49:24.372586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:15:53.877 [2024-12-13 23:49:24.372602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.877 [2024-12-13 23:49:24.372631] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:53.877 [2024-12-13 23:49:24.375968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.877 [2024-12-13 23:49:24.376048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:53.877 [2024-12-13 23:49:24.376088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.342 ms 00:15:53.877 [2024-12-13 23:49:24.376106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.877 [2024-12-13 23:49:24.376145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.877 [2024-12-13 23:49:24.376162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:53.877 [2024-12-13 23:49:24.376170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:53.877 [2024-12-13 23:49:24.376176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.877 [2024-12-13 23:49:24.376189] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:53.877 [2024-12-13 23:49:24.376285] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:53.877 [2024-12-13 23:49:24.376299] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:53.877 [2024-12-13 23:49:24.376307] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:53.877 [2024-12-13 23:49:24.376317] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB 00:15:53.877 [2024-12-13 23:49:24.376324] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:53.877 [2024-12-13 23:49:24.376332] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:53.877 [2024-12-13 23:49:24.376338] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:53.877 [2024-12-13 23:49:24.376347] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:53.877 [2024-12-13 23:49:24.376353] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:53.877 [2024-12-13 23:49:24.376360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.877 [2024-12-13 23:49:24.376366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:53.877 [2024-12-13 23:49:24.376374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:15:53.877 [2024-12-13 23:49:24.376380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.877 [2024-12-13 23:49:24.376428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.877 [2024-12-13 23:49:24.376434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:53.877 [2024-12-13 23:49:24.376441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:53.877 [2024-12-13 23:49:24.376447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.877 [2024-12-13 23:49:24.376527] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:53.877 [2024-12-13 23:49:24.376536] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:53.877 [2024-12-13 23:49:24.376545] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:53.877 [2024-12-13 23:49:24.376557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:53.877 [2024-12-13 23:49:24.376566] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:53.877 [2024-12-13 23:49:24.376571] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:53.877 [2024-12-13 23:49:24.376577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:53.877 [2024-12-13 23:49:24.376582] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:53.877 [2024-12-13 23:49:24.376589] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:53.877 [2024-12-13 23:49:24.376594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:53.877 [2024-12-13 23:49:24.376603] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:53.877 [2024-12-13 23:49:24.376608] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:53.877 [2024-12-13 23:49:24.376615] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:53.877 [2024-12-13 23:49:24.376620] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:53.877 [2024-12-13 23:49:24.376626] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:15:53.877 [2024-12-13 23:49:24.376632] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:53.877 [2024-12-13 23:49:24.376640] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:53.877 [2024-12-13 23:49:24.376650] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 
00:15:53.877 [2024-12-13 23:49:24.376658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:53.877 [2024-12-13 23:49:24.376663] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:53.877 [2024-12-13 23:49:24.376669] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:15:53.877 [2024-12-13 23:49:24.376675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:53.877 [2024-12-13 23:49:24.376681] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:53.877 [2024-12-13 23:49:24.376686] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:53.877 [2024-12-13 23:49:24.376692] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:53.877 [2024-12-13 23:49:24.376698] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:53.877 [2024-12-13 23:49:24.376704] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:15:53.877 [2024-12-13 23:49:24.376709] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:53.877 [2024-12-13 23:49:24.376715] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:53.877 [2024-12-13 23:49:24.376720] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:53.877 [2024-12-13 23:49:24.376726] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:53.877 [2024-12-13 23:49:24.376731] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:53.877 [2024-12-13 23:49:24.376739] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:15:53.877 [2024-12-13 23:49:24.376744] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:53.877 [2024-12-13 23:49:24.376750] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:53.877 [2024-12-13 23:49:24.376755] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:53.877 [2024-12-13 23:49:24.376762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:53.877 [2024-12-13 23:49:24.376766] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:53.877 [2024-12-13 23:49:24.376772] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:15:53.877 [2024-12-13 23:49:24.376778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:53.877 [2024-12-13 23:49:24.376784] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:53.877 [2024-12-13 23:49:24.376790] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:53.877 [2024-12-13 23:49:24.376797] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:53.877 [2024-12-13 23:49:24.376802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:53.877 [2024-12-13 23:49:24.376868] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:53.877 [2024-12-13 23:49:24.376874] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:53.877 [2024-12-13 23:49:24.376880] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:53.877 [2024-12-13 23:49:24.376885] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:53.877 [2024-12-13 23:49:24.376893] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:53.877 [2024-12-13 23:49:24.376900] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 102400.00 MiB 00:15:53.877 [2024-12-13 23:49:24.376910] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:53.877 [2024-12-13 23:49:24.376918] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:53.877 [2024-12-13 23:49:24.376928] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:53.877 [2024-12-13 23:49:24.376933] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:15:53.877 [2024-12-13 23:49:24.376940] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:15:53.877 [2024-12-13 23:49:24.376946] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:15:53.877 [2024-12-13 23:49:24.376953] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:15:53.877 [2024-12-13 23:49:24.376958] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:15:53.877 [2024-12-13 23:49:24.376965] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:15:53.877 [2024-12-13 23:49:24.376970] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:15:53.877 [2024-12-13 23:49:24.376977] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:15:53.877 [2024-12-13 23:49:24.376982] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:15:53.877 [2024-12-13 23:49:24.376989] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:15:53.878 [2024-12-13 23:49:24.376996] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:15:53.878 [2024-12-13 23:49:24.377004] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:15:53.878 [2024-12-13 23:49:24.377009] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:53.878 [2024-12-13 23:49:24.377016] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:53.878 [2024-12-13 23:49:24.377022] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:53.878 [2024-12-13 23:49:24.377028] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:53.878 [2024-12-13 23:49:24.377034] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:53.878 [2024-12-13 23:49:24.377041] upgrade/ftl_sb_v5.c: 
429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:53.878 [2024-12-13 23:49:24.377046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.878 [2024-12-13 23:49:24.377053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:53.878 [2024-12-13 23:49:24.377059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:15:53.878 [2024-12-13 23:49:24.377066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.878 [2024-12-13 23:49:24.391043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.878 [2024-12-13 23:49:24.391136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:53.878 [2024-12-13 23:49:24.391179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.951 ms 00:15:53.878 [2024-12-13 23:49:24.391200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.878 [2024-12-13 23:49:24.391279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.878 [2024-12-13 23:49:24.391324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:53.878 [2024-12-13 23:49:24.391346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:15:53.878 [2024-12-13 23:49:24.391366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.878 [2024-12-13 23:49:24.436311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.878 [2024-12-13 23:49:24.436419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:53.878 [2024-12-13 23:49:24.436465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.881 ms 00:15:53.878 [2024-12-13 23:49:24.436501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.878 [2024-12-13 23:49:24.436553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.878 [2024-12-13 23:49:24.436579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:53.878 [2024-12-13 23:49:24.436596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:53.878 [2024-12-13 23:49:24.436612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.878 [2024-12-13 23:49:24.437024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.878 [2024-12-13 23:49:24.437099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:53.878 [2024-12-13 23:49:24.437137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:15:53.878 [2024-12-13 23:49:24.437157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.878 [2024-12-13 23:49:24.437278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.878 [2024-12-13 23:49:24.437321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:53.878 [2024-12-13 23:49:24.437361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:15:53.878 [2024-12-13 23:49:24.437381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.878 [2024-12-13 23:49:24.449909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.878 [2024-12-13 23:49:24.449995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:53.878 [2024-12-13 23:49:24.450035] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.488 ms 00:15:53.878 [2024-12-13 23:49:24.450055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.878 [2024-12-13 23:49:24.460348] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:15:53.878 [2024-12-13 23:49:24.465790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.878 [2024-12-13 23:49:24.465867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:53.878 [2024-12-13 23:49:24.465913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.664 ms 00:15:53.878 [2024-12-13 23:49:24.465931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.878 [2024-12-13 23:49:24.553452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.878 [2024-12-13 23:49:24.553607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:53.878 [2024-12-13 23:49:24.553671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.489 ms 00:15:53.878 [2024-12-13 23:49:24.553697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.878 [2024-12-13 23:49:24.553762] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:15:53.878 [2024-12-13 23:49:24.553832] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:15:59.168 [2024-12-13 23:49:28.948572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.168 [2024-12-13 23:49:28.948772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:59.168 [2024-12-13 23:49:28.948822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4394.779 ms 00:15:59.168 [2024-12-13 23:49:28.948846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.168 [2024-12-13 23:49:28.949113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.168 [2024-12-13 23:49:28.949280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:59.168 [2024-12-13 23:49:28.949346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:15:59.168 [2024-12-13 23:49:28.949372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.168 [2024-12-13 23:49:28.976512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.168 [2024-12-13 23:49:28.976725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:59.168 [2024-12-13 23:49:28.976995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.062 ms 00:15:59.168 [2024-12-13 23:49:28.977027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.168 [2024-12-13 23:49:29.002858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.168 [2024-12-13 23:49:29.003042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:59.168 [2024-12-13 23:49:29.003142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.770 ms 00:15:59.168 [2024-12-13 23:49:29.003165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.168 [2024-12-13 23:49:29.003586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.168 [2024-12-13 23:49:29.003634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:15:59.168 [2024-12-13 23:49:29.003660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:15:59.168 [2024-12-13 23:49:29.003773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.168 [2024-12-13 23:49:29.077819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.168 [2024-12-13 23:49:29.078001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:59.168 [2024-12-13 23:49:29.078341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.967 ms 00:15:59.168 [2024-12-13 23:49:29.078371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.168 [2024-12-13 23:49:29.107207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.168 [2024-12-13 23:49:29.107378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:59.168 [2024-12-13 23:49:29.107442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.775 ms 00:15:59.168 [2024-12-13 23:49:29.107455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.168 [2024-12-13 23:49:29.109123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.168 [2024-12-13 23:49:29.109294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:59.168 [2024-12-13 23:49:29.109320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.606 ms 00:15:59.168 [2024-12-13 23:49:29.109333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.168 [2024-12-13 23:49:29.136254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.168 [2024-12-13 23:49:29.136429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:59.168 [2024-12-13 23:49:29.136455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.858 ms 00:15:59.168 [2024-12-13 23:49:29.136464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.168 [2024-12-13 23:49:29.136529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.168 [2024-12-13 23:49:29.136540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:59.168 [2024-12-13 23:49:29.136557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:59.168 [2024-12-13 23:49:29.136564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.168 [2024-12-13 23:49:29.136685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.168 [2024-12-13 23:49:29.136698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:59.168 [2024-12-13 23:49:29.136711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:15:59.168 [2024-12-13 23:49:29.136720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.168 [2024-12-13 23:49:29.138061] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4775.199 ms, result 0 00:15:59.168 { 00:15:59.168 "name": "ftl0", 00:15:59.168 "uuid": "fa1b444c-5d17-48f1-ac05-5550c47c2071" 00:15:59.168 } 00:15:59.168 23:49:29 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:15:59.168 23:49:29 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:15:59.168 23:49:29 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:15:59.168 23:49:29 -- ftl/bdevperf.sh@31 
-- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
[2024-12-13 23:49:29.466069] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
I/O size of 69632 is greater than zero copy threshold (65536).
00:15:59.168 Zero copy mechanism will not be used.
00:15:59.168 Running I/O for 4 seconds...
00:16:03.376
00:16:03.376 Latency(us)
00:16:03.376 [2024-12-13T23:49:34.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:03.376 [2024-12-13T23:49:34.108Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:16:03.376 ftl0 : 4.00 748.50 49.70 0.00 0.00 1419.61 516.73 2142.52
00:16:03.376 [2024-12-13T23:49:34.108Z] ===================================================================================================================
00:16:03.376 [2024-12-13T23:49:34.108Z] Total : 748.50 49.70 0.00 0.00 1419.61 516.73 2142.52
00:16:03.376 [2024-12-13 23:49:33.475846] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
0
00:16:03.376 23:49:33 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-12-13 23:49:33.588027] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
00:16:07.587
00:16:07.587 Latency(us)
00:16:07.587 [2024-12-13T23:49:38.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:07.587 [2024-12-13T23:49:38.319Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:16:07.587 ftl0 : 4.03 5854.91 22.87 0.00 0.00 21779.96 384.39 45572.73
00:16:07.587 [2024-12-13T23:49:38.319Z] ===================================================================================================================
00:16:07.587 [2024-12-13T23:49:38.319Z] Total : 5854.91 22.87 0.00 0.00 21779.96 0.00 45572.73
00:16:07.587 0
00:16:07.587 [2024-12-13 23:49:37.623512] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
23:49:37 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-12-13 23:49:37.736124] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
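The MiB/s column in these bdevperf tables is simply IOPS times I/O size. The two completed runs above check out: 748.50 IOPS at 69632 bytes is about 49.70 MiB/s, and 5854.91 IOPS at 4096 bytes is about 22.87 MiB/s. The same check in shell (bc assumed available; illustration only):

    echo "scale=2; 748.50 * 69632 / 1048576" | bc     # 49.70, the -q 1 -o 69632 run
    echo "scale=2; 5854.91 * 4096 / 1048576" | bc     # 22.87, the -q 128 -o 4096 run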
00:16:11.798
00:16:11.798 Latency(us)
00:16:11.798 [2024-12-13T23:49:42.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:11.798 [2024-12-13T23:49:42.530Z] Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:11.798 Verification LBA range: start 0x0 length 0x1400000
00:16:11.798 ftl0 : 4.02 9767.75 38.16 0.00 0.00 13062.20 185.90 26214.40
00:16:11.798 [2024-12-13T23:49:42.530Z] ===================================================================================================================
00:16:11.798 [2024-12-13T23:49:42.530Z] Total : 9767.75 38.16 0.00 0.00 13062.20 0.00 26214.40
00:16:11.798 [2024-12-13 23:49:41.767107] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
0
00:16:11.798 23:49:41 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:16:11.798 [2024-12-13 23:49:41.970445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:11.798 [2024-12-13 23:49:41.970673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:16:11.798 [2024-12-13 23:49:41.970755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:16:11.798 [2024-12-13 23:49:41.970783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:11.798 [2024-12-13 23:49:41.970830] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:16:11.798 [2024-12-13 23:49:41.974070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:11.798 [2024-12-13 23:49:41.974243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:16:11.798 [2024-12-13 23:49:41.974264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.199 ms
00:16:11.798 [2024-12-13 23:49:41.974279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:11.798 [2024-12-13 23:49:41.977344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:11.798 [2024-12-13 23:49:41.977530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:16:11.798 [2024-12-13 23:49:41.977552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.034 ms
00:16:11.798 [2024-12-13 23:49:41.977564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:11.798 [2024-12-13 23:49:42.200739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:11.798 [2024-12-13 23:49:42.200935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:16:11.798 [2024-12-13 23:49:42.200960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 223.151 ms
00:16:11.798 [2024-12-13 23:49:42.200971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:11.798 [2024-12-13 23:49:42.207105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:11.798 [2024-12-13 23:49:42.207148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:16:11.798 [2024-12-13 23:49:42.207161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.096 ms
00:16:11.798 [2024-12-13 23:49:42.207172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:11.798 [2024-12-13 23:49:42.233349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:11.798 [2024-12-13 23:49:42.233408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:16:11.798 [2024-12-13 23:49:42.233421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.096 ms 00:16:11.798 [2024-12-13 23:49:42.233436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.798 [2024-12-13 23:49:42.252831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.798 [2024-12-13 23:49:42.252886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:11.798 [2024-12-13 23:49:42.252900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.335 ms 00:16:11.798 [2024-12-13 23:49:42.252911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.798 [2024-12-13 23:49:42.253061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.798 [2024-12-13 23:49:42.253076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:11.798 [2024-12-13 23:49:42.253086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:16:11.798 [2024-12-13 23:49:42.253096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.798 [2024-12-13 23:49:42.279514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.798 [2024-12-13 23:49:42.279564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:11.798 [2024-12-13 23:49:42.279588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.401 ms 00:16:11.798 [2024-12-13 23:49:42.279598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.798 [2024-12-13 23:49:42.305208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.798 [2024-12-13 23:49:42.305257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:11.798 [2024-12-13 23:49:42.305269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.565 ms 00:16:11.798 [2024-12-13 23:49:42.305282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.798 [2024-12-13 23:49:42.330735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.798 [2024-12-13 23:49:42.330922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:11.798 [2024-12-13 23:49:42.330943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.408 ms 00:16:11.798 [2024-12-13 23:49:42.330952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.798 [2024-12-13 23:49:42.351172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.798 [2024-12-13 23:49:42.351214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:11.798 [2024-12-13 23:49:42.351224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.132 ms 00:16:11.798 [2024-12-13 23:49:42.351233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.798 [2024-12-13 23:49:42.351271] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:11.798 [2024-12-13 23:49:42.351288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:11.798 [2024-12-13 23:49:42.351297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:11.798 [2024-12-13 23:49:42.351306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:11.798 [2024-12-13 23:49:42.351312] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:11.798 [2024-12-13 23:49:42.351320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:11.798 [2024-12-13 23:49:42.351326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:11.798 [2024-12-13 23:49:42.351337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:11.798 [2024-12-13 23:49:42.351344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:11.798 [2024-12-13 23:49:42.351352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:11.798 [2024-12-13 23:49:42.351358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:11.798 [2024-12-13 23:49:42.351367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:11.798 [2024-12-13 23:49:42.351374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:11.798 [2024-12-13 23:49:42.351383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 
23:49:42.351518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:16:11.799 [2024-12-13 23:49:42.351732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.351994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.352001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.352009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.352016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.352024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.352030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.352037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.352046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.352054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.352060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.352069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:11.799 [2024-12-13 23:49:42.352076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:11.800 [2024-12-13 23:49:42.352083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:11.800 [2024-12-13 23:49:42.352089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:11.800 [2024-12-13 23:49:42.352104] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:11.800 [2024-12-13 23:49:42.352111] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fa1b444c-5d17-48f1-ac05-5550c47c2071 00:16:11.800 [2024-12-13 23:49:42.352121] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:11.800 
[2024-12-13 23:49:42.352128] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:11.800 [2024-12-13 23:49:42.352135] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:11.800 [2024-12-13 23:49:42.352142] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:11.800 [2024-12-13 23:49:42.352150] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:11.800 [2024-12-13 23:49:42.352160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:11.800 [2024-12-13 23:49:42.352168] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:11.800 [2024-12-13 23:49:42.352172] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:11.800 [2024-12-13 23:49:42.352179] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:11.800 [2024-12-13 23:49:42.352185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.800 [2024-12-13 23:49:42.352193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:11.800 [2024-12-13 23:49:42.352200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:16:11.800 [2024-12-13 23:49:42.352209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.800 [2024-12-13 23:49:42.363424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.800 [2024-12-13 23:49:42.363467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:11.800 [2024-12-13 23:49:42.363477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.184 ms 00:16:11.800 [2024-12-13 23:49:42.363506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.800 [2024-12-13 23:49:42.363710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.800 [2024-12-13 23:49:42.363724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:11.800 [2024-12-13 23:49:42.363731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:16:11.800 [2024-12-13 23:49:42.363740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.800 [2024-12-13 23:49:42.398694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.800 [2024-12-13 23:49:42.398730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:11.800 [2024-12-13 23:49:42.398741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.800 [2024-12-13 23:49:42.398751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.800 [2024-12-13 23:49:42.398802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.800 [2024-12-13 23:49:42.398812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:11.800 [2024-12-13 23:49:42.398818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.800 [2024-12-13 23:49:42.398827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.800 [2024-12-13 23:49:42.398882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.800 [2024-12-13 23:49:42.398894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:11.800 [2024-12-13 23:49:42.398902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.800 [2024-12-13 23:49:42.398915] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.800 [2024-12-13 23:49:42.398929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.800 [2024-12-13 23:49:42.398937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:11.800 [2024-12-13 23:49:42.398944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.800 [2024-12-13 23:49:42.398957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.800 [2024-12-13 23:49:42.462174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.800 [2024-12-13 23:49:42.462351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:11.800 [2024-12-13 23:49:42.462366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.800 [2024-12-13 23:49:42.462377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.800 [2024-12-13 23:49:42.486915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.800 [2024-12-13 23:49:42.486943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:11.800 [2024-12-13 23:49:42.486951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.800 [2024-12-13 23:49:42.486960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.800 [2024-12-13 23:49:42.487014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.800 [2024-12-13 23:49:42.487024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:11.800 [2024-12-13 23:49:42.487030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.800 [2024-12-13 23:49:42.487041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.800 [2024-12-13 23:49:42.487074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.800 [2024-12-13 23:49:42.487082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:11.800 [2024-12-13 23:49:42.487089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.800 [2024-12-13 23:49:42.487096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.800 [2024-12-13 23:49:42.487167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.800 [2024-12-13 23:49:42.487178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:11.800 [2024-12-13 23:49:42.487184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.800 [2024-12-13 23:49:42.487191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.800 [2024-12-13 23:49:42.487219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.800 [2024-12-13 23:49:42.487229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:11.800 [2024-12-13 23:49:42.487236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.800 [2024-12-13 23:49:42.487244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.800 [2024-12-13 23:49:42.487276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.800 [2024-12-13 23:49:42.487284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:11.800 [2024-12-13 23:49:42.487291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:16:11.800 [2024-12-13 23:49:42.487300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.800 [2024-12-13 23:49:42.487342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.800 [2024-12-13 23:49:42.487351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:11.800 [2024-12-13 23:49:42.487357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.800 [2024-12-13 23:49:42.487365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.800 [2024-12-13 23:49:42.487476] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 517.003 ms, result 0 00:16:11.800 true 00:16:11.800 23:49:42 -- ftl/bdevperf.sh@37 -- # killprocess 71600 00:16:11.800 23:49:42 -- common/autotest_common.sh@936 -- # '[' -z 71600 ']' 00:16:11.800 23:49:42 -- common/autotest_common.sh@940 -- # kill -0 71600 00:16:11.800 23:49:42 -- common/autotest_common.sh@941 -- # uname 00:16:11.800 23:49:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:11.800 23:49:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71600 00:16:12.061 killing process with pid 71600 00:16:12.061 Received shutdown signal, test time was about 4.000000 seconds 00:16:12.061 00:16:12.061 Latency(us) 00:16:12.061 [2024-12-13T23:49:42.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.061 [2024-12-13T23:49:42.793Z] =================================================================================================================== 00:16:12.061 [2024-12-13T23:49:42.793Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:12.061 23:49:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:12.061 23:49:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:12.061 23:49:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71600' 00:16:12.061 23:49:42 -- common/autotest_common.sh@955 -- # kill 71600 00:16:12.061 23:49:42 -- common/autotest_common.sh@960 -- # wait 71600 00:16:17.353 23:49:47 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:16:17.353 23:49:47 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:16:17.353 23:49:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:17.353 23:49:47 -- common/autotest_common.sh@10 -- # set +x 00:16:17.353 Remove shared memory files 00:16:17.353 23:49:47 -- ftl/bdevperf.sh@41 -- # remove_shm 00:16:17.353 23:49:47 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:17.353 23:49:47 -- ftl/common.sh@205 -- # rm -f rm -f 00:16:17.353 23:49:47 -- ftl/common.sh@206 -- # rm -f rm -f 00:16:17.353 23:49:47 -- ftl/common.sh@207 -- # rm -f rm -f 00:16:17.353 23:49:47 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:17.353 23:49:47 -- ftl/common.sh@209 -- # rm -f rm -f 00:16:17.353 ************************************ 00:16:17.353 END TEST ftl_bdevperf 00:16:17.353 ************************************ 00:16:17.353 00:16:17.353 real 0m27.185s 00:16:17.353 user 0m29.307s 00:16:17.353 sys 0m1.045s 00:16:17.353 23:49:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:17.353 23:49:47 -- common/autotest_common.sh@10 -- # set +x 00:16:17.353 23:49:47 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:16:17.353 23:49:47 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 
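Each suite in this job is driven the same way: run_test wraps the suite script, times it, and prints the START TEST / END TEST banners seen here. The invocation for the suite starting below is visible verbatim in the trace:

    run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0

The first PCI address (0000:00:07.0) is used as the base NVMe device and the second (0000:00:06.0) as the NV cache device, as trim.sh's own trace confirms further down.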
00:16:17.353 23:49:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:17.353 23:49:47 -- common/autotest_common.sh@10 -- # set +x 00:16:17.353 ************************************ 00:16:17.353 START TEST ftl_trim 00:16:17.353 ************************************ 00:16:17.353 23:49:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:16:17.353 * Looking for test storage... 00:16:17.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:17.353 23:49:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:17.353 23:49:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:17.353 23:49:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:17.618 23:49:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:17.618 23:49:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:17.618 23:49:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:17.618 23:49:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:17.618 23:49:48 -- scripts/common.sh@335 -- # IFS=.-: 00:16:17.618 23:49:48 -- scripts/common.sh@335 -- # read -ra ver1 00:16:17.618 23:49:48 -- scripts/common.sh@336 -- # IFS=.-: 00:16:17.618 23:49:48 -- scripts/common.sh@336 -- # read -ra ver2 00:16:17.618 23:49:48 -- scripts/common.sh@337 -- # local 'op=<' 00:16:17.618 23:49:48 -- scripts/common.sh@339 -- # ver1_l=2 00:16:17.618 23:49:48 -- scripts/common.sh@340 -- # ver2_l=1 00:16:17.618 23:49:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:17.618 23:49:48 -- scripts/common.sh@343 -- # case "$op" in 00:16:17.618 23:49:48 -- scripts/common.sh@344 -- # : 1 00:16:17.618 23:49:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:17.618 23:49:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:17.618 23:49:48 -- scripts/common.sh@364 -- # decimal 1 00:16:17.618 23:49:48 -- scripts/common.sh@352 -- # local d=1 00:16:17.618 23:49:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:17.618 23:49:48 -- scripts/common.sh@354 -- # echo 1 00:16:17.618 23:49:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:17.618 23:49:48 -- scripts/common.sh@365 -- # decimal 2 00:16:17.618 23:49:48 -- scripts/common.sh@352 -- # local d=2 00:16:17.618 23:49:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:17.618 23:49:48 -- scripts/common.sh@354 -- # echo 2 00:16:17.618 23:49:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:17.618 23:49:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:17.618 23:49:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:17.618 23:49:48 -- scripts/common.sh@367 -- # return 0 00:16:17.618 23:49:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:17.618 23:49:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:17.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.618 --rc genhtml_branch_coverage=1 00:16:17.618 --rc genhtml_function_coverage=1 00:16:17.618 --rc genhtml_legend=1 00:16:17.618 --rc geninfo_all_blocks=1 00:16:17.618 --rc geninfo_unexecuted_blocks=1 00:16:17.618 00:16:17.618 ' 00:16:17.618 23:49:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:17.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.618 --rc genhtml_branch_coverage=1 00:16:17.618 --rc genhtml_function_coverage=1 00:16:17.618 --rc genhtml_legend=1 00:16:17.619 --rc geninfo_all_blocks=1 00:16:17.619 --rc geninfo_unexecuted_blocks=1 00:16:17.619 00:16:17.619 ' 00:16:17.619 23:49:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:17.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.619 --rc genhtml_branch_coverage=1 00:16:17.619 --rc genhtml_function_coverage=1 00:16:17.619 --rc genhtml_legend=1 00:16:17.619 --rc geninfo_all_blocks=1 00:16:17.619 --rc geninfo_unexecuted_blocks=1 00:16:17.619 00:16:17.619 ' 00:16:17.619 23:49:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:17.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.619 --rc genhtml_branch_coverage=1 00:16:17.619 --rc genhtml_function_coverage=1 00:16:17.619 --rc genhtml_legend=1 00:16:17.619 --rc geninfo_all_blocks=1 00:16:17.619 --rc geninfo_unexecuted_blocks=1 00:16:17.619 00:16:17.619 ' 00:16:17.619 23:49:48 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:17.619 23:49:48 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:17.619 23:49:48 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:17.619 23:49:48 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:17.619 23:49:48 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
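The cmp_versions trace above splits each version string on its separators and compares the numeric fields one by one, concluding that the installed lcov 1.15 predates 2.x, so the older --rc option spelling is kept. A compact sketch of the same decision (using sort -V instead of the script's field-by-field loop, so this is an equivalent, not the script's exact method):

    lt() { [ "$1" = "$2" ] && return 1; [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
    lt 1.15 2 && echo "old lcov: keep the --rc lcov_branch_coverage=1 option spelling"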
00:16:17.619 23:49:48 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:17.619 23:49:48 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:17.619 23:49:48 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:17.619 23:49:48 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:17.619 23:49:48 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:17.619 23:49:48 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:17.619 23:49:48 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:17.619 23:49:48 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:17.619 23:49:48 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:17.619 23:49:48 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:17.619 23:49:48 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:17.619 23:49:48 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:17.619 23:49:48 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:17.619 23:49:48 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:17.619 23:49:48 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:17.619 23:49:48 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:17.619 23:49:48 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:17.619 23:49:48 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:17.619 23:49:48 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:17.619 23:49:48 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:17.619 23:49:48 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:17.619 23:49:48 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:17.619 23:49:48 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:17.619 23:49:48 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:17.619 23:49:48 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:17.619 23:49:48 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:16:17.619 23:49:48 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:16:17.619 23:49:48 -- ftl/trim.sh@25 -- # timeout=240 00:16:17.619 23:49:48 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:17.619 23:49:48 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:17.619 23:49:48 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:17.619 23:49:48 -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:17.619 23:49:48 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:17.619 23:49:48 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:17.619 23:49:48 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:17.619 23:49:48 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:17.619 23:49:48 -- ftl/trim.sh@40 -- # svcpid=72025 00:16:17.619 23:49:48 -- ftl/trim.sh@41 -- # waitforlisten 72025 00:16:17.619 23:49:48 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:17.619 23:49:48 -- common/autotest_common.sh@829 -- # '[' -z 72025 ']' 00:16:17.619 23:49:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.619 
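The waitforlisten helper traced around this point blocks until the freshly started target answers on its RPC socket. Reduced to its core, the pattern looks roughly like the sketch below; the real helper also enforces max_retries and cleans up on failure, and using rpc_get_methods as the liveness probe is an assumption for illustration:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &
    svcpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done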
23:49:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:17.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.619 23:49:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.619 23:49:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:17.619 23:49:48 -- common/autotest_common.sh@10 -- # set +x 00:16:17.619 [2024-12-13 23:49:48.250086] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:17.619 [2024-12-13 23:49:48.250330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72025 ] 00:16:17.915 [2024-12-13 23:49:48.395494] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:17.915 [2024-12-13 23:49:48.563835] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:17.915 [2024-12-13 23:49:48.564399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.915 [2024-12-13 23:49:48.564706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.915 [2024-12-13 23:49:48.564681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.300 23:49:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:19.300 23:49:49 -- common/autotest_common.sh@862 -- # return 0 00:16:19.300 23:49:49 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:16:19.300 23:49:49 -- ftl/common.sh@54 -- # local name=nvme0 00:16:19.300 23:49:49 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:16:19.300 23:49:49 -- ftl/common.sh@56 -- # local size=103424 00:16:19.300 23:49:49 -- ftl/common.sh@59 -- # local base_bdev 00:16:19.300 23:49:49 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:16:19.300 23:49:50 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:19.300 23:49:50 -- ftl/common.sh@62 -- # local base_size 00:16:19.300 23:49:50 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:19.300 23:49:50 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:16:19.300 23:49:50 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:19.300 23:49:50 -- common/autotest_common.sh@1369 -- # local bs 00:16:19.300 23:49:50 -- common/autotest_common.sh@1370 -- # local nb 00:16:19.300 23:49:50 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:19.561 23:49:50 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:19.561 { 00:16:19.561 "name": "nvme0n1", 00:16:19.561 "aliases": [ 00:16:19.561 "6af54272-72d1-44c5-ad9d-07c061f53814" 00:16:19.561 ], 00:16:19.561 "product_name": "NVMe disk", 00:16:19.561 "block_size": 4096, 00:16:19.561 "num_blocks": 1310720, 00:16:19.561 "uuid": "6af54272-72d1-44c5-ad9d-07c061f53814", 00:16:19.561 "assigned_rate_limits": { 00:16:19.561 "rw_ios_per_sec": 0, 00:16:19.561 "rw_mbytes_per_sec": 0, 00:16:19.561 "r_mbytes_per_sec": 0, 00:16:19.562 "w_mbytes_per_sec": 0 00:16:19.562 }, 00:16:19.562 "claimed": true, 00:16:19.562 "claim_type": "read_many_write_one", 00:16:19.562 "zoned": false, 00:16:19.562 "supported_io_types": { 00:16:19.562 "read": true, 00:16:19.562 "write": true, 00:16:19.562 "unmap": true, 00:16:19.562 
"write_zeroes": true, 00:16:19.562 "flush": true, 00:16:19.562 "reset": true, 00:16:19.562 "compare": true, 00:16:19.562 "compare_and_write": false, 00:16:19.562 "abort": true, 00:16:19.562 "nvme_admin": true, 00:16:19.562 "nvme_io": true 00:16:19.562 }, 00:16:19.562 "driver_specific": { 00:16:19.562 "nvme": [ 00:16:19.562 { 00:16:19.562 "pci_address": "0000:00:07.0", 00:16:19.562 "trid": { 00:16:19.562 "trtype": "PCIe", 00:16:19.562 "traddr": "0000:00:07.0" 00:16:19.562 }, 00:16:19.562 "ctrlr_data": { 00:16:19.562 "cntlid": 0, 00:16:19.562 "vendor_id": "0x1b36", 00:16:19.562 "model_number": "QEMU NVMe Ctrl", 00:16:19.562 "serial_number": "12341", 00:16:19.562 "firmware_revision": "8.0.0", 00:16:19.562 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:19.562 "oacs": { 00:16:19.562 "security": 0, 00:16:19.562 "format": 1, 00:16:19.562 "firmware": 0, 00:16:19.562 "ns_manage": 1 00:16:19.562 }, 00:16:19.562 "multi_ctrlr": false, 00:16:19.562 "ana_reporting": false 00:16:19.562 }, 00:16:19.562 "vs": { 00:16:19.562 "nvme_version": "1.4" 00:16:19.562 }, 00:16:19.562 "ns_data": { 00:16:19.562 "id": 1, 00:16:19.562 "can_share": false 00:16:19.562 } 00:16:19.562 } 00:16:19.562 ], 00:16:19.562 "mp_policy": "active_passive" 00:16:19.562 } 00:16:19.562 } 00:16:19.562 ]' 00:16:19.562 23:49:50 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:19.562 23:49:50 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:19.562 23:49:50 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:19.562 23:49:50 -- common/autotest_common.sh@1373 -- # nb=1310720 00:16:19.562 23:49:50 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:16:19.562 23:49:50 -- common/autotest_common.sh@1377 -- # echo 5120 00:16:19.562 23:49:50 -- ftl/common.sh@63 -- # base_size=5120 00:16:19.562 23:49:50 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:19.562 23:49:50 -- ftl/common.sh@67 -- # clear_lvols 00:16:19.562 23:49:50 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:19.562 23:49:50 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:19.823 23:49:50 -- ftl/common.sh@28 -- # stores=7a63e27b-c04b-44aa-ac1b-868b5a3a9fed 00:16:19.823 23:49:50 -- ftl/common.sh@29 -- # for lvs in $stores 00:16:19.823 23:49:50 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7a63e27b-c04b-44aa-ac1b-868b5a3a9fed 00:16:20.084 23:49:50 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:20.343 23:49:50 -- ftl/common.sh@68 -- # lvs=1138766e-0956-4a64-aa01-efb9dc24487c 00:16:20.343 23:49:50 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1138766e-0956-4a64-aa01-efb9dc24487c 00:16:20.343 23:49:51 -- ftl/trim.sh@43 -- # split_bdev=490e8c7d-c4d6-4162-8be3-5e5f12272b9b 00:16:20.343 23:49:51 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 490e8c7d-c4d6-4162-8be3-5e5f12272b9b 00:16:20.343 23:49:51 -- ftl/common.sh@35 -- # local name=nvc0 00:16:20.343 23:49:51 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:16:20.343 23:49:51 -- ftl/common.sh@37 -- # local base_bdev=490e8c7d-c4d6-4162-8be3-5e5f12272b9b 00:16:20.343 23:49:51 -- ftl/common.sh@38 -- # local cache_size= 00:16:20.343 23:49:51 -- ftl/common.sh@41 -- # get_bdev_size 490e8c7d-c4d6-4162-8be3-5e5f12272b9b 00:16:20.343 23:49:51 -- common/autotest_common.sh@1367 -- # local bdev_name=490e8c7d-c4d6-4162-8be3-5e5f12272b9b 00:16:20.343 23:49:51 -- 
common/autotest_common.sh@1368 -- # local bdev_info 00:16:20.343 23:49:51 -- common/autotest_common.sh@1369 -- # local bs 00:16:20.343 23:49:51 -- common/autotest_common.sh@1370 -- # local nb 00:16:20.343 23:49:51 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 490e8c7d-c4d6-4162-8be3-5e5f12272b9b 00:16:20.601 23:49:51 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:20.601 { 00:16:20.601 "name": "490e8c7d-c4d6-4162-8be3-5e5f12272b9b", 00:16:20.601 "aliases": [ 00:16:20.601 "lvs/nvme0n1p0" 00:16:20.601 ], 00:16:20.601 "product_name": "Logical Volume", 00:16:20.601 "block_size": 4096, 00:16:20.601 "num_blocks": 26476544, 00:16:20.601 "uuid": "490e8c7d-c4d6-4162-8be3-5e5f12272b9b", 00:16:20.601 "assigned_rate_limits": { 00:16:20.601 "rw_ios_per_sec": 0, 00:16:20.601 "rw_mbytes_per_sec": 0, 00:16:20.601 "r_mbytes_per_sec": 0, 00:16:20.601 "w_mbytes_per_sec": 0 00:16:20.601 }, 00:16:20.601 "claimed": false, 00:16:20.601 "zoned": false, 00:16:20.601 "supported_io_types": { 00:16:20.601 "read": true, 00:16:20.601 "write": true, 00:16:20.601 "unmap": true, 00:16:20.601 "write_zeroes": true, 00:16:20.601 "flush": false, 00:16:20.601 "reset": true, 00:16:20.601 "compare": false, 00:16:20.601 "compare_and_write": false, 00:16:20.601 "abort": false, 00:16:20.601 "nvme_admin": false, 00:16:20.601 "nvme_io": false 00:16:20.601 }, 00:16:20.601 "driver_specific": { 00:16:20.601 "lvol": { 00:16:20.601 "lvol_store_uuid": "1138766e-0956-4a64-aa01-efb9dc24487c", 00:16:20.601 "base_bdev": "nvme0n1", 00:16:20.601 "thin_provision": true, 00:16:20.601 "snapshot": false, 00:16:20.601 "clone": false, 00:16:20.601 "esnap_clone": false 00:16:20.601 } 00:16:20.601 } 00:16:20.601 } 00:16:20.601 ]' 00:16:20.601 23:49:51 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:20.601 23:49:51 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:20.602 23:49:51 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:20.602 23:49:51 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:20.602 23:49:51 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:20.602 23:49:51 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:20.602 23:49:51 -- ftl/common.sh@41 -- # local base_size=5171 00:16:20.602 23:49:51 -- ftl/common.sh@44 -- # local nvc_bdev 00:16:20.602 23:49:51 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:16:20.860 23:49:51 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:20.860 23:49:51 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:20.860 23:49:51 -- ftl/common.sh@48 -- # get_bdev_size 490e8c7d-c4d6-4162-8be3-5e5f12272b9b 00:16:20.860 23:49:51 -- common/autotest_common.sh@1367 -- # local bdev_name=490e8c7d-c4d6-4162-8be3-5e5f12272b9b 00:16:20.860 23:49:51 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:20.860 23:49:51 -- common/autotest_common.sh@1369 -- # local bs 00:16:20.860 23:49:51 -- common/autotest_common.sh@1370 -- # local nb 00:16:20.860 23:49:51 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 490e8c7d-c4d6-4162-8be3-5e5f12272b9b 00:16:21.118 23:49:51 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:21.118 { 00:16:21.118 "name": "490e8c7d-c4d6-4162-8be3-5e5f12272b9b", 00:16:21.118 "aliases": [ 00:16:21.118 "lvs/nvme0n1p0" 00:16:21.118 ], 00:16:21.118 "product_name": "Logical Volume", 00:16:21.118 "block_size": 4096, 00:16:21.118 "num_blocks": 26476544, 
00:16:21.118 "uuid": "490e8c7d-c4d6-4162-8be3-5e5f12272b9b", 00:16:21.118 "assigned_rate_limits": { 00:16:21.118 "rw_ios_per_sec": 0, 00:16:21.118 "rw_mbytes_per_sec": 0, 00:16:21.118 "r_mbytes_per_sec": 0, 00:16:21.118 "w_mbytes_per_sec": 0 00:16:21.118 }, 00:16:21.118 "claimed": false, 00:16:21.118 "zoned": false, 00:16:21.118 "supported_io_types": { 00:16:21.118 "read": true, 00:16:21.118 "write": true, 00:16:21.118 "unmap": true, 00:16:21.118 "write_zeroes": true, 00:16:21.118 "flush": false, 00:16:21.118 "reset": true, 00:16:21.118 "compare": false, 00:16:21.118 "compare_and_write": false, 00:16:21.118 "abort": false, 00:16:21.118 "nvme_admin": false, 00:16:21.118 "nvme_io": false 00:16:21.118 }, 00:16:21.118 "driver_specific": { 00:16:21.118 "lvol": { 00:16:21.118 "lvol_store_uuid": "1138766e-0956-4a64-aa01-efb9dc24487c", 00:16:21.118 "base_bdev": "nvme0n1", 00:16:21.118 "thin_provision": true, 00:16:21.118 "snapshot": false, 00:16:21.118 "clone": false, 00:16:21.118 "esnap_clone": false 00:16:21.118 } 00:16:21.118 } 00:16:21.118 } 00:16:21.118 ]' 00:16:21.118 23:49:51 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:21.119 23:49:51 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:21.119 23:49:51 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:21.119 23:49:51 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:21.119 23:49:51 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:21.119 23:49:51 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:21.119 23:49:51 -- ftl/common.sh@48 -- # cache_size=5171 00:16:21.119 23:49:51 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:21.377 23:49:51 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:21.377 23:49:51 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:21.377 23:49:51 -- ftl/trim.sh@47 -- # get_bdev_size 490e8c7d-c4d6-4162-8be3-5e5f12272b9b 00:16:21.377 23:49:51 -- common/autotest_common.sh@1367 -- # local bdev_name=490e8c7d-c4d6-4162-8be3-5e5f12272b9b 00:16:21.377 23:49:51 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:21.377 23:49:51 -- common/autotest_common.sh@1369 -- # local bs 00:16:21.377 23:49:51 -- common/autotest_common.sh@1370 -- # local nb 00:16:21.377 23:49:51 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 490e8c7d-c4d6-4162-8be3-5e5f12272b9b 00:16:21.635 23:49:52 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:21.635 { 00:16:21.635 "name": "490e8c7d-c4d6-4162-8be3-5e5f12272b9b", 00:16:21.635 "aliases": [ 00:16:21.635 "lvs/nvme0n1p0" 00:16:21.635 ], 00:16:21.635 "product_name": "Logical Volume", 00:16:21.635 "block_size": 4096, 00:16:21.635 "num_blocks": 26476544, 00:16:21.635 "uuid": "490e8c7d-c4d6-4162-8be3-5e5f12272b9b", 00:16:21.635 "assigned_rate_limits": { 00:16:21.635 "rw_ios_per_sec": 0, 00:16:21.635 "rw_mbytes_per_sec": 0, 00:16:21.635 "r_mbytes_per_sec": 0, 00:16:21.635 "w_mbytes_per_sec": 0 00:16:21.635 }, 00:16:21.635 "claimed": false, 00:16:21.635 "zoned": false, 00:16:21.635 "supported_io_types": { 00:16:21.635 "read": true, 00:16:21.635 "write": true, 00:16:21.635 "unmap": true, 00:16:21.635 "write_zeroes": true, 00:16:21.635 "flush": false, 00:16:21.635 "reset": true, 00:16:21.635 "compare": false, 00:16:21.635 "compare_and_write": false, 00:16:21.635 "abort": false, 00:16:21.635 "nvme_admin": false, 00:16:21.635 "nvme_io": false 00:16:21.635 }, 00:16:21.635 "driver_specific": { 00:16:21.635 "lvol": { 00:16:21.636 
"lvol_store_uuid": "1138766e-0956-4a64-aa01-efb9dc24487c", 00:16:21.636 "base_bdev": "nvme0n1", 00:16:21.636 "thin_provision": true, 00:16:21.636 "snapshot": false, 00:16:21.636 "clone": false, 00:16:21.636 "esnap_clone": false 00:16:21.636 } 00:16:21.636 } 00:16:21.636 } 00:16:21.636 ]' 00:16:21.636 23:49:52 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:21.636 23:49:52 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:21.636 23:49:52 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:21.636 23:49:52 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:21.636 23:49:52 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:21.636 23:49:52 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:21.636 23:49:52 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:21.636 23:49:52 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 490e8c7d-c4d6-4162-8be3-5e5f12272b9b -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:21.895 [2024-12-13 23:49:52.406165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.895 [2024-12-13 23:49:52.406205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:21.895 [2024-12-13 23:49:52.406219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:21.895 [2024-12-13 23:49:52.406226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.895 [2024-12-13 23:49:52.408591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.895 [2024-12-13 23:49:52.408619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:21.895 [2024-12-13 23:49:52.408629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.337 ms 00:16:21.895 [2024-12-13 23:49:52.408636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.895 [2024-12-13 23:49:52.408718] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:21.895 [2024-12-13 23:49:52.409281] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:21.895 [2024-12-13 23:49:52.409308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.895 [2024-12-13 23:49:52.409315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:21.895 [2024-12-13 23:49:52.409325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:16:21.895 [2024-12-13 23:49:52.409331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.895 [2024-12-13 23:49:52.409426] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d3f1ad20-b815-42d4-97fb-b3e13195de9e 00:16:21.895 [2024-12-13 23:49:52.410674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.895 [2024-12-13 23:49:52.410702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:21.895 [2024-12-13 23:49:52.410711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:21.895 [2024-12-13 23:49:52.410722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.895 [2024-12-13 23:49:52.417510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.895 [2024-12-13 23:49:52.417537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:21.895 
[2024-12-13 23:49:52.417545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.712 ms 00:16:21.895 [2024-12-13 23:49:52.417552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.895 [2024-12-13 23:49:52.417695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.895 [2024-12-13 23:49:52.417710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:21.895 [2024-12-13 23:49:52.417717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:16:21.895 [2024-12-13 23:49:52.417728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.895 [2024-12-13 23:49:52.417765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.895 [2024-12-13 23:49:52.417773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:21.895 [2024-12-13 23:49:52.417779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:21.895 [2024-12-13 23:49:52.417786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.895 [2024-12-13 23:49:52.417820] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:21.895 [2024-12-13 23:49:52.421143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.895 [2024-12-13 23:49:52.421167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:21.895 [2024-12-13 23:49:52.421177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.327 ms 00:16:21.895 [2024-12-13 23:49:52.421183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.895 [2024-12-13 23:49:52.421247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.895 [2024-12-13 23:49:52.421256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:21.895 [2024-12-13 23:49:52.421264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:21.895 [2024-12-13 23:49:52.421272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.895 [2024-12-13 23:49:52.421307] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:21.895 [2024-12-13 23:49:52.421395] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:21.895 [2024-12-13 23:49:52.421409] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:21.895 [2024-12-13 23:49:52.421418] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:21.895 [2024-12-13 23:49:52.421427] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:21.895 [2024-12-13 23:49:52.421436] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:21.895 [2024-12-13 23:49:52.421445] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:21.895 [2024-12-13 23:49:52.421451] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:21.895 [2024-12-13 23:49:52.421459] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:21.895 [2024-12-13 23:49:52.421464] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:21.895 [2024-12-13 
23:49:52.421471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.895 [2024-12-13 23:49:52.421478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:21.895 [2024-12-13 23:49:52.421506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:16:21.895 [2024-12-13 23:49:52.421511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.895 [2024-12-13 23:49:52.421575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.895 [2024-12-13 23:49:52.421584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:21.895 [2024-12-13 23:49:52.421592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:21.895 [2024-12-13 23:49:52.421598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.895 [2024-12-13 23:49:52.421694] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:21.895 [2024-12-13 23:49:52.421703] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:21.895 [2024-12-13 23:49:52.421712] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:21.895 [2024-12-13 23:49:52.421718] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.895 [2024-12-13 23:49:52.421726] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:21.895 [2024-12-13 23:49:52.421731] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:21.895 [2024-12-13 23:49:52.421737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:21.895 [2024-12-13 23:49:52.421742] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:21.895 [2024-12-13 23:49:52.421750] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:21.895 [2024-12-13 23:49:52.421755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:21.895 [2024-12-13 23:49:52.421762] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:21.895 [2024-12-13 23:49:52.421767] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:21.895 [2024-12-13 23:49:52.421773] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:21.895 [2024-12-13 23:49:52.421778] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:21.895 [2024-12-13 23:49:52.421785] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:21.895 [2024-12-13 23:49:52.421790] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.895 [2024-12-13 23:49:52.421798] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:21.895 [2024-12-13 23:49:52.421804] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:21.895 [2024-12-13 23:49:52.421810] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.895 [2024-12-13 23:49:52.421816] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:21.895 [2024-12-13 23:49:52.421825] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:21.895 [2024-12-13 23:49:52.421830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:21.895 [2024-12-13 23:49:52.421837] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:21.895 [2024-12-13 23:49:52.421842] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 
MiB 00:16:21.895 [2024-12-13 23:49:52.421849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:21.895 [2024-12-13 23:49:52.421854] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:21.895 [2024-12-13 23:49:52.421860] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:21.895 [2024-12-13 23:49:52.421865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:21.895 [2024-12-13 23:49:52.421872] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:21.895 [2024-12-13 23:49:52.421876] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:21.895 [2024-12-13 23:49:52.421882] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:21.895 [2024-12-13 23:49:52.421887] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:21.895 [2024-12-13 23:49:52.421895] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:21.895 [2024-12-13 23:49:52.421900] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:21.895 [2024-12-13 23:49:52.421907] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:21.895 [2024-12-13 23:49:52.421912] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:21.895 [2024-12-13 23:49:52.421919] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:21.895 [2024-12-13 23:49:52.421923] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:21.895 [2024-12-13 23:49:52.421930] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:21.895 [2024-12-13 23:49:52.421934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:21.896 [2024-12-13 23:49:52.421941] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:21.896 [2024-12-13 23:49:52.421948] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:21.896 [2024-12-13 23:49:52.421955] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:21.896 [2024-12-13 23:49:52.421963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.896 [2024-12-13 23:49:52.421971] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:21.896 [2024-12-13 23:49:52.421977] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:21.896 [2024-12-13 23:49:52.421983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:21.896 [2024-12-13 23:49:52.421987] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:21.896 [2024-12-13 23:49:52.421996] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:21.896 [2024-12-13 23:49:52.422001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:21.896 [2024-12-13 23:49:52.422009] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:21.896 [2024-12-13 23:49:52.422016] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:21.896 [2024-12-13 23:49:52.422025] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:21.896 [2024-12-13 23:49:52.422031] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 
ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:21.896 [2024-12-13 23:49:52.422037] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:21.896 [2024-12-13 23:49:52.422044] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:21.896 [2024-12-13 23:49:52.422051] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:21.896 [2024-12-13 23:49:52.422056] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:21.896 [2024-12-13 23:49:52.422062] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:21.896 [2024-12-13 23:49:52.422068] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:21.896 [2024-12-13 23:49:52.422075] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:21.896 [2024-12-13 23:49:52.422080] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:21.896 [2024-12-13 23:49:52.422087] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:21.896 [2024-12-13 23:49:52.422093] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:21.896 [2024-12-13 23:49:52.422103] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:21.896 [2024-12-13 23:49:52.422108] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:21.896 [2024-12-13 23:49:52.422116] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:21.896 [2024-12-13 23:49:52.422122] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:21.896 [2024-12-13 23:49:52.422130] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:21.896 [2024-12-13 23:49:52.422135] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:21.896 [2024-12-13 23:49:52.422142] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:21.896 [2024-12-13 23:49:52.422148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.896 [2024-12-13 23:49:52.422155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:21.896 [2024-12-13 23:49:52.422161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.492 ms 00:16:21.896 [2024-12-13 23:49:52.422168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.896 [2024-12-13 23:49:52.436158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
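The superblock dump above lists each metadata region as blk_offs/blk_sz counted in 4096 B blocks (the block size reported for these bdevs), so the MiB figures in the earlier layout dump can be reproduced by hand. Two spot checks, assuming type:0x2 is the l2p region and type:0x9 the base-device data region, as the matching sizes suggest:

    # NV cache region type:0x2, blk_sz:0x5a00 -> the 90.00 MiB l2p region
    echo $(( 0x5a00 * 4096 / 1024 / 1024 ))      # prints 90
    # base dev region type:0x9, blk_sz:0x1900000 -> the 102400.00 MiB data_btm region
    echo $(( 0x1900000 * 4096 / 1024 / 1024 ))   # prints 102400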
00:16:21.896 [2024-12-13 23:49:52.436274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:21.896 [2024-12-13 23:49:52.436287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.924 ms 00:16:21.896 [2024-12-13 23:49:52.436296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.896 [2024-12-13 23:49:52.436405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.896 [2024-12-13 23:49:52.436418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:21.896 [2024-12-13 23:49:52.436425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:21.896 [2024-12-13 23:49:52.436432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.896 [2024-12-13 23:49:52.464305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.896 [2024-12-13 23:49:52.464341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:21.896 [2024-12-13 23:49:52.464350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.843 ms 00:16:21.896 [2024-12-13 23:49:52.464359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.896 [2024-12-13 23:49:52.464412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.896 [2024-12-13 23:49:52.464421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:21.896 [2024-12-13 23:49:52.464428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:21.896 [2024-12-13 23:49:52.464438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.896 [2024-12-13 23:49:52.464836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.896 [2024-12-13 23:49:52.464857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:21.896 [2024-12-13 23:49:52.464865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:16:21.896 [2024-12-13 23:49:52.464872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.896 [2024-12-13 23:49:52.464975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.896 [2024-12-13 23:49:52.464986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:21.896 [2024-12-13 23:49:52.464992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:21.896 [2024-12-13 23:49:52.465000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.896 [2024-12-13 23:49:52.488551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.896 [2024-12-13 23:49:52.488720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:21.896 [2024-12-13 23:49:52.488745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.518 ms 00:16:21.896 [2024-12-13 23:49:52.488761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.896 [2024-12-13 23:49:52.499167] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:21.896 [2024-12-13 23:49:52.514342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.896 [2024-12-13 23:49:52.514369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:21.896 [2024-12-13 23:49:52.514379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.430 ms 00:16:21.896 
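The "l2p maximum resident size is: 59 (of 60) MiB" notice above is the DRAM cap biting: the full mapping table is larger than the limit passed to bdev_ftl_create. Worked numbers from the layout dump (23592960 L2P entries, 4 B address size):

    # full L2P table size = entries x address size
    echo $(( 23592960 * 4 / 1024 / 1024 ))   # prints 90 (MiB)
    # --l2p_dram_limit 60 caps the resident portion below that, hence "59 (of 60) MiB"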
[2024-12-13 23:49:52.514386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.896 [2024-12-13 23:49:52.590915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.896 [2024-12-13 23:49:52.590955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:21.896 [2024-12-13 23:49:52.590970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.464 ms 00:16:21.896 [2024-12-13 23:49:52.590979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.896 [2024-12-13 23:49:52.591062] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:16:21.896 [2024-12-13 23:49:52.591076] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:24.425 [2024-12-13 23:49:55.121688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.425 [2024-12-13 23:49:55.121737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:24.425 [2024-12-13 23:49:55.121755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2530.614 ms 00:16:24.425 [2024-12-13 23:49:55.121765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.425 [2024-12-13 23:49:55.121988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.425 [2024-12-13 23:49:55.122001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:24.425 [2024-12-13 23:49:55.122013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:16:24.425 [2024-12-13 23:49:55.122021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.425 [2024-12-13 23:49:55.145826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.425 [2024-12-13 23:49:55.145859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:24.425 [2024-12-13 23:49:55.145872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.768 ms 00:16:24.425 [2024-12-13 23:49:55.145880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.683 [2024-12-13 23:49:55.169260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.683 [2024-12-13 23:49:55.169286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:24.683 [2024-12-13 23:49:55.169301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.326 ms 00:16:24.683 [2024-12-13 23:49:55.169307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.683 [2024-12-13 23:49:55.169644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.683 [2024-12-13 23:49:55.169654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:24.683 [2024-12-13 23:49:55.169664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:16:24.683 [2024-12-13 23:49:55.169674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.683 [2024-12-13 23:49:55.232156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.683 [2024-12-13 23:49:55.232186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:24.683 [2024-12-13 23:49:55.232198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.449 ms 00:16:24.683 [2024-12-13 23:49:55.232206] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.683 [2024-12-13 23:49:55.257432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.683 [2024-12-13 23:49:55.257460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:24.683 [2024-12-13 23:49:55.257473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.155 ms 00:16:24.683 [2024-12-13 23:49:55.257490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.683 [2024-12-13 23:49:55.261950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.683 [2024-12-13 23:49:55.261980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:24.683 [2024-12-13 23:49:55.261994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.401 ms 00:16:24.683 [2024-12-13 23:49:55.262001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.683 [2024-12-13 23:49:55.286243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.684 [2024-12-13 23:49:55.286270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:24.684 [2024-12-13 23:49:55.286282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.182 ms 00:16:24.684 [2024-12-13 23:49:55.286288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.684 [2024-12-13 23:49:55.286354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.684 [2024-12-13 23:49:55.286363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:24.684 [2024-12-13 23:49:55.286373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:24.684 [2024-12-13 23:49:55.286382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.684 [2024-12-13 23:49:55.286464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.684 [2024-12-13 23:49:55.286498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:24.684 [2024-12-13 23:49:55.286508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:24.684 [2024-12-13 23:49:55.286514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.684 [2024-12-13 23:49:55.287391] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:24.684 [2024-12-13 23:49:55.290513] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2880.917 ms, result 0 00:16:24.684 [2024-12-13 23:49:55.291758] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:24.684 { 00:16:24.684 "name": "ftl0", 00:16:24.684 "uuid": "d3f1ad20-b815-42d4-97fb-b3e13195de9e" 00:16:24.684 } 00:16:24.684 23:49:55 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:24.684 23:49:55 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:16:24.684 23:49:55 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:24.684 23:49:55 -- common/autotest_common.sh@899 -- # local i 00:16:24.684 23:49:55 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:24.684 23:49:55 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:24.684 23:49:55 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:24.942 23:49:55 -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:25.200 [ 00:16:25.200 { 00:16:25.200 "name": "ftl0", 00:16:25.200 "aliases": [ 00:16:25.200 "d3f1ad20-b815-42d4-97fb-b3e13195de9e" 00:16:25.200 ], 00:16:25.200 "product_name": "FTL disk", 00:16:25.200 "block_size": 4096, 00:16:25.200 "num_blocks": 23592960, 00:16:25.200 "uuid": "d3f1ad20-b815-42d4-97fb-b3e13195de9e", 00:16:25.200 "assigned_rate_limits": { 00:16:25.200 "rw_ios_per_sec": 0, 00:16:25.200 "rw_mbytes_per_sec": 0, 00:16:25.200 "r_mbytes_per_sec": 0, 00:16:25.200 "w_mbytes_per_sec": 0 00:16:25.200 }, 00:16:25.200 "claimed": false, 00:16:25.200 "zoned": false, 00:16:25.200 "supported_io_types": { 00:16:25.200 "read": true, 00:16:25.200 "write": true, 00:16:25.200 "unmap": true, 00:16:25.200 "write_zeroes": true, 00:16:25.200 "flush": true, 00:16:25.200 "reset": false, 00:16:25.200 "compare": false, 00:16:25.200 "compare_and_write": false, 00:16:25.200 "abort": false, 00:16:25.200 "nvme_admin": false, 00:16:25.200 "nvme_io": false 00:16:25.200 }, 00:16:25.200 "driver_specific": { 00:16:25.200 "ftl": { 00:16:25.200 "base_bdev": "490e8c7d-c4d6-4162-8be3-5e5f12272b9b", 00:16:25.200 "cache": "nvc0n1p0" 00:16:25.200 } 00:16:25.200 } 00:16:25.200 } 00:16:25.200 ] 00:16:25.200 23:49:55 -- common/autotest_common.sh@905 -- # return 0 00:16:25.200 23:49:55 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:25.200 23:49:55 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:25.200 23:49:55 -- ftl/trim.sh@56 -- # echo ']}' 00:16:25.200 23:49:55 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:25.458 23:49:56 -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:25.458 { 00:16:25.458 "name": "ftl0", 00:16:25.458 "aliases": [ 00:16:25.458 "d3f1ad20-b815-42d4-97fb-b3e13195de9e" 00:16:25.458 ], 00:16:25.458 "product_name": "FTL disk", 00:16:25.458 "block_size": 4096, 00:16:25.458 "num_blocks": 23592960, 00:16:25.458 "uuid": "d3f1ad20-b815-42d4-97fb-b3e13195de9e", 00:16:25.458 "assigned_rate_limits": { 00:16:25.458 "rw_ios_per_sec": 0, 00:16:25.458 "rw_mbytes_per_sec": 0, 00:16:25.458 "r_mbytes_per_sec": 0, 00:16:25.458 "w_mbytes_per_sec": 0 00:16:25.458 }, 00:16:25.458 "claimed": false, 00:16:25.458 "zoned": false, 00:16:25.458 "supported_io_types": { 00:16:25.459 "read": true, 00:16:25.459 "write": true, 00:16:25.459 "unmap": true, 00:16:25.459 "write_zeroes": true, 00:16:25.459 "flush": true, 00:16:25.459 "reset": false, 00:16:25.459 "compare": false, 00:16:25.459 "compare_and_write": false, 00:16:25.459 "abort": false, 00:16:25.459 "nvme_admin": false, 00:16:25.459 "nvme_io": false 00:16:25.459 }, 00:16:25.459 "driver_specific": { 00:16:25.459 "ftl": { 00:16:25.459 "base_bdev": "490e8c7d-c4d6-4162-8be3-5e5f12272b9b", 00:16:25.459 "cache": "nvc0n1p0" 00:16:25.459 } 00:16:25.459 } 00:16:25.459 } 00:16:25.459 ]' 00:16:25.459 23:49:56 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:25.459 23:49:56 -- ftl/trim.sh@60 -- # nb=23592960 00:16:25.459 23:49:56 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:25.718 [2024-12-13 23:49:56.251198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.718 [2024-12-13 23:49:56.251236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:25.718 [2024-12-13 23:49:56.251247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:25.718 [2024-12-13 23:49:56.251255] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.718 [2024-12-13 23:49:56.251288] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:25.718 [2024-12-13 23:49:56.253501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.718 [2024-12-13 23:49:56.253523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:25.718 [2024-12-13 23:49:56.253533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.198 ms 00:16:25.718 [2024-12-13 23:49:56.253540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.718 [2024-12-13 23:49:56.254037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.718 [2024-12-13 23:49:56.254053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:25.718 [2024-12-13 23:49:56.254064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:16:25.718 [2024-12-13 23:49:56.254070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.718 [2024-12-13 23:49:56.256844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.718 [2024-12-13 23:49:56.256858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:25.718 [2024-12-13 23:49:56.256869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.744 ms 00:16:25.718 [2024-12-13 23:49:56.256877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.718 [2024-12-13 23:49:56.262085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.718 [2024-12-13 23:49:56.262107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:25.718 [2024-12-13 23:49:56.262116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.171 ms 00:16:25.718 [2024-12-13 23:49:56.262122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.718 [2024-12-13 23:49:56.282492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.718 [2024-12-13 23:49:56.282525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:25.718 [2024-12-13 23:49:56.282536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.276 ms 00:16:25.718 [2024-12-13 23:49:56.282541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.718 [2024-12-13 23:49:56.294794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.718 [2024-12-13 23:49:56.294818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:25.718 [2024-12-13 23:49:56.294828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.193 ms 00:16:25.718 [2024-12-13 23:49:56.294835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.718 [2024-12-13 23:49:56.295022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.718 [2024-12-13 23:49:56.295038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:25.718 [2024-12-13 23:49:56.295048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:16:25.718 [2024-12-13 23:49:56.295055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.719 [2024-12-13 23:49:56.313057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.719 [2024-12-13 23:49:56.313080] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:25.719 [2024-12-13 23:49:56.313089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.974 ms 00:16:25.719 [2024-12-13 23:49:56.313095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.719 [2024-12-13 23:49:56.330834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.719 [2024-12-13 23:49:56.330855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:25.719 [2024-12-13 23:49:56.330864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.691 ms 00:16:25.719 [2024-12-13 23:49:56.330870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.719 [2024-12-13 23:49:56.348108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.719 [2024-12-13 23:49:56.348130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:25.719 [2024-12-13 23:49:56.348139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.190 ms 00:16:25.719 [2024-12-13 23:49:56.348144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.719 [2024-12-13 23:49:56.365348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.719 [2024-12-13 23:49:56.365369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:25.719 [2024-12-13 23:49:56.365381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.112 ms 00:16:25.719 [2024-12-13 23:49:56.365386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.719 [2024-12-13 23:49:56.365436] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:25.719 [2024-12-13 23:49:56.365450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365811] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 
23:49:56.365975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.365996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.366001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.366008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.366014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.366022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.366028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.366035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.366040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.366047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.366052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:25.719 [2024-12-13 23:49:56.366060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:16:25.720 [2024-12-13 23:49:56.366137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:25.720 [2024-12-13 23:49:56.366233] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:25.720 [2024-12-13 23:49:56.366241] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d3f1ad20-b815-42d4-97fb-b3e13195de9e 00:16:25.720 [2024-12-13 23:49:56.366247] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:25.720 [2024-12-13 23:49:56.366254] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:25.720 [2024-12-13 23:49:56.366260] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:25.720 [2024-12-13 23:49:56.366267] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:25.720 [2024-12-13 23:49:56.366272] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:25.720 [2024-12-13 23:49:56.366280] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:25.720 [2024-12-13 23:49:56.366288] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:25.720 [2024-12-13 23:49:56.366295] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:25.720 [2024-12-13 23:49:56.366300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:25.720 [2024-12-13 23:49:56.366307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.720 [2024-12-13 23:49:56.366313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:25.720 [2024-12-13 23:49:56.366321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.872 ms 00:16:25.720 [2024-12-13 23:49:56.366327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:25.720 [2024-12-13 23:49:56.376467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.720 [2024-12-13 23:49:56.376496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:25.720 [2024-12-13 23:49:56.376506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.113 ms 00:16:25.720 [2024-12-13 23:49:56.376512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.720 [2024-12-13 23:49:56.376698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.720 [2024-12-13 23:49:56.376711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:25.720 [2024-12-13 23:49:56.376719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:16:25.720 [2024-12-13 23:49:56.376725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.720 [2024-12-13 23:49:56.413485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.720 [2024-12-13 23:49:56.413508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:25.720 [2024-12-13 23:49:56.413520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.720 [2024-12-13 23:49:56.413528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.720 [2024-12-13 23:49:56.413606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.720 [2024-12-13 23:49:56.413614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:25.720 [2024-12-13 23:49:56.413623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.720 [2024-12-13 23:49:56.413629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.720 [2024-12-13 23:49:56.413686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.720 [2024-12-13 23:49:56.413695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:25.720 [2024-12-13 23:49:56.413703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.720 [2024-12-13 23:49:56.413709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.720 [2024-12-13 23:49:56.413738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.720 [2024-12-13 23:49:56.413744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:25.720 [2024-12-13 23:49:56.413752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.720 [2024-12-13 23:49:56.413758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.979 [2024-12-13 23:49:56.482921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.979 [2024-12-13 23:49:56.482953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:25.979 [2024-12-13 23:49:56.482966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.979 [2024-12-13 23:49:56.482975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.979 [2024-12-13 23:49:56.507161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.979 [2024-12-13 23:49:56.507186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:25.979 [2024-12-13 23:49:56.507196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.979 
[2024-12-13 23:49:56.507202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.979 [2024-12-13 23:49:56.507258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.979 [2024-12-13 23:49:56.507266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:25.979 [2024-12-13 23:49:56.507274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.979 [2024-12-13 23:49:56.507279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.979 [2024-12-13 23:49:56.507327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.979 [2024-12-13 23:49:56.507336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:25.979 [2024-12-13 23:49:56.507344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.979 [2024-12-13 23:49:56.507362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.979 [2024-12-13 23:49:56.507458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.979 [2024-12-13 23:49:56.507466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:25.979 [2024-12-13 23:49:56.507476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.979 [2024-12-13 23:49:56.507497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.979 [2024-12-13 23:49:56.507548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.979 [2024-12-13 23:49:56.507556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:25.979 [2024-12-13 23:49:56.507565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.979 [2024-12-13 23:49:56.507571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.979 [2024-12-13 23:49:56.507624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.979 [2024-12-13 23:49:56.507633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:25.979 [2024-12-13 23:49:56.507641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.979 [2024-12-13 23:49:56.507646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.979 [2024-12-13 23:49:56.507698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.979 [2024-12-13 23:49:56.507708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:25.979 [2024-12-13 23:49:56.507716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.979 [2024-12-13 23:49:56.507723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.979 [2024-12-13 23:49:56.507912] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 256.694 ms, result 0 00:16:25.979 true 00:16:25.979 23:49:56 -- ftl/trim.sh@63 -- # killprocess 72025 00:16:25.979 23:49:56 -- common/autotest_common.sh@936 -- # '[' -z 72025 ']' 00:16:25.979 23:49:56 -- common/autotest_common.sh@940 -- # kill -0 72025 00:16:25.979 23:49:56 -- common/autotest_common.sh@941 -- # uname 00:16:25.979 23:49:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:25.979 23:49:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72025 00:16:25.979 killing process with pid 72025 00:16:25.979 23:49:56 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:25.979 23:49:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:25.979 23:49:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72025' 00:16:25.979 23:49:56 -- common/autotest_common.sh@955 -- # kill 72025 00:16:25.979 23:49:56 -- common/autotest_common.sh@960 -- # wait 72025 00:16:32.542 23:50:02 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:33.113 65536+0 records in 00:16:33.113 65536+0 records out 00:16:33.113 268435456 bytes (268 MB, 256 MiB) copied, 1.01615 s, 264 MB/s 00:16:33.113 23:50:03 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:33.113 [2024-12-13 23:50:03.706575] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:33.113 [2024-12-13 23:50:03.706670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72257 ] 00:16:33.372 [2024-12-13 23:50:03.849517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.372 [2024-12-13 23:50:04.017703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.634 [2024-12-13 23:50:04.241802] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:33.634 [2024-12-13 23:50:04.241854] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:33.896 [2024-12-13 23:50:04.390853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.896 [2024-12-13 23:50:04.390889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:33.897 [2024-12-13 23:50:04.390901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:33.897 [2024-12-13 23:50:04.390907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.897 [2024-12-13 23:50:04.393087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.897 [2024-12-13 23:50:04.393118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:33.897 [2024-12-13 23:50:04.393126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.169 ms 00:16:33.897 [2024-12-13 23:50:04.393133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.897 [2024-12-13 23:50:04.393188] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:33.897 [2024-12-13 23:50:04.393759] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:33.897 [2024-12-13 23:50:04.393785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.897 [2024-12-13 23:50:04.393792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:33.897 [2024-12-13 23:50:04.393799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:16:33.897 [2024-12-13 23:50:04.393804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.897 [2024-12-13 23:50:04.395068] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:33.897 [2024-12-13 23:50:04.405469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:33.897 [2024-12-13 23:50:04.405503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:33.897 [2024-12-13 23:50:04.405512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.402 ms 00:16:33.897 [2024-12-13 23:50:04.405519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.897 [2024-12-13 23:50:04.405588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.897 [2024-12-13 23:50:04.405597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:33.897 [2024-12-13 23:50:04.405604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:33.897 [2024-12-13 23:50:04.405610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.897 [2024-12-13 23:50:04.411804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.897 [2024-12-13 23:50:04.411829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:33.897 [2024-12-13 23:50:04.411836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.162 ms 00:16:33.897 [2024-12-13 23:50:04.411845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.897 [2024-12-13 23:50:04.411924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.897 [2024-12-13 23:50:04.411932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:33.897 [2024-12-13 23:50:04.411939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:33.897 [2024-12-13 23:50:04.411945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.897 [2024-12-13 23:50:04.411963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.897 [2024-12-13 23:50:04.411971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:33.897 [2024-12-13 23:50:04.411978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:33.897 [2024-12-13 23:50:04.411984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.897 [2024-12-13 23:50:04.412011] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:33.897 [2024-12-13 23:50:04.415157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.897 [2024-12-13 23:50:04.415179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:33.897 [2024-12-13 23:50:04.415186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.157 ms 00:16:33.897 [2024-12-13 23:50:04.415194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.897 [2024-12-13 23:50:04.415227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.897 [2024-12-13 23:50:04.415234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:33.897 [2024-12-13 23:50:04.415240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:33.897 [2024-12-13 23:50:04.415245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.897 [2024-12-13 23:50:04.415260] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:33.897 [2024-12-13 23:50:04.415275] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:33.897 [2024-12-13 23:50:04.415302] 
upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:33.897 [2024-12-13 23:50:04.415317] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:33.897 [2024-12-13 23:50:04.415376] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:33.897 [2024-12-13 23:50:04.415384] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:33.897 [2024-12-13 23:50:04.415392] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:33.897 [2024-12-13 23:50:04.415401] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:33.897 [2024-12-13 23:50:04.415408] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:33.897 [2024-12-13 23:50:04.415414] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:33.897 [2024-12-13 23:50:04.415419] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:33.897 [2024-12-13 23:50:04.415425] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:33.897 [2024-12-13 23:50:04.415434] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:33.897 [2024-12-13 23:50:04.415439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.897 [2024-12-13 23:50:04.415445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:33.897 [2024-12-13 23:50:04.415451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:16:33.897 [2024-12-13 23:50:04.415456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.897 [2024-12-13 23:50:04.415515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.897 [2024-12-13 23:50:04.415523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:33.897 [2024-12-13 23:50:04.415529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:33.897 [2024-12-13 23:50:04.415535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.897 [2024-12-13 23:50:04.415617] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:33.897 [2024-12-13 23:50:04.415626] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:33.897 [2024-12-13 23:50:04.415633] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:33.897 [2024-12-13 23:50:04.415639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.897 [2024-12-13 23:50:04.415645] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:33.897 [2024-12-13 23:50:04.415650] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:33.897 [2024-12-13 23:50:04.415656] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:33.897 [2024-12-13 23:50:04.415664] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:33.897 [2024-12-13 23:50:04.415670] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:33.897 [2024-12-13 23:50:04.415675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:33.897 [2024-12-13 23:50:04.415680] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:33.897 [2024-12-13 23:50:04.415685] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:33.897 [2024-12-13 23:50:04.415690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:33.897 [2024-12-13 23:50:04.415695] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:33.897 [2024-12-13 23:50:04.415705] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:33.897 [2024-12-13 23:50:04.415710] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.897 [2024-12-13 23:50:04.415716] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:33.897 [2024-12-13 23:50:04.415721] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:33.897 [2024-12-13 23:50:04.415726] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.897 [2024-12-13 23:50:04.415731] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:33.897 [2024-12-13 23:50:04.415736] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:33.897 [2024-12-13 23:50:04.415742] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:33.897 [2024-12-13 23:50:04.415746] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:33.897 [2024-12-13 23:50:04.415752] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:33.897 [2024-12-13 23:50:04.415756] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:33.897 [2024-12-13 23:50:04.415763] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:33.897 [2024-12-13 23:50:04.415768] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:33.897 [2024-12-13 23:50:04.415773] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:33.897 [2024-12-13 23:50:04.415778] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:33.897 [2024-12-13 23:50:04.415783] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:33.897 [2024-12-13 23:50:04.415788] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:33.897 [2024-12-13 23:50:04.415794] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:33.897 [2024-12-13 23:50:04.415798] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:33.897 [2024-12-13 23:50:04.415803] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:33.897 [2024-12-13 23:50:04.415808] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:33.897 [2024-12-13 23:50:04.415813] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:33.897 [2024-12-13 23:50:04.415819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:33.897 [2024-12-13 23:50:04.415824] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:33.897 [2024-12-13 23:50:04.415830] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:33.898 [2024-12-13 23:50:04.415836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:33.898 [2024-12-13 23:50:04.415841] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:33.898 [2024-12-13 23:50:04.415848] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region 
sb_mirror 00:16:33.898 [2024-12-13 23:50:04.415854] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:33.898 [2024-12-13 23:50:04.415862] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.898 [2024-12-13 23:50:04.415868] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:33.898 [2024-12-13 23:50:04.415874] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:33.898 [2024-12-13 23:50:04.415879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:33.898 [2024-12-13 23:50:04.415884] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:33.898 [2024-12-13 23:50:04.415889] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:33.898 [2024-12-13 23:50:04.415895] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:33.898 [2024-12-13 23:50:04.415900] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:33.898 [2024-12-13 23:50:04.415907] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:33.898 [2024-12-13 23:50:04.415914] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:33.898 [2024-12-13 23:50:04.415920] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:33.898 [2024-12-13 23:50:04.415926] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:33.898 [2024-12-13 23:50:04.415931] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:33.898 [2024-12-13 23:50:04.415937] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:33.898 [2024-12-13 23:50:04.415942] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:33.898 [2024-12-13 23:50:04.415947] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:33.898 [2024-12-13 23:50:04.415953] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:33.898 [2024-12-13 23:50:04.415958] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:33.898 [2024-12-13 23:50:04.415963] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:33.898 [2024-12-13 23:50:04.415969] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:33.898 [2024-12-13 23:50:04.415974] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:33.898 [2024-12-13 23:50:04.415981] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:33.898 [2024-12-13 23:50:04.415987] 
upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:33.898 [2024-12-13 23:50:04.415997] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:33.898 [2024-12-13 23:50:04.416005] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:33.898 [2024-12-13 23:50:04.416010] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:33.898 [2024-12-13 23:50:04.416015] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:33.898 [2024-12-13 23:50:04.416021] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:33.898 [2024-12-13 23:50:04.416028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.898 [2024-12-13 23:50:04.416036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:33.898 [2024-12-13 23:50:04.416042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:16:33.898 [2024-12-13 23:50:04.416048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.898 [2024-12-13 23:50:04.429853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.898 [2024-12-13 23:50:04.429880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:33.898 [2024-12-13 23:50:04.429889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.762 ms 00:16:33.898 [2024-12-13 23:50:04.429896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.898 [2024-12-13 23:50:04.429988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.898 [2024-12-13 23:50:04.429996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:33.898 [2024-12-13 23:50:04.430004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:33.898 [2024-12-13 23:50:04.430011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.898 [2024-12-13 23:50:04.471286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.898 [2024-12-13 23:50:04.471318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:33.898 [2024-12-13 23:50:04.471328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.258 ms 00:16:33.898 [2024-12-13 23:50:04.471336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.898 [2024-12-13 23:50:04.471395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.898 [2024-12-13 23:50:04.471404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:33.898 [2024-12-13 23:50:04.471414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:33.898 [2024-12-13 23:50:04.471420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.898 [2024-12-13 23:50:04.471826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.898 [2024-12-13 23:50:04.471872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:33.898 [2024-12-13 23:50:04.471879] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:16:33.898 [2024-12-13 23:50:04.471885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.898 [2024-12-13 23:50:04.471988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.898 [2024-12-13 23:50:04.472001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:33.898 [2024-12-13 23:50:04.472008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:16:33.898 [2024-12-13 23:50:04.472014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.898 [2024-12-13 23:50:04.484962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.898 [2024-12-13 23:50:04.484986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:33.898 [2024-12-13 23:50:04.484993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.929 ms 00:16:33.898 [2024-12-13 23:50:04.485001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.898 [2024-12-13 23:50:04.495852] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:33.898 [2024-12-13 23:50:04.495883] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:33.898 [2024-12-13 23:50:04.495892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.898 [2024-12-13 23:50:04.495899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:33.898 [2024-12-13 23:50:04.495906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.812 ms 00:16:33.898 [2024-12-13 23:50:04.495912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.898 [2024-12-13 23:50:04.514668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.898 [2024-12-13 23:50:04.514694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:33.898 [2024-12-13 23:50:04.514707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.702 ms 00:16:33.898 [2024-12-13 23:50:04.514714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.898 [2024-12-13 23:50:04.524138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.898 [2024-12-13 23:50:04.524163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:33.898 [2024-12-13 23:50:04.524170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.370 ms 00:16:33.898 [2024-12-13 23:50:04.524182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.898 [2024-12-13 23:50:04.533326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.898 [2024-12-13 23:50:04.533350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:33.898 [2024-12-13 23:50:04.533358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.104 ms 00:16:33.898 [2024-12-13 23:50:04.533365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.898 [2024-12-13 23:50:04.533669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.898 [2024-12-13 23:50:04.533680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:33.898 [2024-12-13 23:50:04.533687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:16:33.898 
[2024-12-13 23:50:04.533693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.898 [2024-12-13 23:50:04.582739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.898 [2024-12-13 23:50:04.582770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:33.898 [2024-12-13 23:50:04.582780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.028 ms 00:16:33.898 [2024-12-13 23:50:04.582787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.898 [2024-12-13 23:50:04.590630] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:33.898 [2024-12-13 23:50:04.604953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.898 [2024-12-13 23:50:04.604982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:33.898 [2024-12-13 23:50:04.604993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.102 ms 00:16:33.898 [2024-12-13 23:50:04.605000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.898 [2024-12-13 23:50:04.605063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.898 [2024-12-13 23:50:04.605070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:33.898 [2024-12-13 23:50:04.605077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:33.898 [2024-12-13 23:50:04.605086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.898 [2024-12-13 23:50:04.605128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.898 [2024-12-13 23:50:04.605137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:33.898 [2024-12-13 23:50:04.605144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:33.898 [2024-12-13 23:50:04.605151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.898 [2024-12-13 23:50:04.606183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.898 [2024-12-13 23:50:04.606209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:33.899 [2024-12-13 23:50:04.606217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:16:33.899 [2024-12-13 23:50:04.606223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.899 [2024-12-13 23:50:04.606251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.899 [2024-12-13 23:50:04.606258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:33.899 [2024-12-13 23:50:04.606267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:33.899 [2024-12-13 23:50:04.606273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.899 [2024-12-13 23:50:04.606301] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:33.899 [2024-12-13 23:50:04.606309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.899 [2024-12-13 23:50:04.606315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:33.899 [2024-12-13 23:50:04.606321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:33.899 [2024-12-13 23:50:04.606327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.159 [2024-12-13 
23:50:04.625497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.159 [2024-12-13 23:50:04.625526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:34.159 [2024-12-13 23:50:04.625534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.152 ms 00:16:34.159 [2024-12-13 23:50:04.625541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.159 [2024-12-13 23:50:04.625611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.159 [2024-12-13 23:50:04.625619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:34.159 [2024-12-13 23:50:04.625626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:34.159 [2024-12-13 23:50:04.625632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.159 [2024-12-13 23:50:04.626450] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:34.159 [2024-12-13 23:50:04.628891] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 235.343 ms, result 0 00:16:34.159 [2024-12-13 23:50:04.629813] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:34.159 [2024-12-13 23:50:04.640912] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:35.102  [2024-12-13T23:50:06.779Z] Copying: 19/256 [MB] (19 MBps) [2024-12-13T23:50:07.723Z] Copying: 38/256 [MB] (18 MBps) [2024-12-13T23:50:08.668Z] Copying: 59/256 [MB] (20 MBps) [2024-12-13T23:50:10.050Z] Copying: 70/256 [MB] (11 MBps) [2024-12-13T23:50:10.991Z] Copying: 89/256 [MB] (19 MBps) [2024-12-13T23:50:11.934Z] Copying: 112/256 [MB] (22 MBps) [2024-12-13T23:50:12.878Z] Copying: 135/256 [MB] (22 MBps) [2024-12-13T23:50:13.858Z] Copying: 155/256 [MB] (20 MBps) [2024-12-13T23:50:14.824Z] Copying: 177/256 [MB] (22 MBps) [2024-12-13T23:50:15.767Z] Copying: 199/256 [MB] (21 MBps) [2024-12-13T23:50:16.709Z] Copying: 210/256 [MB] (11 MBps) [2024-12-13T23:50:17.653Z] Copying: 221/256 [MB] (11 MBps) [2024-12-13T23:50:19.040Z] Copying: 232/256 [MB] (11 MBps) [2024-12-13T23:50:19.985Z] Copying: 243/256 [MB] (11 MBps) [2024-12-13T23:50:19.985Z] Copying: 255/256 [MB] (11 MBps) [2024-12-13T23:50:19.985Z] Copying: 256/256 [MB] (average 16 MBps)[2024-12-13 23:50:19.713064] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:49.253 [2024-12-13 23:50:19.720742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.254 [2024-12-13 23:50:19.720773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:49.254 [2024-12-13 23:50:19.720792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:49.254 [2024-12-13 23:50:19.720798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.254 [2024-12-13 23:50:19.720816] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:49.254 [2024-12-13 23:50:19.722928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.254 [2024-12-13 23:50:19.722954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:49.254 [2024-12-13 23:50:19.722962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.102 ms 
00:16:49.254 [2024-12-13 23:50:19.722969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.254 [2024-12-13 23:50:19.725617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.254 [2024-12-13 23:50:19.725643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:49.254 [2024-12-13 23:50:19.725650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.630 ms 00:16:49.254 [2024-12-13 23:50:19.725657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.254 [2024-12-13 23:50:19.732023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.254 [2024-12-13 23:50:19.732048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:49.254 [2024-12-13 23:50:19.732056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.347 ms 00:16:49.254 [2024-12-13 23:50:19.732062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.254 [2024-12-13 23:50:19.737259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.254 [2024-12-13 23:50:19.737282] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:49.254 [2024-12-13 23:50:19.737289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.162 ms 00:16:49.254 [2024-12-13 23:50:19.737296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.254 [2024-12-13 23:50:19.755549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.254 [2024-12-13 23:50:19.755575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:49.254 [2024-12-13 23:50:19.755583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.214 ms 00:16:49.254 [2024-12-13 23:50:19.755589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.254 [2024-12-13 23:50:19.768064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.254 [2024-12-13 23:50:19.768093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:49.254 [2024-12-13 23:50:19.768102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.426 ms 00:16:49.254 [2024-12-13 23:50:19.768108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.254 [2024-12-13 23:50:19.768214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.254 [2024-12-13 23:50:19.768222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:49.254 [2024-12-13 23:50:19.768228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:49.254 [2024-12-13 23:50:19.768234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.254 [2024-12-13 23:50:19.787102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.254 [2024-12-13 23:50:19.787126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:49.254 [2024-12-13 23:50:19.787133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.855 ms 00:16:49.254 [2024-12-13 23:50:19.787139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.254 [2024-12-13 23:50:19.805387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.254 [2024-12-13 23:50:19.805411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:49.254 [2024-12-13 23:50:19.805419] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.214 ms 00:16:49.254 [2024-12-13 23:50:19.805425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.254 [2024-12-13 23:50:19.823034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.254 [2024-12-13 23:50:19.823059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:49.254 [2024-12-13 23:50:19.823067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.576 ms 00:16:49.254 [2024-12-13 23:50:19.823073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.254 [2024-12-13 23:50:19.840872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.254 [2024-12-13 23:50:19.840896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:49.254 [2024-12-13 23:50:19.840904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.743 ms 00:16:49.254 [2024-12-13 23:50:19.840909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.254 [2024-12-13 23:50:19.840943] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:49.254 [2024-12-13 23:50:19.840955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.840964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.840970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.840975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.840981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.840987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.840993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.840998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841048] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 
23:50:19.841196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:49.254 [2024-12-13 23:50:19.841246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:16:49.255 [2024-12-13 23:50:19.841340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:49.255 [2024-12-13 23:50:19.841566] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:49.255 [2024-12-13 23:50:19.841572] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d3f1ad20-b815-42d4-97fb-b3e13195de9e 00:16:49.255 [2024-12-13 23:50:19.841579] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:49.255 [2024-12-13 23:50:19.841585] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:49.255 [2024-12-13 23:50:19.841591] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:49.255 [2024-12-13 23:50:19.841598] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:49.255 [2024-12-13 23:50:19.841603] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:49.255 [2024-12-13 23:50:19.841609] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:49.255 [2024-12-13 23:50:19.841617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:49.255 [2024-12-13 23:50:19.841622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:49.255 [2024-12-13 23:50:19.841627] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:49.255 [2024-12-13 23:50:19.841633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.255 [2024-12-13 23:50:19.841638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:49.255 [2024-12-13 23:50:19.841645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:16:49.255 [2024-12-13 23:50:19.841652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.255 [2024-12-13 23:50:19.851774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.255 [2024-12-13 23:50:19.851797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:49.255 [2024-12-13 23:50:19.851804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.108 ms 00:16:49.255 [2024-12-13 23:50:19.851814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.255 [2024-12-13 23:50:19.851983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:49.255 [2024-12-13 23:50:19.851991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:49.255 [2024-12-13 23:50:19.851998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:16:49.255 [2024-12-13 23:50:19.852003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.255 [2024-12-13 23:50:19.883446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.255 [2024-12-13 23:50:19.883473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:49.255 [2024-12-13 23:50:19.883493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.255 [2024-12-13 23:50:19.883503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.255 [2024-12-13 23:50:19.883569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.255 [2024-12-13 23:50:19.883576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:49.255 [2024-12-13 23:50:19.883582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.255 [2024-12-13 23:50:19.883588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.255 [2024-12-13 23:50:19.883633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.255 [2024-12-13 23:50:19.883641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:49.255 [2024-12-13 23:50:19.883648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.255 [2024-12-13 23:50:19.883654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.255 [2024-12-13 23:50:19.883671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.255 [2024-12-13 23:50:19.883677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:49.255 [2024-12-13 23:50:19.883683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.255 [2024-12-13 23:50:19.883689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.255 [2024-12-13 23:50:19.944233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.255 [2024-12-13 23:50:19.944263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:49.255 [2024-12-13 23:50:19.944272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.255 [2024-12-13 23:50:19.944282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.255 [2024-12-13 23:50:19.968217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.255 [2024-12-13 23:50:19.968245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:49.255 [2024-12-13 23:50:19.968253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.255 [2024-12-13 23:50:19.968260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.255 [2024-12-13 23:50:19.968306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.255 [2024-12-13 23:50:19.968313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:49.255 [2024-12-13 23:50:19.968320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.255 [2024-12-13 23:50:19.968327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:49.255 [2024-12-13 23:50:19.968353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.255 [2024-12-13 23:50:19.968363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:49.256 [2024-12-13 23:50:19.968370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.256 [2024-12-13 23:50:19.968376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.256 [2024-12-13 23:50:19.968455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.256 [2024-12-13 23:50:19.968465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:49.256 [2024-12-13 23:50:19.968471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.256 [2024-12-13 23:50:19.968478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.256 [2024-12-13 23:50:19.968523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.256 [2024-12-13 23:50:19.968534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:49.256 [2024-12-13 23:50:19.968541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.256 [2024-12-13 23:50:19.968548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.256 [2024-12-13 23:50:19.968584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.256 [2024-12-13 23:50:19.968591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:49.256 [2024-12-13 23:50:19.968598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.256 [2024-12-13 23:50:19.968603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.256 [2024-12-13 23:50:19.968645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.256 [2024-12-13 23:50:19.968654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:49.256 [2024-12-13 23:50:19.968663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.256 [2024-12-13 23:50:19.968669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.256 [2024-12-13 23:50:19.968793] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 248.047 ms, result 0 00:16:50.640 00:16:50.640 00:16:50.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.640 23:50:20 -- ftl/trim.sh@72 -- # svcpid=72444 00:16:50.640 23:50:20 -- ftl/trim.sh@73 -- # waitforlisten 72444 00:16:50.640 23:50:20 -- common/autotest_common.sh@829 -- # '[' -z 72444 ']' 00:16:50.640 23:50:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.640 23:50:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.640 23:50:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.640 23:50:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.640 23:50:20 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:50.640 23:50:20 -- common/autotest_common.sh@10 -- # set +x 00:16:50.640 [2024-12-13 23:50:21.043810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
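(Illustrative aside, not part of the captured console output.) The xtrace above shows the pattern the trim test follows: launch spdk_tgt with FTL init logging, wait for the RPC UNIX socket, then drive the target through rpc.py. A minimal stand-alone sketch of that pattern is below, assuming the same build paths seen in this run; the polling loop is a simplified stand-in for the waitforlisten helper from autotest_common.sh, and ftl.json is a placeholder name for whatever saved bdev/FTL configuration load_config is fed.

    SPDK=/home/vagrant/spdk_repo/spdk
    # start the target in the background with FTL init logging enabled
    "$SPDK/build/bin/spdk_tgt" -L ftl_init &
    svcpid=$!
    # wait for the default RPC UNIX domain socket (simplified waitforlisten)
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    # replay the saved configuration (placeholder file), then exercise unmap as trim.sh does
    "$SPDK/scripts/rpc.py" load_config < ftl.json
    "$SPDK/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    # tear the target down once the test steps are done
    kill "$svcpid"; wait "$svcpid"
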
00:16:50.640 [2024-12-13 23:50:21.043930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72444 ] 00:16:50.640 [2024-12-13 23:50:21.194091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.640 [2024-12-13 23:50:21.362670] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:50.640 [2024-12-13 23:50:21.362886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.212 23:50:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.212 23:50:21 -- common/autotest_common.sh@862 -- # return 0 00:16:51.212 23:50:21 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:51.472 [2024-12-13 23:50:22.042721] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:51.472 [2024-12-13 23:50:22.042773] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:51.734 [2024-12-13 23:50:22.207371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.734 [2024-12-13 23:50:22.207408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:51.734 [2024-12-13 23:50:22.207422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:51.734 [2024-12-13 23:50:22.207429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.734 [2024-12-13 23:50:22.209618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.734 [2024-12-13 23:50:22.209647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:51.734 [2024-12-13 23:50:22.209656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.174 ms 00:16:51.734 [2024-12-13 23:50:22.209663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.734 [2024-12-13 23:50:22.209721] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:51.734 [2024-12-13 23:50:22.210281] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:51.734 [2024-12-13 23:50:22.210338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.734 [2024-12-13 23:50:22.210344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:51.734 [2024-12-13 23:50:22.210353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:16:51.734 [2024-12-13 23:50:22.210360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.734 [2024-12-13 23:50:22.211691] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:51.734 [2024-12-13 23:50:22.222341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.734 [2024-12-13 23:50:22.222374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:51.734 [2024-12-13 23:50:22.222382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.656 ms 00:16:51.734 [2024-12-13 23:50:22.222391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.734 [2024-12-13 23:50:22.222459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.734 [2024-12-13 23:50:22.222469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:16:51.734 [2024-12-13 23:50:22.222476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:51.734 [2024-12-13 23:50:22.222497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.734 [2024-12-13 23:50:22.228674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.734 [2024-12-13 23:50:22.228701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:51.734 [2024-12-13 23:50:22.228709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.134 ms 00:16:51.734 [2024-12-13 23:50:22.228717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.734 [2024-12-13 23:50:22.228784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.734 [2024-12-13 23:50:22.228793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:51.734 [2024-12-13 23:50:22.228800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:16:51.734 [2024-12-13 23:50:22.228807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.734 [2024-12-13 23:50:22.228828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.734 [2024-12-13 23:50:22.228837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:51.734 [2024-12-13 23:50:22.228843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:51.734 [2024-12-13 23:50:22.228851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.734 [2024-12-13 23:50:22.228874] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:51.734 [2024-12-13 23:50:22.232036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.734 [2024-12-13 23:50:22.232058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:51.734 [2024-12-13 23:50:22.232067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.168 ms 00:16:51.734 [2024-12-13 23:50:22.232073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.734 [2024-12-13 23:50:22.232107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.734 [2024-12-13 23:50:22.232113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:51.734 [2024-12-13 23:50:22.232121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:51.734 [2024-12-13 23:50:22.232129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.734 [2024-12-13 23:50:22.232147] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:51.734 [2024-12-13 23:50:22.232162] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:51.734 [2024-12-13 23:50:22.232191] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:51.734 [2024-12-13 23:50:22.232204] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:51.734 [2024-12-13 23:50:22.232265] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:51.734 [2024-12-13 23:50:22.232273] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:16:51.734 [2024-12-13 23:50:22.232286] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:51.734 [2024-12-13 23:50:22.232294] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:51.734 [2024-12-13 23:50:22.232302] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:51.734 [2024-12-13 23:50:22.232308] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:51.734 [2024-12-13 23:50:22.232315] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:51.734 [2024-12-13 23:50:22.232322] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:51.734 [2024-12-13 23:50:22.232330] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:51.734 [2024-12-13 23:50:22.232336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.734 [2024-12-13 23:50:22.232343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:51.734 [2024-12-13 23:50:22.232349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:16:51.734 [2024-12-13 23:50:22.232355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.734 [2024-12-13 23:50:22.232407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.734 [2024-12-13 23:50:22.232416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:51.734 [2024-12-13 23:50:22.232421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:51.734 [2024-12-13 23:50:22.232428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.734 [2024-12-13 23:50:22.232506] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:51.734 [2024-12-13 23:50:22.232518] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:51.734 [2024-12-13 23:50:22.232525] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:51.734 [2024-12-13 23:50:22.232532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.734 [2024-12-13 23:50:22.232539] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:51.734 [2024-12-13 23:50:22.232546] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:51.734 [2024-12-13 23:50:22.232552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:51.734 [2024-12-13 23:50:22.232561] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:51.734 [2024-12-13 23:50:22.232567] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:51.734 [2024-12-13 23:50:22.232574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:51.734 [2024-12-13 23:50:22.232581] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:51.734 [2024-12-13 23:50:22.232589] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:51.734 [2024-12-13 23:50:22.232595] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:51.734 [2024-12-13 23:50:22.232603] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:51.734 [2024-12-13 23:50:22.232608] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:51.734 [2024-12-13 23:50:22.232614] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.734 [2024-12-13 23:50:22.232619] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:51.734 [2024-12-13 23:50:22.232626] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:51.734 [2024-12-13 23:50:22.232631] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.734 [2024-12-13 23:50:22.232638] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:51.734 [2024-12-13 23:50:22.232643] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:51.734 [2024-12-13 23:50:22.232650] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:51.734 [2024-12-13 23:50:22.232656] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:51.734 [2024-12-13 23:50:22.232664] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:51.734 [2024-12-13 23:50:22.232669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:51.734 [2024-12-13 23:50:22.232680] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:51.734 [2024-12-13 23:50:22.232686] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:51.734 [2024-12-13 23:50:22.232693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:51.734 [2024-12-13 23:50:22.232698] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:51.735 [2024-12-13 23:50:22.232704] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:51.735 [2024-12-13 23:50:22.232709] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:51.735 [2024-12-13 23:50:22.232716] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:51.735 [2024-12-13 23:50:22.232721] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:51.735 [2024-12-13 23:50:22.232728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:51.735 [2024-12-13 23:50:22.232733] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:51.735 [2024-12-13 23:50:22.232740] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:51.735 [2024-12-13 23:50:22.232744] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:51.735 [2024-12-13 23:50:22.232750] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:51.735 [2024-12-13 23:50:22.232755] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:51.735 [2024-12-13 23:50:22.232763] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:51.735 [2024-12-13 23:50:22.232768] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:51.735 [2024-12-13 23:50:22.232778] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:51.735 [2024-12-13 23:50:22.232785] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:51.735 [2024-12-13 23:50:22.232792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.735 [2024-12-13 23:50:22.232799] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:51.735 [2024-12-13 23:50:22.232806] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:51.735 [2024-12-13 23:50:22.232812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:16:51.735 [2024-12-13 23:50:22.232819] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:51.735 [2024-12-13 23:50:22.232824] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:51.735 [2024-12-13 23:50:22.232831] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:51.735 [2024-12-13 23:50:22.232838] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:51.735 [2024-12-13 23:50:22.232847] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:51.735 [2024-12-13 23:50:22.232854] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:51.735 [2024-12-13 23:50:22.232861] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:51.735 [2024-12-13 23:50:22.232867] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:51.735 [2024-12-13 23:50:22.232876] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:51.735 [2024-12-13 23:50:22.232882] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:51.735 [2024-12-13 23:50:22.232889] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:51.735 [2024-12-13 23:50:22.232895] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:51.735 [2024-12-13 23:50:22.232902] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:51.735 [2024-12-13 23:50:22.232908] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:51.735 [2024-12-13 23:50:22.232915] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:51.735 [2024-12-13 23:50:22.232920] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:51.735 [2024-12-13 23:50:22.232927] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:51.735 [2024-12-13 23:50:22.232932] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:51.735 [2024-12-13 23:50:22.232939] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:51.735 [2024-12-13 23:50:22.232946] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:51.735 [2024-12-13 23:50:22.232953] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:51.735 [2024-12-13 23:50:22.232959] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:51.735 [2024-12-13 23:50:22.232966] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:51.735 [2024-12-13 23:50:22.232971] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:51.735 [2024-12-13 23:50:22.232980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.735 [2024-12-13 23:50:22.232986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:51.735 [2024-12-13 23:50:22.232994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:16:51.735 [2024-12-13 23:50:22.233004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.735 [2024-12-13 23:50:22.246866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.735 [2024-12-13 23:50:22.246893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:51.735 [2024-12-13 23:50:22.246905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.813 ms 00:16:51.735 [2024-12-13 23:50:22.246915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.735 [2024-12-13 23:50:22.247007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.735 [2024-12-13 23:50:22.247015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:51.735 [2024-12-13 23:50:22.247024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:51.735 [2024-12-13 23:50:22.247030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.735 [2024-12-13 23:50:22.273740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.735 [2024-12-13 23:50:22.273765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:51.735 [2024-12-13 23:50:22.273776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.689 ms 00:16:51.735 [2024-12-13 23:50:22.273784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.735 [2024-12-13 23:50:22.273831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.735 [2024-12-13 23:50:22.273841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:51.735 [2024-12-13 23:50:22.273849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:51.735 [2024-12-13 23:50:22.273855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.735 [2024-12-13 23:50:22.274234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.735 [2024-12-13 23:50:22.274252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:51.735 [2024-12-13 23:50:22.274263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:16:51.735 [2024-12-13 23:50:22.274270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.735 [2024-12-13 23:50:22.274371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.735 [2024-12-13 23:50:22.274378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:51.735 [2024-12-13 23:50:22.274388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:16:51.735 [2024-12-13 23:50:22.274394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:51.735 [2024-12-13 23:50:22.288154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.735 [2024-12-13 23:50:22.288177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:51.735 [2024-12-13 23:50:22.288188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.741 ms 00:16:51.735 [2024-12-13 23:50:22.288194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.735 [2024-12-13 23:50:22.298738] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:51.735 [2024-12-13 23:50:22.298763] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:51.735 [2024-12-13 23:50:22.298773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.735 [2024-12-13 23:50:22.298781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:51.735 [2024-12-13 23:50:22.298790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.496 ms 00:16:51.735 [2024-12-13 23:50:22.298795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.735 [2024-12-13 23:50:22.317423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.735 [2024-12-13 23:50:22.317450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:51.735 [2024-12-13 23:50:22.317460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.572 ms 00:16:51.735 [2024-12-13 23:50:22.317467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.735 [2024-12-13 23:50:22.326555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.735 [2024-12-13 23:50:22.326584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:51.735 [2024-12-13 23:50:22.326593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.027 ms 00:16:51.735 [2024-12-13 23:50:22.326599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.735 [2024-12-13 23:50:22.335701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.735 [2024-12-13 23:50:22.335725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:51.735 [2024-12-13 23:50:22.335736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.060 ms 00:16:51.735 [2024-12-13 23:50:22.335742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.735 [2024-12-13 23:50:22.336019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.735 [2024-12-13 23:50:22.336029] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:51.735 [2024-12-13 23:50:22.336039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:16:51.735 [2024-12-13 23:50:22.336045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.735 [2024-12-13 23:50:22.385159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.735 [2024-12-13 23:50:22.385187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:51.735 [2024-12-13 23:50:22.385200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.095 ms 00:16:51.735 [2024-12-13 23:50:22.385207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.735 [2024-12-13 
23:50:22.393196] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:51.735 [2024-12-13 23:50:22.407489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.735 [2024-12-13 23:50:22.407522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:51.736 [2024-12-13 23:50:22.407531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.226 ms 00:16:51.736 [2024-12-13 23:50:22.407540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.736 [2024-12-13 23:50:22.407600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.736 [2024-12-13 23:50:22.407620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:51.736 [2024-12-13 23:50:22.407627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:51.736 [2024-12-13 23:50:22.407637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.736 [2024-12-13 23:50:22.407684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.736 [2024-12-13 23:50:22.407693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:51.736 [2024-12-13 23:50:22.407699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:51.736 [2024-12-13 23:50:22.407707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.736 [2024-12-13 23:50:22.408735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.736 [2024-12-13 23:50:22.408763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:51.736 [2024-12-13 23:50:22.408771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.010 ms 00:16:51.736 [2024-12-13 23:50:22.408779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.736 [2024-12-13 23:50:22.408807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.736 [2024-12-13 23:50:22.408817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:51.736 [2024-12-13 23:50:22.408823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:51.736 [2024-12-13 23:50:22.408832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.736 [2024-12-13 23:50:22.408864] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:51.736 [2024-12-13 23:50:22.408874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.736 [2024-12-13 23:50:22.408880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:51.736 [2024-12-13 23:50:22.408888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:51.736 [2024-12-13 23:50:22.408894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.736 [2024-12-13 23:50:22.428190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.736 [2024-12-13 23:50:22.428216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:51.736 [2024-12-13 23:50:22.428227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.273 ms 00:16:51.736 [2024-12-13 23:50:22.428233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.736 [2024-12-13 23:50:22.428307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.736 [2024-12-13 23:50:22.428315] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:51.736 [2024-12-13 23:50:22.428323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:51.736 [2024-12-13 23:50:22.428331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.736 [2024-12-13 23:50:22.429152] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:51.736 [2024-12-13 23:50:22.431606] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 221.529 ms, result 0 00:16:51.736 [2024-12-13 23:50:22.433653] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:51.736 Some configs were skipped because the RPC state that can call them passed over. 00:16:51.996 23:50:22 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:51.996 [2024-12-13 23:50:22.651916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.996 [2024-12-13 23:50:22.651956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:51.996 [2024-12-13 23:50:22.651966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.355 ms 00:16:51.996 [2024-12-13 23:50:22.651974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.996 [2024-12-13 23:50:22.652004] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 19.444 ms, result 0 00:16:51.996 true 00:16:51.996 23:50:22 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:52.257 [2024-12-13 23:50:22.854694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.257 [2024-12-13 23:50:22.854724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:52.257 [2024-12-13 23:50:22.854735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.499 ms 00:16:52.257 [2024-12-13 23:50:22.854741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.257 [2024-12-13 23:50:22.854772] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.576 ms, result 0 00:16:52.257 true 00:16:52.257 23:50:22 -- ftl/trim.sh@81 -- # killprocess 72444 00:16:52.257 23:50:22 -- common/autotest_common.sh@936 -- # '[' -z 72444 ']' 00:16:52.257 23:50:22 -- common/autotest_common.sh@940 -- # kill -0 72444 00:16:52.257 23:50:22 -- common/autotest_common.sh@941 -- # uname 00:16:52.257 23:50:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:52.257 23:50:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72444 00:16:52.257 killing process with pid 72444 00:16:52.257 23:50:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:52.257 23:50:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:52.257 23:50:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72444' 00:16:52.257 23:50:22 -- common/autotest_common.sh@955 -- # kill 72444 00:16:52.257 23:50:22 -- common/autotest_common.sh@960 -- # wait 72444 00:16:52.833 [2024-12-13 23:50:23.466821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.833 [2024-12-13 23:50:23.466873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit 
core IO channel 00:16:52.833 [2024-12-13 23:50:23.466885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:52.833 [2024-12-13 23:50:23.466893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.833 [2024-12-13 23:50:23.466915] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:52.833 [2024-12-13 23:50:23.469086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.833 [2024-12-13 23:50:23.469112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:52.833 [2024-12-13 23:50:23.469125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.156 ms 00:16:52.833 [2024-12-13 23:50:23.469131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.833 [2024-12-13 23:50:23.469400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.833 [2024-12-13 23:50:23.469409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:52.833 [2024-12-13 23:50:23.469418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:16:52.833 [2024-12-13 23:50:23.469424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.833 [2024-12-13 23:50:23.473052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.833 [2024-12-13 23:50:23.473078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:52.833 [2024-12-13 23:50:23.473090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.611 ms 00:16:52.833 [2024-12-13 23:50:23.473095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.833 [2024-12-13 23:50:23.478399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.833 [2024-12-13 23:50:23.478432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:52.833 [2024-12-13 23:50:23.478442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.271 ms 00:16:52.833 [2024-12-13 23:50:23.478449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.833 [2024-12-13 23:50:23.486744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.833 [2024-12-13 23:50:23.486768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:52.833 [2024-12-13 23:50:23.486779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.231 ms 00:16:52.833 [2024-12-13 23:50:23.486785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.833 [2024-12-13 23:50:23.493850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.833 [2024-12-13 23:50:23.493879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:52.833 [2024-12-13 23:50:23.493889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.031 ms 00:16:52.833 [2024-12-13 23:50:23.493895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.833 [2024-12-13 23:50:23.494010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.833 [2024-12-13 23:50:23.494018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:52.833 [2024-12-13 23:50:23.494027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:16:52.833 [2024-12-13 23:50:23.494033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:52.833 [2024-12-13 23:50:23.502795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.833 [2024-12-13 23:50:23.502819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:52.833 [2024-12-13 23:50:23.502827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.744 ms 00:16:52.833 [2024-12-13 23:50:23.502833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.833 [2024-12-13 23:50:23.511103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.833 [2024-12-13 23:50:23.511126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:52.833 [2024-12-13 23:50:23.511139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.238 ms 00:16:52.833 [2024-12-13 23:50:23.511144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.833 [2024-12-13 23:50:23.518828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.833 [2024-12-13 23:50:23.518851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:52.833 [2024-12-13 23:50:23.518860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.653 ms 00:16:52.833 [2024-12-13 23:50:23.518865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.833 [2024-12-13 23:50:23.526387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.833 [2024-12-13 23:50:23.526410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:52.833 [2024-12-13 23:50:23.526419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.461 ms 00:16:52.833 [2024-12-13 23:50:23.526424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.833 [2024-12-13 23:50:23.526453] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:52.833 [2024-12-13 23:50:23.526465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:52.833 [2024-12-13 23:50:23.526477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:52.833 [2024-12-13 23:50:23.526491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:52.833 [2024-12-13 23:50:23.526499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:52.833 [2024-12-13 23:50:23.526505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:52.833 [2024-12-13 23:50:23.526514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:52.833 [2024-12-13 23:50:23.526520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:52.833 [2024-12-13 23:50:23.526527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:52.833 [2024-12-13 23:50:23.526533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:52.833 [2024-12-13 23:50:23.526541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:52.833 [2024-12-13 23:50:23.526546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:52.833 [2024-12-13 23:50:23.526554] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 
23:50:23.526722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:16:52.834 [2024-12-13 23:50:23.526886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.526999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.527005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.527011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.527018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.527024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.527031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.527036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.527044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.527050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.527057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.527063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.527069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.527076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.527083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.527089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.527098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.527105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.527112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.527123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.527131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.527137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.527145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:52.834 [2024-12-13 23:50:23.527158] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:52.834 [2024-12-13 23:50:23.527167] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d3f1ad20-b815-42d4-97fb-b3e13195de9e 00:16:52.834 [2024-12-13 23:50:23.527173] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:52.834 [2024-12-13 23:50:23.527180] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:52.835 [2024-12-13 23:50:23.527186] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:52.835 [2024-12-13 23:50:23.527195] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:52.835 [2024-12-13 23:50:23.527201] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:52.835 [2024-12-13 23:50:23.527208] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:52.835 [2024-12-13 23:50:23.527214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:52.835 [2024-12-13 23:50:23.527220] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:52.835 [2024-12-13 23:50:23.527225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:52.835 [2024-12-13 23:50:23.527232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.835 [2024-12-13 23:50:23.527239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:52.835 [2024-12-13 23:50:23.527247] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:16:52.835 [2024-12-13 23:50:23.527255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.835 [2024-12-13 23:50:23.537495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.835 [2024-12-13 23:50:23.537518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:52.835 [2024-12-13 23:50:23.537529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.222 ms 00:16:52.835 [2024-12-13 23:50:23.537536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.835 [2024-12-13 23:50:23.537714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.835 [2024-12-13 23:50:23.537723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:52.835 [2024-12-13 23:50:23.537733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:16:52.835 [2024-12-13 23:50:23.537739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.096 [2024-12-13 23:50:23.574802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.096 [2024-12-13 23:50:23.574827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:53.096 [2024-12-13 23:50:23.574837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.096 [2024-12-13 23:50:23.574843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.096 [2024-12-13 23:50:23.574911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.096 [2024-12-13 23:50:23.574918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:53.096 [2024-12-13 23:50:23.574928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.096 [2024-12-13 23:50:23.574934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.096 [2024-12-13 23:50:23.574971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.096 [2024-12-13 23:50:23.574979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:53.096 [2024-12-13 23:50:23.574990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.096 [2024-12-13 23:50:23.574998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.096 [2024-12-13 23:50:23.575015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.096 [2024-12-13 23:50:23.575022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:53.096 [2024-12-13 23:50:23.575029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.096 [2024-12-13 23:50:23.575037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.096 [2024-12-13 23:50:23.639410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.096 [2024-12-13 23:50:23.639447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:53.096 [2024-12-13 23:50:23.639459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.096 [2024-12-13 23:50:23.639466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.096 [2024-12-13 23:50:23.663648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.096 [2024-12-13 23:50:23.663674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:16:53.096 [2024-12-13 23:50:23.663684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.096 [2024-12-13 23:50:23.663693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.096 [2024-12-13 23:50:23.663740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.096 [2024-12-13 23:50:23.663749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:53.096 [2024-12-13 23:50:23.663758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.096 [2024-12-13 23:50:23.663765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.096 [2024-12-13 23:50:23.663794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.096 [2024-12-13 23:50:23.663801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:53.096 [2024-12-13 23:50:23.663808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.096 [2024-12-13 23:50:23.663815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.096 [2024-12-13 23:50:23.663896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.096 [2024-12-13 23:50:23.663906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:53.096 [2024-12-13 23:50:23.663913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.096 [2024-12-13 23:50:23.663919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.096 [2024-12-13 23:50:23.663948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.096 [2024-12-13 23:50:23.663955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:53.096 [2024-12-13 23:50:23.663963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.096 [2024-12-13 23:50:23.663969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.096 [2024-12-13 23:50:23.664011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.096 [2024-12-13 23:50:23.664018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:53.096 [2024-12-13 23:50:23.664027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.096 [2024-12-13 23:50:23.664034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.096 [2024-12-13 23:50:23.664078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.096 [2024-12-13 23:50:23.664086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:53.096 [2024-12-13 23:50:23.664095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.096 [2024-12-13 23:50:23.664100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.097 [2024-12-13 23:50:23.664229] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 197.387 ms, result 0 00:16:53.671 23:50:24 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:53.671 23:50:24 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:53.932 [2024-12-13 23:50:24.410703] Starting SPDK v24.01.1-pre git sha1 
c13c99a5e / DPDK 23.11.0 initialization... 00:16:53.932 [2024-12-13 23:50:24.410824] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72491 ] 00:16:53.932 [2024-12-13 23:50:24.560970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.194 [2024-12-13 23:50:24.728184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.455 [2024-12-13 23:50:24.953959] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:54.456 [2024-12-13 23:50:24.954010] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:54.456 [2024-12-13 23:50:25.102958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.456 [2024-12-13 23:50:25.102996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:54.456 [2024-12-13 23:50:25.103008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:54.456 [2024-12-13 23:50:25.103014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.456 [2024-12-13 23:50:25.105176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.456 [2024-12-13 23:50:25.105207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:54.456 [2024-12-13 23:50:25.105215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.150 ms 00:16:54.456 [2024-12-13 23:50:25.105221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.456 [2024-12-13 23:50:25.105279] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:54.456 [2024-12-13 23:50:25.105856] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:54.456 [2024-12-13 23:50:25.105876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.456 [2024-12-13 23:50:25.105883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:54.456 [2024-12-13 23:50:25.105890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:16:54.456 [2024-12-13 23:50:25.105896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.456 [2024-12-13 23:50:25.107171] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:54.456 [2024-12-13 23:50:25.117816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.456 [2024-12-13 23:50:25.117844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:54.456 [2024-12-13 23:50:25.117853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.646 ms 00:16:54.456 [2024-12-13 23:50:25.117860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.456 [2024-12-13 23:50:25.117932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.456 [2024-12-13 23:50:25.117941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:54.456 [2024-12-13 23:50:25.117948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:54.456 [2024-12-13 23:50:25.117954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.456 [2024-12-13 23:50:25.124158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:54.456 [2024-12-13 23:50:25.124183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:54.456 [2024-12-13 23:50:25.124190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.171 ms 00:16:54.456 [2024-12-13 23:50:25.124199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.456 [2024-12-13 23:50:25.124279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.456 [2024-12-13 23:50:25.124286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:54.456 [2024-12-13 23:50:25.124293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:54.456 [2024-12-13 23:50:25.124299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.456 [2024-12-13 23:50:25.124318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.456 [2024-12-13 23:50:25.124325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:54.456 [2024-12-13 23:50:25.124331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:54.456 [2024-12-13 23:50:25.124338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.456 [2024-12-13 23:50:25.124364] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:54.456 [2024-12-13 23:50:25.127498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.456 [2024-12-13 23:50:25.127521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:54.456 [2024-12-13 23:50:25.127528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.145 ms 00:16:54.456 [2024-12-13 23:50:25.127536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.456 [2024-12-13 23:50:25.127571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.456 [2024-12-13 23:50:25.127577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:54.456 [2024-12-13 23:50:25.127584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:54.456 [2024-12-13 23:50:25.127590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.456 [2024-12-13 23:50:25.127606] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:54.456 [2024-12-13 23:50:25.127629] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:54.456 [2024-12-13 23:50:25.127656] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:54.456 [2024-12-13 23:50:25.127671] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:54.456 [2024-12-13 23:50:25.127730] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:54.456 [2024-12-13 23:50:25.127739] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:54.456 [2024-12-13 23:50:25.127746] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:54.456 [2024-12-13 23:50:25.127754] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:54.456 [2024-12-13 23:50:25.127761] 
ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:54.456 [2024-12-13 23:50:25.127767] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:54.456 [2024-12-13 23:50:25.127773] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:54.456 [2024-12-13 23:50:25.127778] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:54.456 [2024-12-13 23:50:25.127786] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:54.456 [2024-12-13 23:50:25.127791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.456 [2024-12-13 23:50:25.127797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:54.456 [2024-12-13 23:50:25.127803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:16:54.456 [2024-12-13 23:50:25.127809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.456 [2024-12-13 23:50:25.127859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.456 [2024-12-13 23:50:25.127866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:54.456 [2024-12-13 23:50:25.127872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:54.456 [2024-12-13 23:50:25.127877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.456 [2024-12-13 23:50:25.127943] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:54.456 [2024-12-13 23:50:25.127952] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:54.456 [2024-12-13 23:50:25.127959] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:54.456 [2024-12-13 23:50:25.127966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:54.456 [2024-12-13 23:50:25.127971] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:54.456 [2024-12-13 23:50:25.127977] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:54.456 [2024-12-13 23:50:25.127983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:54.456 [2024-12-13 23:50:25.127988] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:54.456 [2024-12-13 23:50:25.127993] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:54.456 [2024-12-13 23:50:25.128001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:54.456 [2024-12-13 23:50:25.128006] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:54.456 [2024-12-13 23:50:25.128012] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:54.456 [2024-12-13 23:50:25.128017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:54.456 [2024-12-13 23:50:25.128022] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:54.456 [2024-12-13 23:50:25.128033] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:54.456 [2024-12-13 23:50:25.128038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:54.456 [2024-12-13 23:50:25.128043] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:54.456 [2024-12-13 23:50:25.128048] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:54.456 [2024-12-13 23:50:25.128052] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:54.456 [2024-12-13 23:50:25.128057] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:54.456 [2024-12-13 23:50:25.128063] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:54.456 [2024-12-13 23:50:25.128069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:54.456 [2024-12-13 23:50:25.128074] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:54.456 [2024-12-13 23:50:25.128080] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:54.456 [2024-12-13 23:50:25.128085] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:54.456 [2024-12-13 23:50:25.128090] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:54.456 [2024-12-13 23:50:25.128094] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:54.456 [2024-12-13 23:50:25.128100] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:54.456 [2024-12-13 23:50:25.128105] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:54.456 [2024-12-13 23:50:25.128110] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:54.456 [2024-12-13 23:50:25.128114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:54.456 [2024-12-13 23:50:25.128119] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:54.456 [2024-12-13 23:50:25.128124] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:54.456 [2024-12-13 23:50:25.128129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:54.456 [2024-12-13 23:50:25.128135] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:54.456 [2024-12-13 23:50:25.128140] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:54.456 [2024-12-13 23:50:25.128145] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:54.456 [2024-12-13 23:50:25.128150] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:54.456 [2024-12-13 23:50:25.128154] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:54.456 [2024-12-13 23:50:25.128159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:54.456 [2024-12-13 23:50:25.128163] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:54.456 [2024-12-13 23:50:25.128171] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:54.457 [2024-12-13 23:50:25.128177] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:54.457 [2024-12-13 23:50:25.128186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:54.457 [2024-12-13 23:50:25.128192] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:54.457 [2024-12-13 23:50:25.128197] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:54.457 [2024-12-13 23:50:25.128202] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:54.457 [2024-12-13 23:50:25.128207] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:54.457 [2024-12-13 23:50:25.128212] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:54.457 [2024-12-13 23:50:25.128217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:54.457 
[2024-12-13 23:50:25.128223] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:54.457 [2024-12-13 23:50:25.128231] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:54.457 [2024-12-13 23:50:25.128237] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:54.457 [2024-12-13 23:50:25.128242] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:54.457 [2024-12-13 23:50:25.128247] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:54.457 [2024-12-13 23:50:25.128253] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:54.457 [2024-12-13 23:50:25.128258] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:54.457 [2024-12-13 23:50:25.128265] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:54.457 [2024-12-13 23:50:25.128270] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:54.457 [2024-12-13 23:50:25.128276] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:54.457 [2024-12-13 23:50:25.128281] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:54.457 [2024-12-13 23:50:25.128286] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:54.457 [2024-12-13 23:50:25.128291] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:54.457 [2024-12-13 23:50:25.128297] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:54.457 [2024-12-13 23:50:25.128302] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:54.457 [2024-12-13 23:50:25.128309] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:54.457 [2024-12-13 23:50:25.128318] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:54.457 [2024-12-13 23:50:25.128325] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:54.457 [2024-12-13 23:50:25.128330] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:54.457 [2024-12-13 23:50:25.128336] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:54.457 [2024-12-13 23:50:25.128342] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:54.457 [2024-12-13 23:50:25.128347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.457 [2024-12-13 23:50:25.128355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:54.457 [2024-12-13 23:50:25.128365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:16:54.457 [2024-12-13 23:50:25.128372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.457 [2024-12-13 23:50:25.142128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.457 [2024-12-13 23:50:25.142158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:54.457 [2024-12-13 23:50:25.142166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.713 ms 00:16:54.457 [2024-12-13 23:50:25.142174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.457 [2024-12-13 23:50:25.142268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.457 [2024-12-13 23:50:25.142276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:54.457 [2024-12-13 23:50:25.142283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:54.457 [2024-12-13 23:50:25.142290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.457 [2024-12-13 23:50:25.181190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.457 [2024-12-13 23:50:25.181222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:54.457 [2024-12-13 23:50:25.181233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.882 ms 00:16:54.457 [2024-12-13 23:50:25.181241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.457 [2024-12-13 23:50:25.181299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.457 [2024-12-13 23:50:25.181308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:54.457 [2024-12-13 23:50:25.181318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:54.457 [2024-12-13 23:50:25.181323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.457 [2024-12-13 23:50:25.181723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.457 [2024-12-13 23:50:25.181744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:54.457 [2024-12-13 23:50:25.181752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:16:54.457 [2024-12-13 23:50:25.181759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.457 [2024-12-13 23:50:25.181862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.457 [2024-12-13 23:50:25.181876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:54.457 [2024-12-13 23:50:25.181883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:16:54.457 [2024-12-13 23:50:25.181890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.718 [2024-12-13 23:50:25.194878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.718 [2024-12-13 23:50:25.194902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:54.719 [2024-12-13 23:50:25.194911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
12.968 ms 00:16:54.719 [2024-12-13 23:50:25.194919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.719 [2024-12-13 23:50:25.205870] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:54.719 [2024-12-13 23:50:25.205896] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:54.719 [2024-12-13 23:50:25.205906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.719 [2024-12-13 23:50:25.205913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:54.719 [2024-12-13 23:50:25.205920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.908 ms 00:16:54.719 [2024-12-13 23:50:25.205926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.719 [2024-12-13 23:50:25.224780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.719 [2024-12-13 23:50:25.224809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:54.719 [2024-12-13 23:50:25.224818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.801 ms 00:16:54.719 [2024-12-13 23:50:25.224825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.719 [2024-12-13 23:50:25.234069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.719 [2024-12-13 23:50:25.234094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:54.719 [2024-12-13 23:50:25.234108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.192 ms 00:16:54.719 [2024-12-13 23:50:25.234114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.719 [2024-12-13 23:50:25.243111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.719 [2024-12-13 23:50:25.243135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:54.719 [2024-12-13 23:50:25.243143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.957 ms 00:16:54.719 [2024-12-13 23:50:25.243150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.719 [2024-12-13 23:50:25.243429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.719 [2024-12-13 23:50:25.243444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:54.719 [2024-12-13 23:50:25.243452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:16:54.719 [2024-12-13 23:50:25.243460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.719 [2024-12-13 23:50:25.292416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.719 [2024-12-13 23:50:25.292448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:54.719 [2024-12-13 23:50:25.292457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.939 ms 00:16:54.719 [2024-12-13 23:50:25.292467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.719 [2024-12-13 23:50:25.300424] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:54.719 [2024-12-13 23:50:25.315146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.719 [2024-12-13 23:50:25.315173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:54.719 [2024-12-13 
23:50:25.315182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.601 ms 00:16:54.719 [2024-12-13 23:50:25.315189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.719 [2024-12-13 23:50:25.315249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.719 [2024-12-13 23:50:25.315257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:54.719 [2024-12-13 23:50:25.315267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:54.719 [2024-12-13 23:50:25.315274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.719 [2024-12-13 23:50:25.315319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.719 [2024-12-13 23:50:25.315325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:54.719 [2024-12-13 23:50:25.315331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:54.719 [2024-12-13 23:50:25.315338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.719 [2024-12-13 23:50:25.316380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.719 [2024-12-13 23:50:25.316408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:54.719 [2024-12-13 23:50:25.316415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.025 ms 00:16:54.719 [2024-12-13 23:50:25.316421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.719 [2024-12-13 23:50:25.316449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.719 [2024-12-13 23:50:25.316457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:54.719 [2024-12-13 23:50:25.316464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:54.719 [2024-12-13 23:50:25.316470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.719 [2024-12-13 23:50:25.316514] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:54.719 [2024-12-13 23:50:25.316523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.719 [2024-12-13 23:50:25.316529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:54.719 [2024-12-13 23:50:25.316536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:54.719 [2024-12-13 23:50:25.316542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.719 [2024-12-13 23:50:25.335630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.719 [2024-12-13 23:50:25.335658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:54.719 [2024-12-13 23:50:25.335666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.069 ms 00:16:54.719 [2024-12-13 23:50:25.335673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.719 [2024-12-13 23:50:25.335744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.719 [2024-12-13 23:50:25.335752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:54.719 [2024-12-13 23:50:25.335760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:54.719 [2024-12-13 23:50:25.335766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.719 [2024-12-13 23:50:25.336531] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:54.719 [2024-12-13 23:50:25.338954] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 233.306 ms, result 0 00:16:54.719 [2024-12-13 23:50:25.340105] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:54.719 [2024-12-13 23:50:25.351036] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:55.664  [2024-12-13T23:50:27.786Z] Copying: 14/256 [MB] (14 MBps) [2024-12-13T23:50:28.359Z] Copying: 25/256 [MB] (11 MBps) [2024-12-13T23:50:29.749Z] Copying: 37/256 [MB] (11 MBps) [2024-12-13T23:50:30.693Z] Copying: 48/256 [MB] (11 MBps) [2024-12-13T23:50:31.638Z] Copying: 60/256 [MB] (11 MBps) [2024-12-13T23:50:32.583Z] Copying: 71/256 [MB] (11 MBps) [2024-12-13T23:50:33.528Z] Copying: 83/256 [MB] (11 MBps) [2024-12-13T23:50:34.474Z] Copying: 94/256 [MB] (11 MBps) [2024-12-13T23:50:35.419Z] Copying: 106/256 [MB] (11 MBps) [2024-12-13T23:50:36.388Z] Copying: 117/256 [MB] (11 MBps) [2024-12-13T23:50:37.776Z] Copying: 128/256 [MB] (11 MBps) [2024-12-13T23:50:38.721Z] Copying: 140/256 [MB] (11 MBps) [2024-12-13T23:50:39.665Z] Copying: 152/256 [MB] (11 MBps) [2024-12-13T23:50:40.610Z] Copying: 163/256 [MB] (11 MBps) [2024-12-13T23:50:41.555Z] Copying: 175/256 [MB] (11 MBps) [2024-12-13T23:50:42.501Z] Copying: 186/256 [MB] (11 MBps) [2024-12-13T23:50:43.446Z] Copying: 197/256 [MB] (11 MBps) [2024-12-13T23:50:44.391Z] Copying: 209/256 [MB] (11 MBps) [2024-12-13T23:50:45.780Z] Copying: 221/256 [MB] (11 MBps) [2024-12-13T23:50:46.724Z] Copying: 232/256 [MB] (11 MBps) [2024-12-13T23:50:47.670Z] Copying: 244/256 [MB] (11 MBps) [2024-12-13T23:50:47.670Z] Copying: 256/256 [MB] (average 11 MBps)[2024-12-13 23:50:47.343543] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:16.938 [2024-12-13 23:50:47.350945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.938 [2024-12-13 23:50:47.350982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:16.938 [2024-12-13 23:50:47.350994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:16.938 [2024-12-13 23:50:47.351000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.938 [2024-12-13 23:50:47.351018] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:16.938 [2024-12-13 23:50:47.353187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.938 [2024-12-13 23:50:47.353212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:16.938 [2024-12-13 23:50:47.353221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.159 ms 00:17:16.938 [2024-12-13 23:50:47.353227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.938 [2024-12-13 23:50:47.353442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.938 [2024-12-13 23:50:47.353451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:16.938 [2024-12-13 23:50:47.353458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:17:16.938 [2024-12-13 23:50:47.353466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.938 [2024-12-13 
23:50:47.356254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.938 [2024-12-13 23:50:47.356271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:16.938 [2024-12-13 23:50:47.356279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.776 ms 00:17:16.938 [2024-12-13 23:50:47.356285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.938 [2024-12-13 23:50:47.361501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.938 [2024-12-13 23:50:47.361525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:16.938 [2024-12-13 23:50:47.361533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.193 ms 00:17:16.938 [2024-12-13 23:50:47.361540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.938 [2024-12-13 23:50:47.380202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.938 [2024-12-13 23:50:47.380230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:16.938 [2024-12-13 23:50:47.380239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.622 ms 00:17:16.938 [2024-12-13 23:50:47.380245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.938 [2024-12-13 23:50:47.392989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.938 [2024-12-13 23:50:47.393015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:16.938 [2024-12-13 23:50:47.393025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.707 ms 00:17:16.938 [2024-12-13 23:50:47.393032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.938 [2024-12-13 23:50:47.393138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.938 [2024-12-13 23:50:47.393146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:16.938 [2024-12-13 23:50:47.393153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:16.938 [2024-12-13 23:50:47.393160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.938 [2024-12-13 23:50:47.412153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.938 [2024-12-13 23:50:47.412178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:16.938 [2024-12-13 23:50:47.412186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.980 ms 00:17:16.938 [2024-12-13 23:50:47.412192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.938 [2024-12-13 23:50:47.430511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.938 [2024-12-13 23:50:47.430535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:16.938 [2024-12-13 23:50:47.430543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.283 ms 00:17:16.938 [2024-12-13 23:50:47.430548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.938 [2024-12-13 23:50:47.448343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.938 [2024-12-13 23:50:47.448368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:16.938 [2024-12-13 23:50:47.448376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.760 ms 00:17:16.938 [2024-12-13 23:50:47.448381] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.938 [2024-12-13 23:50:47.466120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.938 [2024-12-13 23:50:47.466145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:16.938 [2024-12-13 23:50:47.466153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.683 ms 00:17:16.938 [2024-12-13 23:50:47.466158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.938 [2024-12-13 23:50:47.466193] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:16.938 [2024-12-13 23:50:47.466205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:16.938 [2024-12-13 23:50:47.466213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:16.938 [2024-12-13 23:50:47.466219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:16.938 [2024-12-13 23:50:47.466225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:16.938 [2024-12-13 23:50:47.466231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:16.938 [2024-12-13 23:50:47.466236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:16.938 [2024-12-13 23:50:47.466242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:16.938 [2024-12-13 23:50:47.466248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466324] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466465] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 
23:50:47.466620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 
00:17:16.939 [2024-12-13 23:50:47.466770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:16.939 [2024-12-13 23:50:47.466798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:16.940 [2024-12-13 23:50:47.466810] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:16.940 [2024-12-13 23:50:47.466816] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d3f1ad20-b815-42d4-97fb-b3e13195de9e 00:17:16.940 [2024-12-13 23:50:47.466822] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:16.940 [2024-12-13 23:50:47.466828] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:16.940 [2024-12-13 23:50:47.466833] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:16.940 [2024-12-13 23:50:47.466839] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:16.940 [2024-12-13 23:50:47.466844] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:16.940 [2024-12-13 23:50:47.466853] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:16.940 [2024-12-13 23:50:47.466859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:16.940 [2024-12-13 23:50:47.466863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:16.940 [2024-12-13 23:50:47.466869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:16.940 [2024-12-13 23:50:47.466875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.940 [2024-12-13 23:50:47.466881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:16.940 [2024-12-13 23:50:47.466887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:17:16.940 [2024-12-13 23:50:47.466893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.940 [2024-12-13 23:50:47.477011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.940 [2024-12-13 23:50:47.477035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:16.940 [2024-12-13 23:50:47.477047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.105 ms 00:17:16.940 [2024-12-13 23:50:47.477053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.940 [2024-12-13 23:50:47.477225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.940 [2024-12-13 23:50:47.477233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:16.940 [2024-12-13 23:50:47.477239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:17:16.940 [2024-12-13 23:50:47.477245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.940 [2024-12-13 23:50:47.508604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.940 [2024-12-13 23:50:47.508633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:17:16.940 [2024-12-13 23:50:47.508646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.940 [2024-12-13 23:50:47.508652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.940 [2024-12-13 23:50:47.508721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.940 [2024-12-13 23:50:47.508728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:16.940 [2024-12-13 23:50:47.508734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.940 [2024-12-13 23:50:47.508740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.940 [2024-12-13 23:50:47.508774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.940 [2024-12-13 23:50:47.508780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:16.940 [2024-12-13 23:50:47.508787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.940 [2024-12-13 23:50:47.508795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.940 [2024-12-13 23:50:47.508809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.940 [2024-12-13 23:50:47.508815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:16.940 [2024-12-13 23:50:47.508821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.940 [2024-12-13 23:50:47.508826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.940 [2024-12-13 23:50:47.569405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.940 [2024-12-13 23:50:47.569437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:16.940 [2024-12-13 23:50:47.569450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.940 [2024-12-13 23:50:47.569456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.940 [2024-12-13 23:50:47.593188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.940 [2024-12-13 23:50:47.593216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:16.940 [2024-12-13 23:50:47.593225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.940 [2024-12-13 23:50:47.593232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.940 [2024-12-13 23:50:47.593276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.940 [2024-12-13 23:50:47.593284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:16.940 [2024-12-13 23:50:47.593291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.940 [2024-12-13 23:50:47.593297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.940 [2024-12-13 23:50:47.593325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.940 [2024-12-13 23:50:47.593332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:16.940 [2024-12-13 23:50:47.593338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.940 [2024-12-13 23:50:47.593344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.940 [2024-12-13 23:50:47.593421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.940 [2024-12-13 23:50:47.593430] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:16.940 [2024-12-13 23:50:47.593437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.940 [2024-12-13 23:50:47.593443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.940 [2024-12-13 23:50:47.593470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.940 [2024-12-13 23:50:47.593478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:16.940 [2024-12-13 23:50:47.593498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.940 [2024-12-13 23:50:47.593504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.940 [2024-12-13 23:50:47.593538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.940 [2024-12-13 23:50:47.593545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:16.940 [2024-12-13 23:50:47.593551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.940 [2024-12-13 23:50:47.593557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.940 [2024-12-13 23:50:47.593598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.940 [2024-12-13 23:50:47.593609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:16.940 [2024-12-13 23:50:47.593616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.940 [2024-12-13 23:50:47.593623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.940 [2024-12-13 23:50:47.593749] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 242.791 ms, result 0 00:17:17.884 00:17:17.884 00:17:17.884 23:50:48 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:17.885 23:50:48 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:18.146 23:50:48 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:18.407 [2024-12-13 23:50:48.930124] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
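The three trim.sh commands just above are the data-verification step of this test: cmp checks that the first 4 MiB (4194304 bytes) of the data file, presumably the buffer read back from the ftl0 bdev earlier in the run, is still zero-filled; md5sum records a checksum of the same file; and spdk_dd then writes a 1024-block random pattern through the ftl0 bdev (the SPDK application whose startup banner appears just above and whose EAL initialization output follows). A minimal shell sketch of that flow, using only the paths, sizes, and spdk_dd options visible in the log (the wrapper itself is illustrative and is not part of the SPDK test suite):

  DATA=/home/vagrant/spdk_repo/spdk/test/ftl/data

  # Verify the first 4 MiB of the read-back file is all zeroes;
  # cmp exits non-zero at the first differing byte.
  cmp --bytes=4194304 "$DATA" /dev/zero

  # Record a checksum of the full file for later comparison.
  md5sum "$DATA"

  # Write a 1024-block random pattern to the ftl0 bdev, using the same
  # bdev configuration (ftl.json) and options as the log line above.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
      --ob=ftl0 --count=1024 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json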
00:17:18.407 [2024-12-13 23:50:48.930264] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72750 ] 00:17:18.407 [2024-12-13 23:50:49.079652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.669 [2024-12-13 23:50:49.261100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.930 [2024-12-13 23:50:49.487974] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:18.930 [2024-12-13 23:50:49.488031] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:18.930 [2024-12-13 23:50:49.637392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.930 [2024-12-13 23:50:49.637434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:18.930 [2024-12-13 23:50:49.637445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:18.930 [2024-12-13 23:50:49.637451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.930 [2024-12-13 23:50:49.639707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.930 [2024-12-13 23:50:49.639739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:18.930 [2024-12-13 23:50:49.639747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.243 ms 00:17:18.930 [2024-12-13 23:50:49.639753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.930 [2024-12-13 23:50:49.639811] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:18.930 [2024-12-13 23:50:49.640373] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:18.930 [2024-12-13 23:50:49.640396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.930 [2024-12-13 23:50:49.640402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:18.930 [2024-12-13 23:50:49.640409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:17:18.930 [2024-12-13 23:50:49.640415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.930 [2024-12-13 23:50:49.641795] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:18.930 [2024-12-13 23:50:49.652177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.930 [2024-12-13 23:50:49.652207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:18.930 [2024-12-13 23:50:49.652216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.383 ms 00:17:18.930 [2024-12-13 23:50:49.652223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.930 [2024-12-13 23:50:49.652295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.930 [2024-12-13 23:50:49.652304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:18.930 [2024-12-13 23:50:49.652311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:18.930 [2024-12-13 23:50:49.652317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.930 [2024-12-13 23:50:49.658639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.930 [2024-12-13 
23:50:49.658665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:18.930 [2024-12-13 23:50:49.658672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.291 ms 00:17:18.930 [2024-12-13 23:50:49.658681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.930 [2024-12-13 23:50:49.658758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.930 [2024-12-13 23:50:49.658766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:18.930 [2024-12-13 23:50:49.658773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:18.930 [2024-12-13 23:50:49.658779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.930 [2024-12-13 23:50:49.658798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.930 [2024-12-13 23:50:49.658804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:18.930 [2024-12-13 23:50:49.658811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:18.930 [2024-12-13 23:50:49.658816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.930 [2024-12-13 23:50:49.658842] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:19.193 [2024-12-13 23:50:49.662041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.193 [2024-12-13 23:50:49.662067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:19.193 [2024-12-13 23:50:49.662075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.210 ms 00:17:19.193 [2024-12-13 23:50:49.662083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.193 [2024-12-13 23:50:49.662116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.193 [2024-12-13 23:50:49.662123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:19.193 [2024-12-13 23:50:49.662130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:19.193 [2024-12-13 23:50:49.662136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.193 [2024-12-13 23:50:49.662150] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:19.193 [2024-12-13 23:50:49.662167] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:19.193 [2024-12-13 23:50:49.662193] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:19.193 [2024-12-13 23:50:49.662207] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:19.193 [2024-12-13 23:50:49.662267] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:19.193 [2024-12-13 23:50:49.662276] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:19.193 [2024-12-13 23:50:49.662284] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:19.193 [2024-12-13 23:50:49.662292] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:19.193 [2024-12-13 23:50:49.662299] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:19.193 [2024-12-13 23:50:49.662305] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:19.193 [2024-12-13 23:50:49.662310] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:19.193 [2024-12-13 23:50:49.662316] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:19.193 [2024-12-13 23:50:49.662324] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:19.193 [2024-12-13 23:50:49.662331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.193 [2024-12-13 23:50:49.662337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:19.193 [2024-12-13 23:50:49.662342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:17:19.193 [2024-12-13 23:50:49.662348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.193 [2024-12-13 23:50:49.662407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.193 [2024-12-13 23:50:49.662415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:19.193 [2024-12-13 23:50:49.662420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:19.193 [2024-12-13 23:50:49.662426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.193 [2024-12-13 23:50:49.662494] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:19.193 [2024-12-13 23:50:49.662503] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:19.193 [2024-12-13 23:50:49.662510] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:19.193 [2024-12-13 23:50:49.662516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.193 [2024-12-13 23:50:49.662523] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:19.193 [2024-12-13 23:50:49.662528] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:19.193 [2024-12-13 23:50:49.662534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:19.193 [2024-12-13 23:50:49.662539] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:19.193 [2024-12-13 23:50:49.662545] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:19.193 [2024-12-13 23:50:49.662550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:19.193 [2024-12-13 23:50:49.662556] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:19.193 [2024-12-13 23:50:49.662561] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:19.193 [2024-12-13 23:50:49.662566] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:19.193 [2024-12-13 23:50:49.662572] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:19.193 [2024-12-13 23:50:49.662582] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:19.193 [2024-12-13 23:50:49.662587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.193 [2024-12-13 23:50:49.662592] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:19.193 [2024-12-13 23:50:49.662597] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:19.193 [2024-12-13 23:50:49.662602] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:19.193 [2024-12-13 23:50:49.662608] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:19.193 [2024-12-13 23:50:49.662613] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:19.193 [2024-12-13 23:50:49.662618] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:19.193 [2024-12-13 23:50:49.662623] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:19.193 [2024-12-13 23:50:49.662629] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:19.193 [2024-12-13 23:50:49.662633] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:19.193 [2024-12-13 23:50:49.662639] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:19.193 [2024-12-13 23:50:49.662643] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:19.193 [2024-12-13 23:50:49.662649] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:19.193 [2024-12-13 23:50:49.662654] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:19.193 [2024-12-13 23:50:49.662659] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:19.193 [2024-12-13 23:50:49.662664] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:19.193 [2024-12-13 23:50:49.662669] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:19.193 [2024-12-13 23:50:49.662674] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:19.193 [2024-12-13 23:50:49.662678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:19.193 [2024-12-13 23:50:49.662683] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:19.193 [2024-12-13 23:50:49.662688] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:19.193 [2024-12-13 23:50:49.662693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:19.193 [2024-12-13 23:50:49.662698] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:19.193 [2024-12-13 23:50:49.662703] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:19.193 [2024-12-13 23:50:49.662708] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:19.193 [2024-12-13 23:50:49.662712] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:19.193 [2024-12-13 23:50:49.662718] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:19.193 [2024-12-13 23:50:49.662725] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:19.193 [2024-12-13 23:50:49.662733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.193 [2024-12-13 23:50:49.662739] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:19.193 [2024-12-13 23:50:49.662744] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:19.193 [2024-12-13 23:50:49.662750] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:19.193 [2024-12-13 23:50:49.662756] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:19.193 [2024-12-13 23:50:49.662760] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:19.193 [2024-12-13 23:50:49.662765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:19.193 [2024-12-13 23:50:49.662772] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:19.193 [2024-12-13 23:50:49.662779] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:19.193 [2024-12-13 23:50:49.662785] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:19.193 [2024-12-13 23:50:49.662791] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:19.193 [2024-12-13 23:50:49.662796] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:19.193 [2024-12-13 23:50:49.662802] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:19.193 [2024-12-13 23:50:49.662807] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:19.194 [2024-12-13 23:50:49.662813] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:19.194 [2024-12-13 23:50:49.662818] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:19.194 [2024-12-13 23:50:49.662823] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:19.194 [2024-12-13 23:50:49.662828] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:19.194 [2024-12-13 23:50:49.662834] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:19.194 [2024-12-13 23:50:49.662839] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:19.194 [2024-12-13 23:50:49.662845] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:19.194 [2024-12-13 23:50:49.662851] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:19.194 [2024-12-13 23:50:49.662856] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:19.194 [2024-12-13 23:50:49.662866] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:19.194 [2024-12-13 23:50:49.662873] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:19.194 [2024-12-13 23:50:49.662878] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:19.194 [2024-12-13 23:50:49.662884] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:19.194 [2024-12-13 23:50:49.662889] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:19.194 [2024-12-13 23:50:49.662895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.194 [2024-12-13 23:50:49.662900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:19.194 [2024-12-13 23:50:49.662906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:17:19.194 [2024-12-13 23:50:49.662914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.194 [2024-12-13 23:50:49.676927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.194 [2024-12-13 23:50:49.676959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:19.194 [2024-12-13 23:50:49.676969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.970 ms 00:17:19.194 [2024-12-13 23:50:49.676976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.194 [2024-12-13 23:50:49.677068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.194 [2024-12-13 23:50:49.677077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:19.194 [2024-12-13 23:50:49.677085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:19.194 [2024-12-13 23:50:49.677092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.194 [2024-12-13 23:50:49.719163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.194 [2024-12-13 23:50:49.719199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:19.194 [2024-12-13 23:50:49.719210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.053 ms 00:17:19.194 [2024-12-13 23:50:49.719218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.194 [2024-12-13 23:50:49.719276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.194 [2024-12-13 23:50:49.719285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:19.194 [2024-12-13 23:50:49.719296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:19.194 [2024-12-13 23:50:49.719303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.194 [2024-12-13 23:50:49.719734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.194 [2024-12-13 23:50:49.719755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:19.194 [2024-12-13 23:50:49.719763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:17:19.194 [2024-12-13 23:50:49.719769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.194 [2024-12-13 23:50:49.719872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.194 [2024-12-13 23:50:49.719886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:19.194 [2024-12-13 23:50:49.719893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:17:19.194 [2024-12-13 23:50:49.719899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.194 [2024-12-13 23:50:49.732959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.194 [2024-12-13 23:50:49.732986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:19.194 [2024-12-13 23:50:49.732994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.041 ms 00:17:19.194 
[2024-12-13 23:50:49.733002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.194 [2024-12-13 23:50:49.743765] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:19.194 [2024-12-13 23:50:49.743793] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:19.194 [2024-12-13 23:50:49.743802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.194 [2024-12-13 23:50:49.743809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:19.194 [2024-12-13 23:50:49.743816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.723 ms 00:17:19.194 [2024-12-13 23:50:49.743822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.194 [2024-12-13 23:50:49.762711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.194 [2024-12-13 23:50:49.762745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:19.194 [2024-12-13 23:50:49.762753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.833 ms 00:17:19.194 [2024-12-13 23:50:49.762759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.194 [2024-12-13 23:50:49.771694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.194 [2024-12-13 23:50:49.771722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:19.194 [2024-12-13 23:50:49.771736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.882 ms 00:17:19.194 [2024-12-13 23:50:49.771742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.194 [2024-12-13 23:50:49.780710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.194 [2024-12-13 23:50:49.780736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:19.194 [2024-12-13 23:50:49.780745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.927 ms 00:17:19.194 [2024-12-13 23:50:49.780751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.194 [2024-12-13 23:50:49.781029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.194 [2024-12-13 23:50:49.781045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:19.194 [2024-12-13 23:50:49.781052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:17:19.194 [2024-12-13 23:50:49.781060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.194 [2024-12-13 23:50:49.830505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.194 [2024-12-13 23:50:49.830541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:19.194 [2024-12-13 23:50:49.830551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.427 ms 00:17:19.194 [2024-12-13 23:50:49.830562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.194 [2024-12-13 23:50:49.838904] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:19.194 [2024-12-13 23:50:49.853998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.194 [2024-12-13 23:50:49.854029] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:19.194 [2024-12-13 23:50:49.854041] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.366 ms 00:17:19.194 [2024-12-13 23:50:49.854047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.194 [2024-12-13 23:50:49.854108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.194 [2024-12-13 23:50:49.854117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:19.194 [2024-12-13 23:50:49.854127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:19.194 [2024-12-13 23:50:49.854133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.194 [2024-12-13 23:50:49.854176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.194 [2024-12-13 23:50:49.854182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:19.194 [2024-12-13 23:50:49.854188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:19.194 [2024-12-13 23:50:49.854195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.194 [2024-12-13 23:50:49.855233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.194 [2024-12-13 23:50:49.855262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:19.194 [2024-12-13 23:50:49.855270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:17:19.194 [2024-12-13 23:50:49.855276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.194 [2024-12-13 23:50:49.855304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.194 [2024-12-13 23:50:49.855313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:19.194 [2024-12-13 23:50:49.855319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:19.194 [2024-12-13 23:50:49.855325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.194 [2024-12-13 23:50:49.855354] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:19.194 [2024-12-13 23:50:49.855362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.194 [2024-12-13 23:50:49.855368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:19.194 [2024-12-13 23:50:49.855374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:19.194 [2024-12-13 23:50:49.855379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.194 [2024-12-13 23:50:49.874224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.194 [2024-12-13 23:50:49.874253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:19.194 [2024-12-13 23:50:49.874263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.825 ms 00:17:19.194 [2024-12-13 23:50:49.874269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.194 [2024-12-13 23:50:49.874340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.194 [2024-12-13 23:50:49.874349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:19.194 [2024-12-13 23:50:49.874356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:19.194 [2024-12-13 23:50:49.874362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.194 [2024-12-13 23:50:49.875191] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:19.195 [2024-12-13 23:50:49.877675] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 237.550 ms, result 0 00:17:19.195 [2024-12-13 23:50:49.878524] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:19.195 [2024-12-13 23:50:49.889759] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:19.768  [2024-12-13T23:50:50.500Z] Copying: 4096/4096 [kB] (average 11 MBps)[2024-12-13 23:50:50.249242] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:19.768 [2024-12-13 23:50:50.255601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.768 [2024-12-13 23:50:50.255642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:19.768 [2024-12-13 23:50:50.255651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:19.768 [2024-12-13 23:50:50.255658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.768 [2024-12-13 23:50:50.255674] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:19.768 [2024-12-13 23:50:50.257792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.768 [2024-12-13 23:50:50.257816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:19.768 [2024-12-13 23:50:50.257824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.109 ms 00:17:19.768 [2024-12-13 23:50:50.257831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.768 [2024-12-13 23:50:50.259717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.768 [2024-12-13 23:50:50.259743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:19.768 [2024-12-13 23:50:50.259750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.868 ms 00:17:19.768 [2024-12-13 23:50:50.259760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.768 [2024-12-13 23:50:50.262813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.768 [2024-12-13 23:50:50.262835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:19.768 [2024-12-13 23:50:50.262842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.040 ms 00:17:19.768 [2024-12-13 23:50:50.262848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.768 [2024-12-13 23:50:50.267990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.768 [2024-12-13 23:50:50.268015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:19.768 [2024-12-13 23:50:50.268022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.122 ms 00:17:19.768 [2024-12-13 23:50:50.268032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.768 [2024-12-13 23:50:50.285761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.768 [2024-12-13 23:50:50.285788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:19.768 [2024-12-13 23:50:50.285796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.686 ms 00:17:19.768 [2024-12-13 
23:50:50.285802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.768 [2024-12-13 23:50:50.297817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.768 [2024-12-13 23:50:50.297845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:19.768 [2024-12-13 23:50:50.297853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.979 ms 00:17:19.768 [2024-12-13 23:50:50.297860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.768 [2024-12-13 23:50:50.297962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.768 [2024-12-13 23:50:50.297970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:19.768 [2024-12-13 23:50:50.297976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:19.768 [2024-12-13 23:50:50.297982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.768 [2024-12-13 23:50:50.316139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.768 [2024-12-13 23:50:50.316172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:19.768 [2024-12-13 23:50:50.316180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.145 ms 00:17:19.768 [2024-12-13 23:50:50.316185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.768 [2024-12-13 23:50:50.334086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.768 [2024-12-13 23:50:50.334114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:19.768 [2024-12-13 23:50:50.334123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.856 ms 00:17:19.768 [2024-12-13 23:50:50.334128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.768 [2024-12-13 23:50:50.351345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.768 [2024-12-13 23:50:50.351372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:19.768 [2024-12-13 23:50:50.351380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.182 ms 00:17:19.769 [2024-12-13 23:50:50.351385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.769 [2024-12-13 23:50:50.368946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.769 [2024-12-13 23:50:50.368973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:19.769 [2024-12-13 23:50:50.368981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.507 ms 00:17:19.769 [2024-12-13 23:50:50.368986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.769 [2024-12-13 23:50:50.369022] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:19.769 [2024-12-13 23:50:50.369034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369059] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 
23:50:50.369202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:17:19.769 [2024-12-13 23:50:50.369344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:19.769 [2024-12-13 23:50:50.369544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:19.770 [2024-12-13 23:50:50.369549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:19.770 [2024-12-13 23:50:50.369554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:19.770 [2024-12-13 23:50:50.369560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:19.770 [2024-12-13 23:50:50.369565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:19.770 [2024-12-13 23:50:50.369571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:19.770 [2024-12-13 23:50:50.369576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:19.770 [2024-12-13 23:50:50.369583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:19.770 [2024-12-13 23:50:50.369589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:19.770 [2024-12-13 23:50:50.369595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:19.770 [2024-12-13 23:50:50.369606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:19.770 [2024-12-13 23:50:50.369612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:19.770 [2024-12-13 23:50:50.369617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:19.770 [2024-12-13 23:50:50.369623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:19.770 [2024-12-13 23:50:50.369635] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:19.770 [2024-12-13 23:50:50.369641] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d3f1ad20-b815-42d4-97fb-b3e13195de9e 00:17:19.770 [2024-12-13 23:50:50.369648] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:19.770 [2024-12-13 23:50:50.369654] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:19.770 [2024-12-13 
23:50:50.369660] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:19.770 [2024-12-13 23:50:50.369666] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:19.770 [2024-12-13 23:50:50.369674] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:19.770 [2024-12-13 23:50:50.369680] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:19.770 [2024-12-13 23:50:50.369685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:19.770 [2024-12-13 23:50:50.369690] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:19.770 [2024-12-13 23:50:50.369695] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:19.770 [2024-12-13 23:50:50.369700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.770 [2024-12-13 23:50:50.369706] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:19.770 [2024-12-13 23:50:50.369713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:17:19.770 [2024-12-13 23:50:50.369719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.770 [2024-12-13 23:50:50.379220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.770 [2024-12-13 23:50:50.379245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:19.770 [2024-12-13 23:50:50.379257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.488 ms 00:17:19.770 [2024-12-13 23:50:50.379262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.770 [2024-12-13 23:50:50.379430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.770 [2024-12-13 23:50:50.379444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:19.770 [2024-12-13 23:50:50.379450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:17:19.770 [2024-12-13 23:50:50.379456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.770 [2024-12-13 23:50:50.410917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.770 [2024-12-13 23:50:50.410945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:19.770 [2024-12-13 23:50:50.410956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.770 [2024-12-13 23:50:50.410962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.770 [2024-12-13 23:50:50.411031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.770 [2024-12-13 23:50:50.411038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:19.770 [2024-12-13 23:50:50.411043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.770 [2024-12-13 23:50:50.411049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.770 [2024-12-13 23:50:50.411084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.770 [2024-12-13 23:50:50.411092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:19.770 [2024-12-13 23:50:50.411099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.770 [2024-12-13 23:50:50.411108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.770 [2024-12-13 23:50:50.411122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:17:19.770 [2024-12-13 23:50:50.411127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:19.770 [2024-12-13 23:50:50.411133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.770 [2024-12-13 23:50:50.411138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.770 [2024-12-13 23:50:50.470555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.770 [2024-12-13 23:50:50.470590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:19.770 [2024-12-13 23:50:50.470602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.770 [2024-12-13 23:50:50.470608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.770 [2024-12-13 23:50:50.494515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.770 [2024-12-13 23:50:50.494546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:19.770 [2024-12-13 23:50:50.494554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.770 [2024-12-13 23:50:50.494561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.770 [2024-12-13 23:50:50.494609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.770 [2024-12-13 23:50:50.494617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:19.770 [2024-12-13 23:50:50.494623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.770 [2024-12-13 23:50:50.494629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.770 [2024-12-13 23:50:50.494658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.770 [2024-12-13 23:50:50.494665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:19.770 [2024-12-13 23:50:50.494671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.770 [2024-12-13 23:50:50.494677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.770 [2024-12-13 23:50:50.494753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.770 [2024-12-13 23:50:50.494762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:19.770 [2024-12-13 23:50:50.494769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.770 [2024-12-13 23:50:50.494776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.770 [2024-12-13 23:50:50.494802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.770 [2024-12-13 23:50:50.494809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:19.770 [2024-12-13 23:50:50.494816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.770 [2024-12-13 23:50:50.494822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.770 [2024-12-13 23:50:50.494856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.770 [2024-12-13 23:50:50.494863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:19.770 [2024-12-13 23:50:50.494869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.770 [2024-12-13 23:50:50.494875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.770 
[2024-12-13 23:50:50.494917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.770 [2024-12-13 23:50:50.494927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:19.770 [2024-12-13 23:50:50.494933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.770 [2024-12-13 23:50:50.494939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.770 [2024-12-13 23:50:50.495065] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 239.443 ms, result 0 00:17:20.714 00:17:20.714 00:17:20.714 23:50:51 -- ftl/trim.sh@93 -- # svcpid=72775 00:17:20.714 23:50:51 -- ftl/trim.sh@94 -- # waitforlisten 72775 00:17:20.714 23:50:51 -- common/autotest_common.sh@829 -- # '[' -z 72775 ']' 00:17:20.714 23:50:51 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:20.714 23:50:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.714 23:50:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.714 23:50:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.714 23:50:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.714 23:50:51 -- common/autotest_common.sh@10 -- # set +x 00:17:20.714 [2024-12-13 23:50:51.273019] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:20.714 [2024-12-13 23:50:51.273134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72775 ] 00:17:20.714 [2024-12-13 23:50:51.422959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.975 [2024-12-13 23:50:51.608519] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:20.975 [2024-12-13 23:50:51.608695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.362 23:50:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.362 23:50:52 -- common/autotest_common.sh@862 -- # return 0 00:17:22.362 23:50:52 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:22.362 [2024-12-13 23:50:52.956345] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:22.362 [2024-12-13 23:50:52.956396] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:22.625 [2024-12-13 23:50:53.121606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.625 [2024-12-13 23:50:53.121641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:22.625 [2024-12-13 23:50:53.121654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:22.625 [2024-12-13 23:50:53.121661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.625 [2024-12-13 23:50:53.123822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.625 [2024-12-13 23:50:53.123858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:22.625 [2024-12-13 23:50:53.123867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
2.145 ms 00:17:22.625 [2024-12-13 23:50:53.123873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.625 [2024-12-13 23:50:53.123936] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:22.625 [2024-12-13 23:50:53.124503] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:22.625 [2024-12-13 23:50:53.124527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.625 [2024-12-13 23:50:53.124534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:22.625 [2024-12-13 23:50:53.124543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:17:22.625 [2024-12-13 23:50:53.124548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.625 [2024-12-13 23:50:53.125853] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:22.625 [2024-12-13 23:50:53.136520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.625 [2024-12-13 23:50:53.136551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:22.625 [2024-12-13 23:50:53.136561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.672 ms 00:17:22.625 [2024-12-13 23:50:53.136568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.625 [2024-12-13 23:50:53.136637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.625 [2024-12-13 23:50:53.136648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:22.625 [2024-12-13 23:50:53.136655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:22.625 [2024-12-13 23:50:53.136662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.625 [2024-12-13 23:50:53.142861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.625 [2024-12-13 23:50:53.142893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:22.625 [2024-12-13 23:50:53.142900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.157 ms 00:17:22.625 [2024-12-13 23:50:53.142909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.625 [2024-12-13 23:50:53.142975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.625 [2024-12-13 23:50:53.142984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:22.625 [2024-12-13 23:50:53.142990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:22.625 [2024-12-13 23:50:53.142997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.625 [2024-12-13 23:50:53.143018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.625 [2024-12-13 23:50:53.143027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:22.625 [2024-12-13 23:50:53.143034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:22.625 [2024-12-13 23:50:53.143042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.625 [2024-12-13 23:50:53.143066] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:22.625 [2024-12-13 23:50:53.146261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.625 [2024-12-13 23:50:53.146285] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:22.625 [2024-12-13 23:50:53.146293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.202 ms 00:17:22.625 [2024-12-13 23:50:53.146299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.625 [2024-12-13 23:50:53.146334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.625 [2024-12-13 23:50:53.146341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:22.625 [2024-12-13 23:50:53.146349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:22.625 [2024-12-13 23:50:53.146356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.625 [2024-12-13 23:50:53.146374] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:22.625 [2024-12-13 23:50:53.146391] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:22.625 [2024-12-13 23:50:53.146420] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:22.625 [2024-12-13 23:50:53.146434] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:22.625 [2024-12-13 23:50:53.146509] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:22.625 [2024-12-13 23:50:53.146518] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:22.625 [2024-12-13 23:50:53.146531] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:22.625 [2024-12-13 23:50:53.146540] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:22.625 [2024-12-13 23:50:53.146548] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:22.625 [2024-12-13 23:50:53.146555] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:22.625 [2024-12-13 23:50:53.146562] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:22.625 [2024-12-13 23:50:53.146568] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:22.625 [2024-12-13 23:50:53.146576] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:22.625 [2024-12-13 23:50:53.146582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.625 [2024-12-13 23:50:53.146589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:22.625 [2024-12-13 23:50:53.146595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:17:22.625 [2024-12-13 23:50:53.146602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.625 [2024-12-13 23:50:53.146662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.625 [2024-12-13 23:50:53.146671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:22.625 [2024-12-13 23:50:53.146677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:22.625 [2024-12-13 23:50:53.146683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.625 [2024-12-13 23:50:53.146745] ftl_layout.c: 759:ftl_layout_dump: 
*NOTICE*: [FTL][ftl0] NV cache layout: 00:17:22.625 [2024-12-13 23:50:53.146755] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:22.625 [2024-12-13 23:50:53.146762] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:22.625 [2024-12-13 23:50:53.146771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.625 [2024-12-13 23:50:53.146778] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:22.626 [2024-12-13 23:50:53.146786] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:22.626 [2024-12-13 23:50:53.146792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:22.626 [2024-12-13 23:50:53.146801] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:22.626 [2024-12-13 23:50:53.146807] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:22.626 [2024-12-13 23:50:53.146814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:22.626 [2024-12-13 23:50:53.146822] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:22.626 [2024-12-13 23:50:53.146829] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:22.626 [2024-12-13 23:50:53.146835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:22.626 [2024-12-13 23:50:53.146842] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:22.626 [2024-12-13 23:50:53.146847] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:22.626 [2024-12-13 23:50:53.146854] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.626 [2024-12-13 23:50:53.146859] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:22.626 [2024-12-13 23:50:53.146866] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:22.626 [2024-12-13 23:50:53.146871] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.626 [2024-12-13 23:50:53.146878] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:22.626 [2024-12-13 23:50:53.146884] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:22.626 [2024-12-13 23:50:53.146890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:22.626 [2024-12-13 23:50:53.146896] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:22.626 [2024-12-13 23:50:53.146904] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:22.626 [2024-12-13 23:50:53.146909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:22.626 [2024-12-13 23:50:53.146920] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:22.626 [2024-12-13 23:50:53.146925] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:22.626 [2024-12-13 23:50:53.146932] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:22.626 [2024-12-13 23:50:53.146936] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:22.626 [2024-12-13 23:50:53.146942] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:22.626 [2024-12-13 23:50:53.146947] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:22.626 [2024-12-13 23:50:53.146954] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:22.626 [2024-12-13 23:50:53.146959] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:22.626 [2024-12-13 23:50:53.146966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:22.626 [2024-12-13 23:50:53.146971] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:22.626 [2024-12-13 23:50:53.146977] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:22.626 [2024-12-13 23:50:53.146982] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:22.626 [2024-12-13 23:50:53.146988] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:22.626 [2024-12-13 23:50:53.146993] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:22.626 [2024-12-13 23:50:53.147000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:22.626 [2024-12-13 23:50:53.147006] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:22.626 [2024-12-13 23:50:53.147014] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:22.626 [2024-12-13 23:50:53.147021] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:22.626 [2024-12-13 23:50:53.147028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.626 [2024-12-13 23:50:53.147034] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:22.626 [2024-12-13 23:50:53.147041] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:22.626 [2024-12-13 23:50:53.147047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:22.626 [2024-12-13 23:50:53.147054] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:22.626 [2024-12-13 23:50:53.147059] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:22.626 [2024-12-13 23:50:53.147066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:22.626 [2024-12-13 23:50:53.147071] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:22.626 [2024-12-13 23:50:53.147080] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:22.626 [2024-12-13 23:50:53.147087] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:22.626 [2024-12-13 23:50:53.147093] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:22.626 [2024-12-13 23:50:53.147101] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:22.626 [2024-12-13 23:50:53.147110] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:22.626 [2024-12-13 23:50:53.147115] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:22.626 [2024-12-13 23:50:53.147122] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:22.626 [2024-12-13 23:50:53.147127] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:22.626 [2024-12-13 
23:50:53.147134] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:22.626 [2024-12-13 23:50:53.147140] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:22.626 [2024-12-13 23:50:53.147146] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:22.626 [2024-12-13 23:50:53.147151] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:22.626 [2024-12-13 23:50:53.147158] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:22.626 [2024-12-13 23:50:53.147164] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:22.626 [2024-12-13 23:50:53.147170] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:22.626 [2024-12-13 23:50:53.147176] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:22.626 [2024-12-13 23:50:53.147184] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:22.626 [2024-12-13 23:50:53.147189] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:22.626 [2024-12-13 23:50:53.147196] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:22.626 [2024-12-13 23:50:53.147201] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:22.626 [2024-12-13 23:50:53.147210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.626 [2024-12-13 23:50:53.147215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:22.626 [2024-12-13 23:50:53.147223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.496 ms 00:17:22.626 [2024-12-13 23:50:53.147229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.626 [2024-12-13 23:50:53.161202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.626 [2024-12-13 23:50:53.161231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:22.626 [2024-12-13 23:50:53.161243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.923 ms 00:17:22.626 [2024-12-13 23:50:53.161252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.626 [2024-12-13 23:50:53.161346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.626 [2024-12-13 23:50:53.161354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:22.626 [2024-12-13 23:50:53.161363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:22.626 [2024-12-13 23:50:53.161370] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.626 [2024-12-13 23:50:53.188179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.626 [2024-12-13 
23:50:53.188205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:22.626 [2024-12-13 23:50:53.188215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.790 ms 00:17:22.626 [2024-12-13 23:50:53.188222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.626 [2024-12-13 23:50:53.188269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.626 [2024-12-13 23:50:53.188278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:22.626 [2024-12-13 23:50:53.188287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:22.626 [2024-12-13 23:50:53.188294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.626 [2024-12-13 23:50:53.188698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.626 [2024-12-13 23:50:53.188740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:22.626 [2024-12-13 23:50:53.188751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:17:22.626 [2024-12-13 23:50:53.188758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.626 [2024-12-13 23:50:53.188860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.626 [2024-12-13 23:50:53.188872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:22.626 [2024-12-13 23:50:53.188883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:17:22.626 [2024-12-13 23:50:53.188889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.626 [2024-12-13 23:50:53.202714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.626 [2024-12-13 23:50:53.202739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:22.626 [2024-12-13 23:50:53.202750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.807 ms 00:17:22.626 [2024-12-13 23:50:53.202756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.626 [2024-12-13 23:50:53.213673] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:22.626 [2024-12-13 23:50:53.213699] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:22.626 [2024-12-13 23:50:53.213710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.626 [2024-12-13 23:50:53.213717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:22.626 [2024-12-13 23:50:53.213725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.871 ms 00:17:22.626 [2024-12-13 23:50:53.213730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.627 [2024-12-13 23:50:53.232626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.627 [2024-12-13 23:50:53.232663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:22.627 [2024-12-13 23:50:53.232674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.837 ms 00:17:22.627 [2024-12-13 23:50:53.232680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.627 [2024-12-13 23:50:53.242008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.627 [2024-12-13 23:50:53.242037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore band info metadata 00:17:22.627 [2024-12-13 23:50:53.242047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.271 ms 00:17:22.627 [2024-12-13 23:50:53.242053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.627 [2024-12-13 23:50:53.251222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.627 [2024-12-13 23:50:53.251247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:22.627 [2024-12-13 23:50:53.251258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.125 ms 00:17:22.627 [2024-12-13 23:50:53.251264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.627 [2024-12-13 23:50:53.251552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.627 [2024-12-13 23:50:53.251563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:22.627 [2024-12-13 23:50:53.251573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:17:22.627 [2024-12-13 23:50:53.251579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.627 [2024-12-13 23:50:53.301061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.627 [2024-12-13 23:50:53.301093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:22.627 [2024-12-13 23:50:53.301107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.461 ms 00:17:22.627 [2024-12-13 23:50:53.301113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.627 [2024-12-13 23:50:53.309450] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:22.627 [2024-12-13 23:50:53.323964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.627 [2024-12-13 23:50:53.323996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:22.627 [2024-12-13 23:50:53.324005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.786 ms 00:17:22.627 [2024-12-13 23:50:53.324014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.627 [2024-12-13 23:50:53.324073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.627 [2024-12-13 23:50:53.324084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:22.627 [2024-12-13 23:50:53.324091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:22.627 [2024-12-13 23:50:53.324100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.627 [2024-12-13 23:50:53.324144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.627 [2024-12-13 23:50:53.324154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:22.627 [2024-12-13 23:50:53.324160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:22.627 [2024-12-13 23:50:53.324168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.627 [2024-12-13 23:50:53.325201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.627 [2024-12-13 23:50:53.325228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:22.627 [2024-12-13 23:50:53.325236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.015 ms 00:17:22.627 [2024-12-13 23:50:53.325243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:22.627 [2024-12-13 23:50:53.325270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.627 [2024-12-13 23:50:53.325280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:22.627 [2024-12-13 23:50:53.325286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:22.627 [2024-12-13 23:50:53.325294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.627 [2024-12-13 23:50:53.325323] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:22.627 [2024-12-13 23:50:53.325334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.627 [2024-12-13 23:50:53.325340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:22.627 [2024-12-13 23:50:53.325348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:22.627 [2024-12-13 23:50:53.325354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.627 [2024-12-13 23:50:53.344468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.627 [2024-12-13 23:50:53.344501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:22.627 [2024-12-13 23:50:53.344512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.093 ms 00:17:22.627 [2024-12-13 23:50:53.344518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.627 [2024-12-13 23:50:53.344593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.627 [2024-12-13 23:50:53.344601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:22.627 [2024-12-13 23:50:53.344610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:22.627 [2024-12-13 23:50:53.344618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.627 [2024-12-13 23:50:53.345446] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:22.627 [2024-12-13 23:50:53.347950] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 223.594 ms, result 0 00:17:22.627 [2024-12-13 23:50:53.349907] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:22.889 Some configs were skipped because the RPC state that can call them passed over. 
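The layout dump in the startup trace above reports 23592960 L2P entries at 4 bytes per address, which works out to exactly the 90.00 MiB shown for the l2p region, and the second unmap later in this test targets the last 1024 blocks of that same space. A minimal cross-check of that arithmetic, in plain Python with the constants copied from this log:

# Constants taken verbatim from the ftl_layout dump above; nothing is
# queried from a live target.
L2P_ENTRIES = 23592960      # "L2P entries: 23592960"
L2P_ADDR_SIZE = 4           # "L2P address size: 4" (bytes per entry)
MIB = 1024 * 1024

print(L2P_ENTRIES * L2P_ADDR_SIZE / MIB)   # 90.0 -> matches "Region l2p ... blocks: 90.00 MiB"
print(L2P_ENTRIES - 1024)                  # 23591936 -> the --lba of the second unmap below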
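Both band-validity dumps in this log (the FTL shutdown above and the one printed when the target is torn down after the trim test below) emit one ftl_dev_dump_bands notice per band of the form "Band N: valid / 261120 wr_cnt: W state: S". A small sketch, assuming the console output has been captured to a file, that folds such a dump into per-state counts:

# Sketch only: summarize an ftl_dev_dump_bands dump captured from this log.
# The file name passed to band_states() is a placeholder for wherever the
# output was saved.
import re
from collections import Counter

BAND_RE = re.compile(r"Band (\d+): (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")

def band_states(path):
    states = Counter()
    with open(path) as f:
        for line in f:
            m = BAND_RE.search(line)
            if m:
                states[m.group(5)] += 1
    return states

# For the dumps in this log, band_states(...) would return Counter({'free': 100}).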
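The two rpc.py invocations that follow unmap 1024 blocks at the head and at the tail of the FTL bdev, exactly as written on the ftl/trim.sh command lines below. A hedged sketch of the same calls wrapped in a helper; the ftl_unmap function and the hard-coded rpc.py path are illustrative, not part of the test itself:

# Sketch: drive bdev_ftl_unmap through SPDK's rpc.py, mirroring the two
# calls issued by ftl/trim.sh below. Assumes a running spdk_tgt that has
# already loaded the ftl0 bdev.
import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # path as used in this log

def ftl_unmap(bdev, lba, num_blocks):
    # Waits for rpc.py to exit; check=True raises if it returns non-zero.
    subprocess.run(
        [RPC, "bdev_ftl_unmap", "-b", bdev,
         "--lba", str(lba), "--num_blocks", str(num_blocks)],
        check=True,
    )

ftl_unmap("ftl0", 0, 1024)                 # --lba 0
ftl_unmap("ftl0", 23592960 - 1024, 1024)   # --lba 23591936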
00:17:22.889 23:50:53 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:22.889 [2024-12-13 23:50:53.579417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.889 [2024-12-13 23:50:53.579452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:22.889 [2024-12-13 23:50:53.579461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.845 ms 00:17:22.889 [2024-12-13 23:50:53.579469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.889 [2024-12-13 23:50:53.579510] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.937 ms, result 0 00:17:22.889 true 00:17:22.889 23:50:53 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:23.150 [2024-12-13 23:50:53.786715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.150 [2024-12-13 23:50:53.786745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:23.150 [2024-12-13 23:50:53.786755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.004 ms 00:17:23.150 [2024-12-13 23:50:53.786761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.150 [2024-12-13 23:50:53.786790] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 19.081 ms, result 0 00:17:23.150 true 00:17:23.150 23:50:53 -- ftl/trim.sh@102 -- # killprocess 72775 00:17:23.150 23:50:53 -- common/autotest_common.sh@936 -- # '[' -z 72775 ']' 00:17:23.150 23:50:53 -- common/autotest_common.sh@940 -- # kill -0 72775 00:17:23.150 23:50:53 -- common/autotest_common.sh@941 -- # uname 00:17:23.150 23:50:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:23.150 23:50:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72775 00:17:23.150 23:50:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:23.150 killing process with pid 72775 00:17:23.150 23:50:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:23.150 23:50:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72775' 00:17:23.150 23:50:53 -- common/autotest_common.sh@955 -- # kill 72775 00:17:23.150 23:50:53 -- common/autotest_common.sh@960 -- # wait 72775 00:17:23.721 [2024-12-13 23:50:54.394806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.721 [2024-12-13 23:50:54.394857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:23.721 [2024-12-13 23:50:54.394869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:23.721 [2024-12-13 23:50:54.394878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.721 [2024-12-13 23:50:54.394897] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:23.721 [2024-12-13 23:50:54.397059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.721 [2024-12-13 23:50:54.397087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:23.721 [2024-12-13 23:50:54.397098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.147 ms 00:17:23.721 [2024-12-13 23:50:54.397104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.721 [2024-12-13 
23:50:54.397329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.721 [2024-12-13 23:50:54.397338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:23.721 [2024-12-13 23:50:54.397347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:17:23.721 [2024-12-13 23:50:54.397354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.721 [2024-12-13 23:50:54.400866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.721 [2024-12-13 23:50:54.400894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:23.721 [2024-12-13 23:50:54.400903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.494 ms 00:17:23.721 [2024-12-13 23:50:54.400909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.721 [2024-12-13 23:50:54.406287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.721 [2024-12-13 23:50:54.406321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:23.721 [2024-12-13 23:50:54.406331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.345 ms 00:17:23.721 [2024-12-13 23:50:54.406339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.721 [2024-12-13 23:50:54.414665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.721 [2024-12-13 23:50:54.414691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:23.721 [2024-12-13 23:50:54.414702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.278 ms 00:17:23.721 [2024-12-13 23:50:54.414709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.721 [2024-12-13 23:50:54.421922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.721 [2024-12-13 23:50:54.421950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:23.721 [2024-12-13 23:50:54.421960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.182 ms 00:17:23.721 [2024-12-13 23:50:54.421966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.721 [2024-12-13 23:50:54.422073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.721 [2024-12-13 23:50:54.422080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:23.721 [2024-12-13 23:50:54.422089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:23.721 [2024-12-13 23:50:54.422095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.721 [2024-12-13 23:50:54.430801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.721 [2024-12-13 23:50:54.430825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:23.721 [2024-12-13 23:50:54.430834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.689 ms 00:17:23.721 [2024-12-13 23:50:54.430840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.721 [2024-12-13 23:50:54.439027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.721 [2024-12-13 23:50:54.439051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:23.721 [2024-12-13 23:50:54.439063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.156 ms 00:17:23.721 [2024-12-13 23:50:54.439069] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:23.721 [2024-12-13 23:50:54.446732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.721 [2024-12-13 23:50:54.446756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:23.721 [2024-12-13 23:50:54.446765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.632 ms 00:17:23.721 [2024-12-13 23:50:54.446770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.983 [2024-12-13 23:50:54.454548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.983 [2024-12-13 23:50:54.454573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:23.983 [2024-12-13 23:50:54.454581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.717 ms 00:17:23.983 [2024-12-13 23:50:54.454586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.983 [2024-12-13 23:50:54.454614] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:23.983 [2024-12-13 23:50:54.454626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:23.983 [2024-12-13 23:50:54.454637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:23.983 [2024-12-13 23:50:54.454643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:23.983 [2024-12-13 23:50:54.454650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:23.983 [2024-12-13 23:50:54.454656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:23.983 [2024-12-13 23:50:54.454665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:23.983 [2024-12-13 23:50:54.454671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:23.983 [2024-12-13 23:50:54.454678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:23.983 [2024-12-13 23:50:54.454684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:23.983 [2024-12-13 23:50:54.454691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:23.983 [2024-12-13 23:50:54.454696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:23.983 [2024-12-13 23:50:54.454703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:23.983 [2024-12-13 23:50:54.454709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:23.983 [2024-12-13 23:50:54.454718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:23.983 [2024-12-13 23:50:54.454724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:23.983 [2024-12-13 23:50:54.454731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:23.983 [2024-12-13 23:50:54.454737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:23.983 [2024-12-13 23:50:54.454744] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454907] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.454994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 
23:50:54.455078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 
00:17:23.984 [2024-12-13 23:50:54.455242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:23.984 [2024-12-13 23:50:54.455309] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:23.984 [2024-12-13 23:50:54.455319] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d3f1ad20-b815-42d4-97fb-b3e13195de9e 00:17:23.984 [2024-12-13 23:50:54.455326] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:23.984 [2024-12-13 23:50:54.455333] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:23.984 [2024-12-13 23:50:54.455339] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:23.984 [2024-12-13 23:50:54.455347] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:23.984 [2024-12-13 23:50:54.455352] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:23.984 [2024-12-13 23:50:54.455360] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:23.984 [2024-12-13 23:50:54.455365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:23.984 [2024-12-13 23:50:54.455372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:23.984 [2024-12-13 23:50:54.455377] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:23.985 [2024-12-13 23:50:54.455387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.985 [2024-12-13 23:50:54.455393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:23.985 [2024-12-13 23:50:54.455401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:17:23.985 [2024-12-13 23:50:54.455407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.985 [2024-12-13 23:50:54.465690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.985 [2024-12-13 23:50:54.465716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:23.985 [2024-12-13 23:50:54.465728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.266 ms 00:17:23.985 [2024-12-13 23:50:54.465734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.985 [2024-12-13 23:50:54.465907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.985 [2024-12-13 23:50:54.465915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:23.985 
[2024-12-13 23:50:54.465924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:17:23.985 [2024-12-13 23:50:54.465929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.985 [2024-12-13 23:50:54.503076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.985 [2024-12-13 23:50:54.503103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:23.985 [2024-12-13 23:50:54.503113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.985 [2024-12-13 23:50:54.503120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.985 [2024-12-13 23:50:54.503191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.985 [2024-12-13 23:50:54.503198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:23.985 [2024-12-13 23:50:54.503207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.985 [2024-12-13 23:50:54.503213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.985 [2024-12-13 23:50:54.503250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.985 [2024-12-13 23:50:54.503258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:23.985 [2024-12-13 23:50:54.503269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.985 [2024-12-13 23:50:54.503276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.985 [2024-12-13 23:50:54.503292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.985 [2024-12-13 23:50:54.503298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:23.985 [2024-12-13 23:50:54.503307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.985 [2024-12-13 23:50:54.503314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.985 [2024-12-13 23:50:54.567320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.985 [2024-12-13 23:50:54.567356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:23.985 [2024-12-13 23:50:54.567367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.985 [2024-12-13 23:50:54.567375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.985 [2024-12-13 23:50:54.591337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.985 [2024-12-13 23:50:54.591366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:23.985 [2024-12-13 23:50:54.591379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.985 [2024-12-13 23:50:54.591385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.985 [2024-12-13 23:50:54.591433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.985 [2024-12-13 23:50:54.591440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:23.985 [2024-12-13 23:50:54.591450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.985 [2024-12-13 23:50:54.591456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.985 [2024-12-13 23:50:54.591498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.985 [2024-12-13 23:50:54.591505] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:23.985 [2024-12-13 23:50:54.591513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.985 [2024-12-13 23:50:54.591519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.985 [2024-12-13 23:50:54.591601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.985 [2024-12-13 23:50:54.591611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:23.985 [2024-12-13 23:50:54.591619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.985 [2024-12-13 23:50:54.591625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.985 [2024-12-13 23:50:54.591661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.985 [2024-12-13 23:50:54.591668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:23.985 [2024-12-13 23:50:54.591675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.985 [2024-12-13 23:50:54.591681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.985 [2024-12-13 23:50:54.591718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.985 [2024-12-13 23:50:54.591726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:23.985 [2024-12-13 23:50:54.591735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.985 [2024-12-13 23:50:54.591740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.985 [2024-12-13 23:50:54.591782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.985 [2024-12-13 23:50:54.591791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:23.985 [2024-12-13 23:50:54.591798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.985 [2024-12-13 23:50:54.591804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.985 [2024-12-13 23:50:54.591924] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 197.097 ms, result 0 00:17:24.558 23:50:55 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:24.818 [2024-12-13 23:50:55.343230] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
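The spdk_dd invocation above (from test/ftl/trim.sh, line 105 of the script per the "@105" marker) reads the test data back out of the FTL bdev into a flat file; the startup log that follows is that process coming up. A minimal sketch of the equivalent standalone command, using only the flags visible in this run — interpreting --count as a block count and assuming the FTL device's 4 KiB block size, 65536 x 4096 B = 256 MiB, which matches the "Copying: 256/256 [MB]" progress further down:

  # Sketch only; paths are the ones from this run's workspace.
  # --ib=ftl0  : input bdev to read from (the FTL device under test)
  # --of=...   : plain file to write the copied data to
  # --count=N  : number of blocks to copy (65536 here)
  # --json=... : bdev configuration that defines ftl0
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
      --count=65536 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json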
00:17:24.818 [2024-12-13 23:50:55.343351] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72835 ] 00:17:24.818 [2024-12-13 23:50:55.493684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.079 [2024-12-13 23:50:55.682601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.339 [2024-12-13 23:50:55.909147] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:25.339 [2024-12-13 23:50:55.909202] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:25.339 [2024-12-13 23:50:56.059712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.339 [2024-12-13 23:50:56.059751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:25.339 [2024-12-13 23:50:56.059763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:25.339 [2024-12-13 23:50:56.059769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.339 [2024-12-13 23:50:56.061935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.339 [2024-12-13 23:50:56.061966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:25.339 [2024-12-13 23:50:56.061974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.155 ms 00:17:25.339 [2024-12-13 23:50:56.061980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.339 [2024-12-13 23:50:56.062039] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:25.339 [2024-12-13 23:50:56.062623] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:25.339 [2024-12-13 23:50:56.062643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.339 [2024-12-13 23:50:56.062650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:25.339 [2024-12-13 23:50:56.062657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.611 ms 00:17:25.339 [2024-12-13 23:50:56.062663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.339 [2024-12-13 23:50:56.063982] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:25.602 [2024-12-13 23:50:56.074311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.602 [2024-12-13 23:50:56.074338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:25.602 [2024-12-13 23:50:56.074347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.330 ms 00:17:25.602 [2024-12-13 23:50:56.074353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.602 [2024-12-13 23:50:56.074426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.602 [2024-12-13 23:50:56.074435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:25.602 [2024-12-13 23:50:56.074441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:25.602 [2024-12-13 23:50:56.074447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.602 [2024-12-13 23:50:56.080693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.602 [2024-12-13 
23:50:56.080716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:25.602 [2024-12-13 23:50:56.080723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.213 ms 00:17:25.602 [2024-12-13 23:50:56.080733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.602 [2024-12-13 23:50:56.080813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.602 [2024-12-13 23:50:56.080820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:25.602 [2024-12-13 23:50:56.080827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:25.602 [2024-12-13 23:50:56.080833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.602 [2024-12-13 23:50:56.080851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.602 [2024-12-13 23:50:56.080858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:25.602 [2024-12-13 23:50:56.080864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:25.602 [2024-12-13 23:50:56.080869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.602 [2024-12-13 23:50:56.080895] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:25.602 [2024-12-13 23:50:56.084047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.602 [2024-12-13 23:50:56.084069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:25.602 [2024-12-13 23:50:56.084077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.162 ms 00:17:25.602 [2024-12-13 23:50:56.084085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.602 [2024-12-13 23:50:56.084117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.602 [2024-12-13 23:50:56.084123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:25.602 [2024-12-13 23:50:56.084130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:25.602 [2024-12-13 23:50:56.084135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.602 [2024-12-13 23:50:56.084150] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:25.602 [2024-12-13 23:50:56.084167] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:25.602 [2024-12-13 23:50:56.084194] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:25.602 [2024-12-13 23:50:56.084208] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:25.602 [2024-12-13 23:50:56.084269] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:25.602 [2024-12-13 23:50:56.084278] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:25.602 [2024-12-13 23:50:56.084286] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:25.602 [2024-12-13 23:50:56.084294] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:25.602 [2024-12-13 23:50:56.084301] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:25.602 [2024-12-13 23:50:56.084307] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:25.602 [2024-12-13 23:50:56.084313] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:25.602 [2024-12-13 23:50:56.084320] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:25.602 [2024-12-13 23:50:56.084329] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:25.602 [2024-12-13 23:50:56.084335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.602 [2024-12-13 23:50:56.084340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:25.602 [2024-12-13 23:50:56.084346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:17:25.602 [2024-12-13 23:50:56.084352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.602 [2024-12-13 23:50:56.084402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.602 [2024-12-13 23:50:56.084408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:25.602 [2024-12-13 23:50:56.084414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:25.602 [2024-12-13 23:50:56.084419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.602 [2024-12-13 23:50:56.084494] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:25.602 [2024-12-13 23:50:56.084504] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:25.602 [2024-12-13 23:50:56.084511] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:25.602 [2024-12-13 23:50:56.084517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.603 [2024-12-13 23:50:56.084523] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:25.603 [2024-12-13 23:50:56.084529] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:25.603 [2024-12-13 23:50:56.084535] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:25.603 [2024-12-13 23:50:56.084540] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:25.603 [2024-12-13 23:50:56.084545] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:25.603 [2024-12-13 23:50:56.084551] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:25.603 [2024-12-13 23:50:56.084556] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:25.603 [2024-12-13 23:50:56.084562] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:25.603 [2024-12-13 23:50:56.084567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:25.603 [2024-12-13 23:50:56.084572] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:25.603 [2024-12-13 23:50:56.084583] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:25.603 [2024-12-13 23:50:56.084589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.603 [2024-12-13 23:50:56.084594] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:25.603 [2024-12-13 23:50:56.084599] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:25.603 [2024-12-13 23:50:56.084604] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:25.603 [2024-12-13 23:50:56.084609] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:25.603 [2024-12-13 23:50:56.084614] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:25.603 [2024-12-13 23:50:56.084619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:25.603 [2024-12-13 23:50:56.084625] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:25.603 [2024-12-13 23:50:56.084632] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:25.603 [2024-12-13 23:50:56.084636] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:25.603 [2024-12-13 23:50:56.084642] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:25.603 [2024-12-13 23:50:56.084647] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:25.603 [2024-12-13 23:50:56.084653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:25.603 [2024-12-13 23:50:56.084658] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:25.603 [2024-12-13 23:50:56.084663] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:25.603 [2024-12-13 23:50:56.084668] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:25.603 [2024-12-13 23:50:56.084673] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:25.603 [2024-12-13 23:50:56.084678] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:25.603 [2024-12-13 23:50:56.084683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:25.603 [2024-12-13 23:50:56.084688] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:25.603 [2024-12-13 23:50:56.084694] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:25.603 [2024-12-13 23:50:56.084699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:25.603 [2024-12-13 23:50:56.084704] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:25.603 [2024-12-13 23:50:56.084709] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:25.603 [2024-12-13 23:50:56.084714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:25.603 [2024-12-13 23:50:56.084718] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:25.603 [2024-12-13 23:50:56.084725] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:25.603 [2024-12-13 23:50:56.084732] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:25.603 [2024-12-13 23:50:56.084740] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.603 [2024-12-13 23:50:56.084746] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:25.603 [2024-12-13 23:50:56.084751] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:25.603 [2024-12-13 23:50:56.084756] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:25.603 [2024-12-13 23:50:56.084761] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:25.603 [2024-12-13 23:50:56.084766] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:25.603 [2024-12-13 23:50:56.084772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:25.603 [2024-12-13 23:50:56.084778] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:25.603 [2024-12-13 23:50:56.084785] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:25.603 [2024-12-13 23:50:56.084791] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:25.603 [2024-12-13 23:50:56.084797] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:25.603 [2024-12-13 23:50:56.084802] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:25.603 [2024-12-13 23:50:56.084807] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:25.603 [2024-12-13 23:50:56.084812] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:25.603 [2024-12-13 23:50:56.084819] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:25.603 [2024-12-13 23:50:56.084824] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:25.603 [2024-12-13 23:50:56.084829] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:25.603 [2024-12-13 23:50:56.084835] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:25.603 [2024-12-13 23:50:56.084840] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:25.603 [2024-12-13 23:50:56.084845] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:25.603 [2024-12-13 23:50:56.084850] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:25.603 [2024-12-13 23:50:56.084856] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:25.603 [2024-12-13 23:50:56.084863] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:25.603 [2024-12-13 23:50:56.084872] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:25.603 [2024-12-13 23:50:56.084878] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:25.603 [2024-12-13 23:50:56.084883] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:25.603 [2024-12-13 23:50:56.084889] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:25.603 [2024-12-13 23:50:56.084894] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:25.603 [2024-12-13 23:50:56.084908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.603 [2024-12-13 23:50:56.084915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:25.603 [2024-12-13 23:50:56.084920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:17:25.603 [2024-12-13 23:50:56.084929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.603 [2024-12-13 23:50:56.098850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.603 [2024-12-13 23:50:56.098878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:25.603 [2024-12-13 23:50:56.098887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.877 ms 00:17:25.603 [2024-12-13 23:50:56.098895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.603 [2024-12-13 23:50:56.098989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.603 [2024-12-13 23:50:56.098998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:25.603 [2024-12-13 23:50:56.099005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:25.603 [2024-12-13 23:50:56.099012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.603 [2024-12-13 23:50:56.141297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.603 [2024-12-13 23:50:56.141328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:25.603 [2024-12-13 23:50:56.141338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.266 ms 00:17:25.603 [2024-12-13 23:50:56.141346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.603 [2024-12-13 23:50:56.141406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.603 [2024-12-13 23:50:56.141414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:25.603 [2024-12-13 23:50:56.141424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:25.603 [2024-12-13 23:50:56.141430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.603 [2024-12-13 23:50:56.141826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.603 [2024-12-13 23:50:56.141847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:25.603 [2024-12-13 23:50:56.141854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:17:25.603 [2024-12-13 23:50:56.141860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.603 [2024-12-13 23:50:56.141967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.603 [2024-12-13 23:50:56.141981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:25.603 [2024-12-13 23:50:56.141988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:25.603 [2024-12-13 23:50:56.141995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.603 [2024-12-13 23:50:56.155068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.603 [2024-12-13 23:50:56.155094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:25.603 [2024-12-13 23:50:56.155102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.054 ms 00:17:25.603 
[2024-12-13 23:50:56.155110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.603 [2024-12-13 23:50:56.166035] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:25.603 [2024-12-13 23:50:56.166063] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:25.604 [2024-12-13 23:50:56.166072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.604 [2024-12-13 23:50:56.166079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:25.604 [2024-12-13 23:50:56.166087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.883 ms 00:17:25.604 [2024-12-13 23:50:56.166092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.604 [2024-12-13 23:50:56.185203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.604 [2024-12-13 23:50:56.185236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:25.604 [2024-12-13 23:50:56.185246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.054 ms 00:17:25.604 [2024-12-13 23:50:56.185252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.604 [2024-12-13 23:50:56.194926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.604 [2024-12-13 23:50:56.194953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:25.604 [2024-12-13 23:50:56.194967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.617 ms 00:17:25.604 [2024-12-13 23:50:56.194973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.604 [2024-12-13 23:50:56.204254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.604 [2024-12-13 23:50:56.204279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:25.604 [2024-12-13 23:50:56.204286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.239 ms 00:17:25.604 [2024-12-13 23:50:56.204292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.604 [2024-12-13 23:50:56.204579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.604 [2024-12-13 23:50:56.204590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:25.604 [2024-12-13 23:50:56.204597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:17:25.604 [2024-12-13 23:50:56.204606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.604 [2024-12-13 23:50:56.254587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.604 [2024-12-13 23:50:56.254620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:25.604 [2024-12-13 23:50:56.254630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.962 ms 00:17:25.604 [2024-12-13 23:50:56.254640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.604 [2024-12-13 23:50:56.262703] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:25.604 [2024-12-13 23:50:56.277823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.604 [2024-12-13 23:50:56.277852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:25.604 [2024-12-13 23:50:56.277863] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.106 ms 00:17:25.604 [2024-12-13 23:50:56.277869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.604 [2024-12-13 23:50:56.277930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.604 [2024-12-13 23:50:56.277939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:25.604 [2024-12-13 23:50:56.277948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:25.604 [2024-12-13 23:50:56.277955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.604 [2024-12-13 23:50:56.277998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.604 [2024-12-13 23:50:56.278005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:25.604 [2024-12-13 23:50:56.278011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:25.604 [2024-12-13 23:50:56.278018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.604 [2024-12-13 23:50:56.279062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.604 [2024-12-13 23:50:56.279088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:25.604 [2024-12-13 23:50:56.279095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.025 ms 00:17:25.604 [2024-12-13 23:50:56.279101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.604 [2024-12-13 23:50:56.279129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.604 [2024-12-13 23:50:56.279139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:25.604 [2024-12-13 23:50:56.279146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:25.604 [2024-12-13 23:50:56.279152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.604 [2024-12-13 23:50:56.279182] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:25.604 [2024-12-13 23:50:56.279190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.604 [2024-12-13 23:50:56.279196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:25.604 [2024-12-13 23:50:56.279202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:25.604 [2024-12-13 23:50:56.279207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.604 [2024-12-13 23:50:56.298407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.604 [2024-12-13 23:50:56.298434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:25.604 [2024-12-13 23:50:56.298444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.180 ms 00:17:25.604 [2024-12-13 23:50:56.298450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.604 [2024-12-13 23:50:56.298530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.604 [2024-12-13 23:50:56.298539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:25.604 [2024-12-13 23:50:56.298546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:25.604 [2024-12-13 23:50:56.298553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.604 [2024-12-13 23:50:56.299324] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:25.604 [2024-12-13 23:50:56.301801] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 239.359 ms, result 0
00:17:25.604 [2024-12-13 23:50:56.302827] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:25.604 [2024-12-13 23:50:56.313982] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:27.048 [2024-12-13T23:50:58.723Z] Copying: 15/256 [MB] (15 MBps)
[2024-12-13T23:50:59.668Z] Copying: 26/256 [MB] (11 MBps)
[2024-12-13T23:51:00.611Z] Copying: 38/256 [MB] (11 MBps)
[2024-12-13T23:51:01.555Z] Copying: 49/256 [MB] (11 MBps)
[2024-12-13T23:51:02.499Z] Copying: 60/256 [MB] (11 MBps)
[2024-12-13T23:51:03.443Z] Copying: 71/256 [MB] (11 MBps)
[2024-12-13T23:51:04.388Z] Copying: 82/256 [MB] (11 MBps)
[2024-12-13T23:51:05.773Z] Copying: 94/256 [MB] (11 MBps)
[2024-12-13T23:51:06.716Z] Copying: 105/256 [MB] (11 MBps)
[2024-12-13T23:51:07.662Z] Copying: 117/256 [MB] (11 MBps)
[2024-12-13T23:51:08.607Z] Copying: 128/256 [MB] (11 MBps)
[2024-12-13T23:51:09.551Z] Copying: 140/256 [MB] (11 MBps)
[2024-12-13T23:51:10.496Z] Copying: 151/256 [MB] (11 MBps)
[2024-12-13T23:51:11.440Z] Copying: 162/256 [MB] (10 MBps)
[2024-12-13T23:51:12.383Z] Copying: 173/256 [MB] (10 MBps)
[2024-12-13T23:51:13.767Z] Copying: 185/256 [MB] (12 MBps)
[2024-12-13T23:51:14.712Z] Copying: 196/256 [MB] (11 MBps)
[2024-12-13T23:51:15.654Z] Copying: 208/256 [MB] (12 MBps)
[2024-12-13T23:51:16.596Z] Copying: 223/256 [MB] (14 MBps)
[2024-12-13T23:51:17.542Z] Copying: 234/256 [MB] (11 MBps)
[2024-12-13T23:51:18.487Z] Copying: 245/256 [MB] (10 MBps)
[2024-12-13T23:51:18.487Z] Copying: 255/256 [MB] (10 MBps)
[2024-12-13T23:51:19.059Z] Copying: 256/256 [MB] (average 11 MBps)
[2024-12-13 23:51:18.813044] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:48.328 [2024-12-13 23:51:18.825009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:48.328 [2024-12-13 23:51:18.825056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:17:48.328 [2024-12-13 23:51:18.825071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:17:48.328 [2024-12-13 23:51:18.825081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:48.328 [2024-12-13 23:51:18.825108] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:17:48.328 [2024-12-13 23:51:18.828349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:48.328 [2024-12-13 23:51:18.828376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:17:48.328 [2024-12-13 23:51:18.828386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.225 ms
00:17:48.328 [2024-12-13 23:51:18.828393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:48.328 [2024-12-13 23:51:18.828678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:48.328 [2024-12-13 23:51:18.828693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:17:48.328 [2024-12-13 23:51:18.828701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms
00:17:48.328 [2024-12-13 23:51:18.828712] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*:
[FTL][ftl0] status: 0 00:17:48.328 [2024-12-13 23:51:18.832403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.328 [2024-12-13 23:51:18.832422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:48.328 [2024-12-13 23:51:18.832431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.676 ms 00:17:48.328 [2024-12-13 23:51:18.832440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.328 [2024-12-13 23:51:18.840051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.328 [2024-12-13 23:51:18.840079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:48.328 [2024-12-13 23:51:18.840088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.582 ms 00:17:48.328 [2024-12-13 23:51:18.840096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.328 [2024-12-13 23:51:18.864628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.328 [2024-12-13 23:51:18.864661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:48.328 [2024-12-13 23:51:18.864672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.470 ms 00:17:48.328 [2024-12-13 23:51:18.864679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.328 [2024-12-13 23:51:18.880775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.328 [2024-12-13 23:51:18.880806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:48.328 [2024-12-13 23:51:18.880818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.048 ms 00:17:48.328 [2024-12-13 23:51:18.880825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.328 [2024-12-13 23:51:18.880973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.328 [2024-12-13 23:51:18.880984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:48.328 [2024-12-13 23:51:18.880993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:17:48.328 [2024-12-13 23:51:18.881000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.328 [2024-12-13 23:51:18.904714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.328 [2024-12-13 23:51:18.904745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:48.328 [2024-12-13 23:51:18.904754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.697 ms 00:17:48.328 [2024-12-13 23:51:18.904761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.328 [2024-12-13 23:51:18.930386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.328 [2024-12-13 23:51:18.930419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:48.328 [2024-12-13 23:51:18.930429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.582 ms 00:17:48.328 [2024-12-13 23:51:18.930436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.328 [2024-12-13 23:51:18.954278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.328 [2024-12-13 23:51:18.954310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:48.328 [2024-12-13 23:51:18.954320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.774 ms 00:17:48.328 
[2024-12-13 23:51:18.954327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.328 [2024-12-13 23:51:18.977900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.328 [2024-12-13 23:51:18.977934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:48.328 [2024-12-13 23:51:18.977945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.497 ms 00:17:48.328 [2024-12-13 23:51:18.977952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.328 [2024-12-13 23:51:18.978002] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:48.328 [2024-12-13 23:51:18.978017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:48.328 [2024-12-13 23:51:18.978028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:48.328 [2024-12-13 23:51:18.978036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:48.328 [2024-12-13 23:51:18.978044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:48.328 [2024-12-13 23:51:18.978052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:48.328 [2024-12-13 23:51:18.978059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:48.328 [2024-12-13 23:51:18.978067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:48.328 [2024-12-13 23:51:18.978074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:48.328 [2024-12-13 23:51:18.978083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:48.328 [2024-12-13 23:51:18.978090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:48.328 [2024-12-13 23:51:18.978097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:48.328 [2024-12-13 23:51:18.978105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:48.328 [2024-12-13 23:51:18.978113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:48.328 [2024-12-13 23:51:18.978120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:48.328 [2024-12-13 23:51:18.978128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:48.328 [2024-12-13 23:51:18.978135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:48.328 [2024-12-13 23:51:18.978142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:48.328 [2024-12-13 23:51:18.978150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:48.328 [2024-12-13 23:51:18.978157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:48.328 [2024-12-13 23:51:18.978164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:48.328 [2024-12-13 23:51:18.978171] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 21-100: 0 / 261120 wr_cnt: 0 state: free
00:17:48.329 [2024-12-13 23:51:18.978828] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:48.329 [2024-12-13 23:51:18.978836] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d3f1ad20-b815-42d4-97fb-b3e13195de9e
00:17:48.329 [2024-12-13 23:51:18.978844] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:17:48.329 [2024-12-13 23:51:18.978852] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:17:48.329 [2024-12-13 23:51:18.978859] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:17:48.329 [2024-12-13 23:51:18.978867] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:17:48.329 [2024-12-13 23:51:18.978874] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:17:48.329 [2024-12-13 23:51:18.978885] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:17:48.329 [2024-12-13 23:51:18.978893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:17:48.329 [2024-12-13 23:51:18.978899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:17:48.329 [2024-12-13 23:51:18.978905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:17:48.329 [2024-12-13 23:51:18.978912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:48.329 [2024-12-13 23:51:18.978920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:17:48.329 [2024-12-13 23:51:18.978929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.911 ms
00:17:48.329 [2024-12-13 23:51:18.978936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:48.329 [2024-12-13 23:51:18.991934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:48.329 [2024-12-13 23:51:18.991968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:17:48.329 [2024-12-13 23:51:18.991984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.966 ms
00:17:48.329 [2024-12-13 23:51:18.991991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:48.329 [2024-12-13 23:51:18.992217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:48.329 [2024-12-13 23:51:18.992226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:17:48.329 [2024-12-13 23:51:18.992234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms
00:17:48.329 [2024-12-13 23:51:18.992241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:48.329 [2024-12-13 23:51:19.029974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:48.329 [2024-12-13 23:51:19.030112] mngt/ftl_mngt.c: 407:trace_step:
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:48.329 [2024-12-13 23:51:19.030134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.329 [2024-12-13 23:51:19.030142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.329 [2024-12-13 23:51:19.030222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.329 [2024-12-13 23:51:19.030231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:48.329 [2024-12-13 23:51:19.030239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.329 [2024-12-13 23:51:19.030246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.329 [2024-12-13 23:51:19.030284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.329 [2024-12-13 23:51:19.030293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:48.329 [2024-12-13 23:51:19.030300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.329 [2024-12-13 23:51:19.030310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.329 [2024-12-13 23:51:19.030328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.329 [2024-12-13 23:51:19.030336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:48.329 [2024-12-13 23:51:19.030343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.329 [2024-12-13 23:51:19.030350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.591 [2024-12-13 23:51:19.105081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.591 [2024-12-13 23:51:19.105247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:48.591 [2024-12-13 23:51:19.105269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.591 [2024-12-13 23:51:19.105277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.591 [2024-12-13 23:51:19.134708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.591 [2024-12-13 23:51:19.134745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:48.591 [2024-12-13 23:51:19.134754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.591 [2024-12-13 23:51:19.134762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.591 [2024-12-13 23:51:19.134813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.591 [2024-12-13 23:51:19.134823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:48.591 [2024-12-13 23:51:19.134831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.591 [2024-12-13 23:51:19.134838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.591 [2024-12-13 23:51:19.134875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.591 [2024-12-13 23:51:19.134883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:48.591 [2024-12-13 23:51:19.134891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.591 [2024-12-13 23:51:19.134899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.591 [2024-12-13 23:51:19.134991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:17:48.591 [2024-12-13 23:51:19.135001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:48.591 [2024-12-13 23:51:19.135009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.591 [2024-12-13 23:51:19.135016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.591 [2024-12-13 23:51:19.135049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.591 [2024-12-13 23:51:19.135059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:48.591 [2024-12-13 23:51:19.135067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.591 [2024-12-13 23:51:19.135075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.591 [2024-12-13 23:51:19.135113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.591 [2024-12-13 23:51:19.135122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:48.591 [2024-12-13 23:51:19.135130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.591 [2024-12-13 23:51:19.135138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.591 [2024-12-13 23:51:19.135185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.591 [2024-12-13 23:51:19.135199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:48.591 [2024-12-13 23:51:19.135207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.591 [2024-12-13 23:51:19.135214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.591 [2024-12-13 23:51:19.135360] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 310.358 ms, result 0 00:17:49.557 00:17:49.557 00:17:49.557 23:51:20 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:50.177 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:50.177 23:51:20 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:50.177 23:51:20 -- ftl/trim.sh@109 -- # fio_kill 00:17:50.177 23:51:20 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:50.177 23:51:20 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:50.177 23:51:20 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:50.177 23:51:20 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:50.177 Process with pid 72775 is not found 00:17:50.177 23:51:20 -- ftl/trim.sh@20 -- # killprocess 72775 00:17:50.177 23:51:20 -- common/autotest_common.sh@936 -- # '[' -z 72775 ']' 00:17:50.177 23:51:20 -- common/autotest_common.sh@940 -- # kill -0 72775 00:17:50.177 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (72775) - No such process 00:17:50.177 23:51:20 -- common/autotest_common.sh@963 -- # echo 'Process with pid 72775 is not found' 00:17:50.177 ************************************ 00:17:50.177 END TEST ftl_trim 00:17:50.177 ************************************ 00:17:50.177 00:17:50.177 real 1m32.676s 00:17:50.177 user 1m54.545s 00:17:50.177 sys 0m5.271s 00:17:50.177 23:51:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:50.177 23:51:20 -- common/autotest_common.sh@10 -- # set +x 00:17:50.177 23:51:20 -- ftl/ftl.sh@77 -- # run_test ftl_restore 
/home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:17:50.177 23:51:20 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:50.177 23:51:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:50.177 23:51:20 -- common/autotest_common.sh@10 -- # set +x 00:17:50.177 ************************************ 00:17:50.177 START TEST ftl_restore 00:17:50.177 ************************************ 00:17:50.177 23:51:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:17:50.177 * Looking for test storage... 00:17:50.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:50.177 23:51:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:50.177 23:51:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:50.177 23:51:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:50.177 23:51:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:50.177 23:51:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:50.177 23:51:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:50.177 23:51:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:50.177 23:51:20 -- scripts/common.sh@335 -- # IFS=.-: 00:17:50.177 23:51:20 -- scripts/common.sh@335 -- # read -ra ver1 00:17:50.177 23:51:20 -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.177 23:51:20 -- scripts/common.sh@336 -- # read -ra ver2 00:17:50.177 23:51:20 -- scripts/common.sh@337 -- # local 'op=<' 00:17:50.177 23:51:20 -- scripts/common.sh@339 -- # ver1_l=2 00:17:50.177 23:51:20 -- scripts/common.sh@340 -- # ver2_l=1 00:17:50.177 23:51:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:50.177 23:51:20 -- scripts/common.sh@343 -- # case "$op" in 00:17:50.177 23:51:20 -- scripts/common.sh@344 -- # : 1 00:17:50.177 23:51:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:50.177 23:51:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:50.177 23:51:20 -- scripts/common.sh@364 -- # decimal 1 00:17:50.177 23:51:20 -- scripts/common.sh@352 -- # local d=1 00:17:50.178 23:51:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.178 23:51:20 -- scripts/common.sh@354 -- # echo 1 00:17:50.178 23:51:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:50.178 23:51:20 -- scripts/common.sh@365 -- # decimal 2 00:17:50.178 23:51:20 -- scripts/common.sh@352 -- # local d=2 00:17:50.178 23:51:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:50.178 23:51:20 -- scripts/common.sh@354 -- # echo 2 00:17:50.178 23:51:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:50.178 23:51:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:50.178 23:51:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:50.178 23:51:20 -- scripts/common.sh@367 -- # return 0 00:17:50.178 23:51:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.178 23:51:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:50.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.178 --rc genhtml_branch_coverage=1 00:17:50.178 --rc genhtml_function_coverage=1 00:17:50.178 --rc genhtml_legend=1 00:17:50.178 --rc geninfo_all_blocks=1 00:17:50.178 --rc geninfo_unexecuted_blocks=1 00:17:50.178 00:17:50.178 ' 00:17:50.178 23:51:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:50.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.178 --rc genhtml_branch_coverage=1 00:17:50.178 --rc genhtml_function_coverage=1 00:17:50.178 --rc genhtml_legend=1 00:17:50.178 --rc geninfo_all_blocks=1 00:17:50.178 --rc geninfo_unexecuted_blocks=1 00:17:50.178 00:17:50.178 ' 00:17:50.178 23:51:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:50.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.178 --rc genhtml_branch_coverage=1 00:17:50.178 --rc genhtml_function_coverage=1 00:17:50.178 --rc genhtml_legend=1 00:17:50.178 --rc geninfo_all_blocks=1 00:17:50.178 --rc geninfo_unexecuted_blocks=1 00:17:50.178 00:17:50.178 ' 00:17:50.178 23:51:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:50.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.178 --rc genhtml_branch_coverage=1 00:17:50.178 --rc genhtml_function_coverage=1 00:17:50.178 --rc genhtml_legend=1 00:17:50.178 --rc geninfo_all_blocks=1 00:17:50.178 --rc geninfo_unexecuted_blocks=1 00:17:50.178 00:17:50.178 ' 00:17:50.178 23:51:20 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:50.178 23:51:20 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:50.178 23:51:20 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:50.178 23:51:20 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:50.178 23:51:20 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
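The xtrace above is scripts/common.sh deciding whether the installed lcov is at least version 2: cmp_versions splits both version strings on '.', '-' and ':' (IFS=.-:), walks the components left to right, and compares each pair numerically, so "1.15 < 2" is settled by 1 < 2 on the first component rather than by string order. A minimal standalone sketch of the same componentwise comparison, assuming purely numeric components (ver_lt is a hypothetical name, not the harness's own helper):

    # ver_lt A B -- succeed (return 0) if version A sorts strictly before B.
    ver_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            # Missing components count as 0; 10# forces base 10 despite leading zeros.
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not strictly less
    }

    ver_lt 1.15 2 && echo yes   # prints yes, matching the trace's 'lt 1.15 2' succeeding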
00:17:50.439 23:51:20 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:50.439 23:51:20 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.439 23:51:20 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:50.439 23:51:20 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:50.439 23:51:20 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.439 23:51:20 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.439 23:51:20 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:50.439 23:51:20 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:50.439 23:51:20 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:50.439 23:51:20 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:50.439 23:51:20 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:50.439 23:51:20 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:50.439 23:51:20 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.439 23:51:20 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.439 23:51:20 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:50.439 23:51:20 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:50.439 23:51:20 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:50.439 23:51:20 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:50.439 23:51:20 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:50.439 23:51:20 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:50.439 23:51:20 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:50.439 23:51:20 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:50.439 23:51:20 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:50.439 23:51:20 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:50.439 23:51:20 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.439 23:51:20 -- ftl/restore.sh@13 -- # mktemp -d 00:17:50.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
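Everything ftl/common.sh exported above is plumbing for the target/initiator pair: the SPDK target is pinned to core [0] and a prospective initiator to core [1], with fixed binary, RPC-socket and JSON-config paths, and empty PID variables for the test to fill in. Condensed into a sketch, the launch sequence restore.sh performs next looks roughly like this (paths are the ones from this run; waitforlisten and restore_kill are harness functions defined in autotest_common.sh and restore.sh respectively):

    rootdir=/home/vagrant/spdk_repo/spdk
    rpc_py=$rootdir/scripts/rpc.py
    spdk_tgt_bin=$rootdir/build/bin/spdk_tgt
    mount_dir=$(mktemp -d)              # scratch dir for the restored filesystem

    trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
    "$spdk_tgt_bin" &                   # start the SPDK target in the background
    svcpid=$!
    waitforlisten "$svcpid"             # block until /var/tmp/spdk.sock answers

Once waitforlisten returns, the RPC socket is live and every subsequent setup step in the log is an rpc.py call against it.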
00:17:50.439 23:51:20 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.1eqYcmnHqp 00:17:50.439 23:51:20 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:50.439 23:51:20 -- ftl/restore.sh@16 -- # case $opt in 00:17:50.439 23:51:20 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0 00:17:50.439 23:51:20 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:50.439 23:51:20 -- ftl/restore.sh@23 -- # shift 2 00:17:50.439 23:51:20 -- ftl/restore.sh@24 -- # device=0000:00:07.0 00:17:50.439 23:51:20 -- ftl/restore.sh@25 -- # timeout=240 00:17:50.439 23:51:20 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:50.439 23:51:20 -- ftl/restore.sh@39 -- # svcpid=73172 00:17:50.439 23:51:20 -- ftl/restore.sh@41 -- # waitforlisten 73172 00:17:50.439 23:51:20 -- common/autotest_common.sh@829 -- # '[' -z 73172 ']' 00:17:50.439 23:51:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.439 23:51:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:50.439 23:51:20 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.439 23:51:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.439 23:51:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:50.439 23:51:20 -- common/autotest_common.sh@10 -- # set +x 00:17:50.439 [2024-12-13 23:51:21.000308] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:50.439 [2024-12-13 23:51:21.000610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73172 ] 00:17:50.439 [2024-12-13 23:51:21.149627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.700 [2024-12-13 23:51:21.329851] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:50.700 [2024-12-13 23:51:21.330063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.085 23:51:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.085 23:51:22 -- common/autotest_common.sh@862 -- # return 0 00:17:52.085 23:51:22 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:17:52.085 23:51:22 -- ftl/common.sh@54 -- # local name=nvme0 00:17:52.085 23:51:22 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:17:52.085 23:51:22 -- ftl/common.sh@56 -- # local size=103424 00:17:52.085 23:51:22 -- ftl/common.sh@59 -- # local base_bdev 00:17:52.085 23:51:22 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:17:52.085 23:51:22 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:52.085 23:51:22 -- ftl/common.sh@62 -- # local base_size 00:17:52.085 23:51:22 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:52.085 23:51:22 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:17:52.085 23:51:22 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:52.085 23:51:22 -- common/autotest_common.sh@1369 -- # local bs 00:17:52.085 23:51:22 -- common/autotest_common.sh@1370 -- # local nb 00:17:52.085 23:51:22 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:52.347 23:51:23 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:52.347 { 00:17:52.347 "name": 
"nvme0n1", 00:17:52.347 "aliases": [ 00:17:52.347 "b8a3d073-5f37-4c2b-a2a8-eded22aedd2b" 00:17:52.347 ], 00:17:52.347 "product_name": "NVMe disk", 00:17:52.347 "block_size": 4096, 00:17:52.347 "num_blocks": 1310720, 00:17:52.347 "uuid": "b8a3d073-5f37-4c2b-a2a8-eded22aedd2b", 00:17:52.347 "assigned_rate_limits": { 00:17:52.347 "rw_ios_per_sec": 0, 00:17:52.347 "rw_mbytes_per_sec": 0, 00:17:52.347 "r_mbytes_per_sec": 0, 00:17:52.347 "w_mbytes_per_sec": 0 00:17:52.347 }, 00:17:52.347 "claimed": true, 00:17:52.347 "claim_type": "read_many_write_one", 00:17:52.347 "zoned": false, 00:17:52.347 "supported_io_types": { 00:17:52.347 "read": true, 00:17:52.347 "write": true, 00:17:52.347 "unmap": true, 00:17:52.347 "write_zeroes": true, 00:17:52.347 "flush": true, 00:17:52.347 "reset": true, 00:17:52.347 "compare": true, 00:17:52.347 "compare_and_write": false, 00:17:52.347 "abort": true, 00:17:52.347 "nvme_admin": true, 00:17:52.347 "nvme_io": true 00:17:52.347 }, 00:17:52.347 "driver_specific": { 00:17:52.347 "nvme": [ 00:17:52.347 { 00:17:52.347 "pci_address": "0000:00:07.0", 00:17:52.347 "trid": { 00:17:52.347 "trtype": "PCIe", 00:17:52.347 "traddr": "0000:00:07.0" 00:17:52.347 }, 00:17:52.347 "ctrlr_data": { 00:17:52.347 "cntlid": 0, 00:17:52.347 "vendor_id": "0x1b36", 00:17:52.347 "model_number": "QEMU NVMe Ctrl", 00:17:52.347 "serial_number": "12341", 00:17:52.347 "firmware_revision": "8.0.0", 00:17:52.347 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:52.347 "oacs": { 00:17:52.347 "security": 0, 00:17:52.347 "format": 1, 00:17:52.347 "firmware": 0, 00:17:52.347 "ns_manage": 1 00:17:52.347 }, 00:17:52.347 "multi_ctrlr": false, 00:17:52.347 "ana_reporting": false 00:17:52.347 }, 00:17:52.347 "vs": { 00:17:52.347 "nvme_version": "1.4" 00:17:52.347 }, 00:17:52.347 "ns_data": { 00:17:52.347 "id": 1, 00:17:52.347 "can_share": false 00:17:52.347 } 00:17:52.347 } 00:17:52.347 ], 00:17:52.347 "mp_policy": "active_passive" 00:17:52.347 } 00:17:52.347 } 00:17:52.347 ]' 00:17:52.347 23:51:23 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:52.347 23:51:23 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:52.347 23:51:23 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:52.609 23:51:23 -- common/autotest_common.sh@1373 -- # nb=1310720 00:17:52.609 23:51:23 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:17:52.609 23:51:23 -- common/autotest_common.sh@1377 -- # echo 5120 00:17:52.609 23:51:23 -- ftl/common.sh@63 -- # base_size=5120 00:17:52.609 23:51:23 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:52.609 23:51:23 -- ftl/common.sh@67 -- # clear_lvols 00:17:52.609 23:51:23 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:52.609 23:51:23 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:52.609 23:51:23 -- ftl/common.sh@28 -- # stores=1138766e-0956-4a64-aa01-efb9dc24487c 00:17:52.609 23:51:23 -- ftl/common.sh@29 -- # for lvs in $stores 00:17:52.609 23:51:23 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1138766e-0956-4a64-aa01-efb9dc24487c 00:17:52.870 23:51:23 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:53.131 23:51:23 -- ftl/common.sh@68 -- # lvs=bfa15716-9047-4df5-b9f9-851077ac5080 00:17:53.131 23:51:23 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u bfa15716-9047-4df5-b9f9-851077ac5080 00:17:53.393 23:51:23 -- ftl/restore.sh@43 
-- # split_bdev=a1433d16-e262-4194-8fc9-186e812901e8 00:17:53.393 23:51:23 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:17:53.393 23:51:23 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 a1433d16-e262-4194-8fc9-186e812901e8 00:17:53.393 23:51:23 -- ftl/common.sh@35 -- # local name=nvc0 00:17:53.393 23:51:23 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:17:53.393 23:51:23 -- ftl/common.sh@37 -- # local base_bdev=a1433d16-e262-4194-8fc9-186e812901e8 00:17:53.393 23:51:23 -- ftl/common.sh@38 -- # local cache_size= 00:17:53.393 23:51:23 -- ftl/common.sh@41 -- # get_bdev_size a1433d16-e262-4194-8fc9-186e812901e8 00:17:53.393 23:51:23 -- common/autotest_common.sh@1367 -- # local bdev_name=a1433d16-e262-4194-8fc9-186e812901e8 00:17:53.393 23:51:23 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:53.393 23:51:23 -- common/autotest_common.sh@1369 -- # local bs 00:17:53.393 23:51:23 -- common/autotest_common.sh@1370 -- # local nb 00:17:53.393 23:51:23 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a1433d16-e262-4194-8fc9-186e812901e8 00:17:53.393 23:51:24 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:53.393 { 00:17:53.393 "name": "a1433d16-e262-4194-8fc9-186e812901e8", 00:17:53.393 "aliases": [ 00:17:53.393 "lvs/nvme0n1p0" 00:17:53.393 ], 00:17:53.393 "product_name": "Logical Volume", 00:17:53.393 "block_size": 4096, 00:17:53.393 "num_blocks": 26476544, 00:17:53.393 "uuid": "a1433d16-e262-4194-8fc9-186e812901e8", 00:17:53.393 "assigned_rate_limits": { 00:17:53.393 "rw_ios_per_sec": 0, 00:17:53.393 "rw_mbytes_per_sec": 0, 00:17:53.393 "r_mbytes_per_sec": 0, 00:17:53.393 "w_mbytes_per_sec": 0 00:17:53.393 }, 00:17:53.393 "claimed": false, 00:17:53.393 "zoned": false, 00:17:53.393 "supported_io_types": { 00:17:53.393 "read": true, 00:17:53.393 "write": true, 00:17:53.393 "unmap": true, 00:17:53.393 "write_zeroes": true, 00:17:53.393 "flush": false, 00:17:53.393 "reset": true, 00:17:53.393 "compare": false, 00:17:53.393 "compare_and_write": false, 00:17:53.393 "abort": false, 00:17:53.393 "nvme_admin": false, 00:17:53.393 "nvme_io": false 00:17:53.393 }, 00:17:53.393 "driver_specific": { 00:17:53.393 "lvol": { 00:17:53.393 "lvol_store_uuid": "bfa15716-9047-4df5-b9f9-851077ac5080", 00:17:53.393 "base_bdev": "nvme0n1", 00:17:53.393 "thin_provision": true, 00:17:53.393 "snapshot": false, 00:17:53.393 "clone": false, 00:17:53.393 "esnap_clone": false 00:17:53.393 } 00:17:53.393 } 00:17:53.393 } 00:17:53.393 ]' 00:17:53.393 23:51:24 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:53.654 23:51:24 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:53.654 23:51:24 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:53.654 23:51:24 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:53.654 23:51:24 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:53.654 23:51:24 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:53.654 23:51:24 -- ftl/common.sh@41 -- # local base_size=5171 00:17:53.654 23:51:24 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:53.654 23:51:24 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:17:53.915 23:51:24 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:53.915 23:51:24 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:53.915 23:51:24 -- ftl/common.sh@48 -- # get_bdev_size a1433d16-e262-4194-8fc9-186e812901e8 00:17:53.915 23:51:24 -- 
common/autotest_common.sh@1367 -- # local bdev_name=a1433d16-e262-4194-8fc9-186e812901e8 00:17:53.915 23:51:24 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:53.915 23:51:24 -- common/autotest_common.sh@1369 -- # local bs 00:17:53.915 23:51:24 -- common/autotest_common.sh@1370 -- # local nb 00:17:53.915 23:51:24 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a1433d16-e262-4194-8fc9-186e812901e8 00:17:53.915 23:51:24 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:53.915 { 00:17:53.915 "name": "a1433d16-e262-4194-8fc9-186e812901e8", 00:17:53.915 "aliases": [ 00:17:53.915 "lvs/nvme0n1p0" 00:17:53.915 ], 00:17:53.915 "product_name": "Logical Volume", 00:17:53.915 "block_size": 4096, 00:17:53.915 "num_blocks": 26476544, 00:17:53.915 "uuid": "a1433d16-e262-4194-8fc9-186e812901e8", 00:17:53.915 "assigned_rate_limits": { 00:17:53.915 "rw_ios_per_sec": 0, 00:17:53.915 "rw_mbytes_per_sec": 0, 00:17:53.915 "r_mbytes_per_sec": 0, 00:17:53.915 "w_mbytes_per_sec": 0 00:17:53.915 }, 00:17:53.915 "claimed": false, 00:17:53.915 "zoned": false, 00:17:53.915 "supported_io_types": { 00:17:53.915 "read": true, 00:17:53.915 "write": true, 00:17:53.915 "unmap": true, 00:17:53.915 "write_zeroes": true, 00:17:53.915 "flush": false, 00:17:53.915 "reset": true, 00:17:53.915 "compare": false, 00:17:53.915 "compare_and_write": false, 00:17:53.915 "abort": false, 00:17:53.915 "nvme_admin": false, 00:17:53.915 "nvme_io": false 00:17:53.915 }, 00:17:53.915 "driver_specific": { 00:17:53.915 "lvol": { 00:17:53.915 "lvol_store_uuid": "bfa15716-9047-4df5-b9f9-851077ac5080", 00:17:53.915 "base_bdev": "nvme0n1", 00:17:53.915 "thin_provision": true, 00:17:53.915 "snapshot": false, 00:17:53.915 "clone": false, 00:17:53.915 "esnap_clone": false 00:17:53.915 } 00:17:53.915 } 00:17:53.915 } 00:17:53.915 ]' 00:17:53.915 23:51:24 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:53.915 23:51:24 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:53.915 23:51:24 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:54.177 23:51:24 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:54.177 23:51:24 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:54.177 23:51:24 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:54.177 23:51:24 -- ftl/common.sh@48 -- # cache_size=5171 00:17:54.177 23:51:24 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:54.177 23:51:24 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:54.177 23:51:24 -- ftl/restore.sh@48 -- # get_bdev_size a1433d16-e262-4194-8fc9-186e812901e8 00:17:54.177 23:51:24 -- common/autotest_common.sh@1367 -- # local bdev_name=a1433d16-e262-4194-8fc9-186e812901e8 00:17:54.177 23:51:24 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:54.177 23:51:24 -- common/autotest_common.sh@1369 -- # local bs 00:17:54.177 23:51:24 -- common/autotest_common.sh@1370 -- # local nb 00:17:54.177 23:51:24 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a1433d16-e262-4194-8fc9-186e812901e8 00:17:54.438 23:51:25 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:54.438 { 00:17:54.438 "name": "a1433d16-e262-4194-8fc9-186e812901e8", 00:17:54.438 "aliases": [ 00:17:54.438 "lvs/nvme0n1p0" 00:17:54.438 ], 00:17:54.438 "product_name": "Logical Volume", 00:17:54.438 "block_size": 4096, 00:17:54.438 "num_blocks": 26476544, 00:17:54.438 "uuid": 
"a1433d16-e262-4194-8fc9-186e812901e8", 00:17:54.438 "assigned_rate_limits": { 00:17:54.438 "rw_ios_per_sec": 0, 00:17:54.438 "rw_mbytes_per_sec": 0, 00:17:54.438 "r_mbytes_per_sec": 0, 00:17:54.438 "w_mbytes_per_sec": 0 00:17:54.438 }, 00:17:54.438 "claimed": false, 00:17:54.438 "zoned": false, 00:17:54.438 "supported_io_types": { 00:17:54.438 "read": true, 00:17:54.438 "write": true, 00:17:54.438 "unmap": true, 00:17:54.438 "write_zeroes": true, 00:17:54.438 "flush": false, 00:17:54.438 "reset": true, 00:17:54.438 "compare": false, 00:17:54.438 "compare_and_write": false, 00:17:54.438 "abort": false, 00:17:54.438 "nvme_admin": false, 00:17:54.438 "nvme_io": false 00:17:54.438 }, 00:17:54.438 "driver_specific": { 00:17:54.438 "lvol": { 00:17:54.438 "lvol_store_uuid": "bfa15716-9047-4df5-b9f9-851077ac5080", 00:17:54.438 "base_bdev": "nvme0n1", 00:17:54.438 "thin_provision": true, 00:17:54.438 "snapshot": false, 00:17:54.438 "clone": false, 00:17:54.438 "esnap_clone": false 00:17:54.438 } 00:17:54.438 } 00:17:54.438 } 00:17:54.438 ]' 00:17:54.438 23:51:25 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:54.438 23:51:25 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:54.438 23:51:25 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:54.438 23:51:25 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:54.438 23:51:25 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:54.438 23:51:25 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:54.438 23:51:25 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:54.438 23:51:25 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d a1433d16-e262-4194-8fc9-186e812901e8 --l2p_dram_limit 10' 00:17:54.438 23:51:25 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:54.438 23:51:25 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:17:54.438 23:51:25 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:54.438 23:51:25 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:54.438 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:54.438 23:51:25 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a1433d16-e262-4194-8fc9-186e812901e8 --l2p_dram_limit 10 -c nvc0n1p0 00:17:54.700 [2024-12-13 23:51:25.223846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.700 [2024-12-13 23:51:25.223891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:54.700 [2024-12-13 23:51:25.223906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:54.700 [2024-12-13 23:51:25.223915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.700 [2024-12-13 23:51:25.223965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.700 [2024-12-13 23:51:25.223974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:54.700 [2024-12-13 23:51:25.223984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:54.700 [2024-12-13 23:51:25.223991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.700 [2024-12-13 23:51:25.224011] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:54.700 [2024-12-13 23:51:25.224724] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:54.700 [2024-12-13 23:51:25.224744] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.700 [2024-12-13 23:51:25.224751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:54.700 [2024-12-13 23:51:25.224762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:17:54.700 [2024-12-13 23:51:25.224769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.700 [2024-12-13 23:51:25.224802] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5ea95622-f05f-4568-a570-323b970f1ac5 00:17:54.700 [2024-12-13 23:51:25.225896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.700 [2024-12-13 23:51:25.226026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:54.700 [2024-12-13 23:51:25.226043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:54.700 [2024-12-13 23:51:25.226052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.700 [2024-12-13 23:51:25.231354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.700 [2024-12-13 23:51:25.231388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:54.700 [2024-12-13 23:51:25.231397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.225 ms 00:17:54.700 [2024-12-13 23:51:25.231406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.701 [2024-12-13 23:51:25.231498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.701 [2024-12-13 23:51:25.231509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:54.701 [2024-12-13 23:51:25.231517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:54.701 [2024-12-13 23:51:25.231529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.701 [2024-12-13 23:51:25.231576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.701 [2024-12-13 23:51:25.231589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:54.701 [2024-12-13 23:51:25.231597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:54.701 [2024-12-13 23:51:25.231605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.701 [2024-12-13 23:51:25.231628] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:54.701 [2024-12-13 23:51:25.235285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.701 [2024-12-13 23:51:25.235311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:54.701 [2024-12-13 23:51:25.235322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.662 ms 00:17:54.701 [2024-12-13 23:51:25.235329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.701 [2024-12-13 23:51:25.235362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.701 [2024-12-13 23:51:25.235369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:54.701 [2024-12-13 23:51:25.235379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:54.701 [2024-12-13 23:51:25.235386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.701 [2024-12-13 23:51:25.235403] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup 
mode 1 00:17:54.701 [2024-12-13 23:51:25.235531] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:54.701 [2024-12-13 23:51:25.235546] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:54.701 [2024-12-13 23:51:25.235556] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:54.701 [2024-12-13 23:51:25.235568] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:54.701 [2024-12-13 23:51:25.235576] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:54.701 [2024-12-13 23:51:25.235588] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:54.701 [2024-12-13 23:51:25.235602] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:54.701 [2024-12-13 23:51:25.235610] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:54.701 [2024-12-13 23:51:25.235617] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:54.701 [2024-12-13 23:51:25.235626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.701 [2024-12-13 23:51:25.235633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:54.701 [2024-12-13 23:51:25.235651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:17:54.701 [2024-12-13 23:51:25.235659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.701 [2024-12-13 23:51:25.235721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.701 [2024-12-13 23:51:25.235729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:54.701 [2024-12-13 23:51:25.235738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:54.701 [2024-12-13 23:51:25.235747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.701 [2024-12-13 23:51:25.235831] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:54.701 [2024-12-13 23:51:25.235841] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:54.701 [2024-12-13 23:51:25.235851] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.701 [2024-12-13 23:51:25.235858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.701 [2024-12-13 23:51:25.235867] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:54.701 [2024-12-13 23:51:25.235873] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:54.701 [2024-12-13 23:51:25.235881] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:54.701 [2024-12-13 23:51:25.235887] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:54.701 [2024-12-13 23:51:25.235895] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:54.701 [2024-12-13 23:51:25.235902] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.701 [2024-12-13 23:51:25.235910] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:54.701 [2024-12-13 23:51:25.235916] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:54.701 [2024-12-13 23:51:25.235925] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.701 [2024-12-13 23:51:25.235931] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:54.701 [2024-12-13 23:51:25.235942] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:54.701 [2024-12-13 23:51:25.235948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.701 [2024-12-13 23:51:25.235958] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:54.701 [2024-12-13 23:51:25.235965] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:54.701 [2024-12-13 23:51:25.235973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.701 [2024-12-13 23:51:25.235979] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:54.701 [2024-12-13 23:51:25.235987] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:54.701 [2024-12-13 23:51:25.235993] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:54.701 [2024-12-13 23:51:25.236001] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:54.701 [2024-12-13 23:51:25.236007] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:54.701 [2024-12-13 23:51:25.236014] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:54.701 [2024-12-13 23:51:25.236021] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:54.701 [2024-12-13 23:51:25.236029] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:54.701 [2024-12-13 23:51:25.236035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:54.701 [2024-12-13 23:51:25.236043] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:54.701 [2024-12-13 23:51:25.236049] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:54.701 [2024-12-13 23:51:25.236056] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:54.701 [2024-12-13 23:51:25.236062] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:54.701 [2024-12-13 23:51:25.236072] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:54.701 [2024-12-13 23:51:25.236078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:54.701 [2024-12-13 23:51:25.236086] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:54.701 [2024-12-13 23:51:25.236092] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:54.701 [2024-12-13 23:51:25.236099] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.701 [2024-12-13 23:51:25.236105] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:54.701 [2024-12-13 23:51:25.236114] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:54.701 [2024-12-13 23:51:25.236120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.701 [2024-12-13 23:51:25.236128] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:54.701 [2024-12-13 23:51:25.236135] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:54.701 [2024-12-13 23:51:25.236144] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.701 [2024-12-13 23:51:25.236151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.701 [2024-12-13 23:51:25.236162] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:54.701 [2024-12-13 23:51:25.236168] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:54.701 [2024-12-13 23:51:25.236176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:54.701 [2024-12-13 23:51:25.236183] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:54.701 [2024-12-13 23:51:25.236193] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:54.701 [2024-12-13 23:51:25.236200] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:54.701 [2024-12-13 23:51:25.236209] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:54.701 [2024-12-13 23:51:25.236218] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.701 [2024-12-13 23:51:25.236228] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:54.701 [2024-12-13 23:51:25.236235] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:54.701 [2024-12-13 23:51:25.236244] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:54.701 [2024-12-13 23:51:25.236250] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:54.701 [2024-12-13 23:51:25.236259] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:54.702 [2024-12-13 23:51:25.236265] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:54.702 [2024-12-13 23:51:25.236274] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:54.702 [2024-12-13 23:51:25.236281] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:54.702 [2024-12-13 23:51:25.236289] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:54.702 [2024-12-13 23:51:25.236296] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:54.702 [2024-12-13 23:51:25.236304] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:54.702 [2024-12-13 23:51:25.236311] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:54.702 [2024-12-13 23:51:25.236322] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:54.702 [2024-12-13 23:51:25.236329] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:54.702 [2024-12-13 23:51:25.236448] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 
00:17:54.702 [2024-12-13 23:51:25.236456] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:54.702 [2024-12-13 23:51:25.236465] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:54.702 [2024-12-13 23:51:25.236472] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:54.702 [2024-12-13 23:51:25.236491] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:54.702 [2024-12-13 23:51:25.236499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.702 [2024-12-13 23:51:25.236508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:54.702 [2024-12-13 23:51:25.236516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.715 ms 00:17:54.702 [2024-12-13 23:51:25.236525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.702 [2024-12-13 23:51:25.251372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.702 [2024-12-13 23:51:25.251411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:54.702 [2024-12-13 23:51:25.251422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.809 ms 00:17:54.702 [2024-12-13 23:51:25.251431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.702 [2024-12-13 23:51:25.251530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.702 [2024-12-13 23:51:25.251543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:54.702 [2024-12-13 23:51:25.251553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:54.702 [2024-12-13 23:51:25.251561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.702 [2024-12-13 23:51:25.282254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.702 [2024-12-13 23:51:25.282288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:54.702 [2024-12-13 23:51:25.282297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.653 ms 00:17:54.702 [2024-12-13 23:51:25.282305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.702 [2024-12-13 23:51:25.282332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.702 [2024-12-13 23:51:25.282341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:54.702 [2024-12-13 23:51:25.282349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:54.702 [2024-12-13 23:51:25.282359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.702 [2024-12-13 23:51:25.282736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.702 [2024-12-13 23:51:25.282753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:54.702 [2024-12-13 23:51:25.282762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:17:54.702 [2024-12-13 23:51:25.282771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.702 [2024-12-13 23:51:25.282882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.702 [2024-12-13 
23:51:25.282898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:54.702 [2024-12-13 23:51:25.282906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:54.702 [2024-12-13 23:51:25.282915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.702 [2024-12-13 23:51:25.297852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.702 [2024-12-13 23:51:25.297881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:54.702 [2024-12-13 23:51:25.297891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.920 ms 00:17:54.702 [2024-12-13 23:51:25.297899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.702 [2024-12-13 23:51:25.309332] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:54.702 [2024-12-13 23:51:25.312142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.702 [2024-12-13 23:51:25.312172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:54.702 [2024-12-13 23:51:25.312185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.174 ms 00:17:54.702 [2024-12-13 23:51:25.312194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.702 [2024-12-13 23:51:25.398749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.702 [2024-12-13 23:51:25.398877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:54.702 [2024-12-13 23:51:25.398936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.526 ms 00:17:54.702 [2024-12-13 23:51:25.398959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.702 [2024-12-13 23:51:25.399011] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
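Note on the scrub step logged just below: because this is a first startup with no prior FTL state, the NV cache data region is overwritten ("scrubbed") before any writes are accepted, so stale data cannot be misread as valid. The 'Scrub NV cache' trace step reports about 3615.5 ms for the 4 GiB region; a rough back-of-the-envelope check of the implied cache-device write rate, using only the two figures printed in this log:

  # rough scrub throughput from the logged numbers (4 GiB in 3615.546 ms)
  awk 'BEGIN { printf "%.0f MiB/s\n", 4096 / 3.615546 }'   # ~1133 MiB/s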
00:17:54.702 [2024-12-13 23:51:25.399045] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:58.905 [2024-12-13 23:51:29.014577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.905 [2024-12-13 23:51:29.014836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:58.905 [2024-12-13 23:51:29.014870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3615.546 ms 00:17:58.905 [2024-12-13 23:51:29.014881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.905 [2024-12-13 23:51:29.015082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.905 [2024-12-13 23:51:29.015094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:58.905 [2024-12-13 23:51:29.015110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:17:58.905 [2024-12-13 23:51:29.015118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.905 [2024-12-13 23:51:29.040897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.905 [2024-12-13 23:51:29.041069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:58.905 [2024-12-13 23:51:29.041095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.719 ms 00:17:58.905 [2024-12-13 23:51:29.041104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.905 [2024-12-13 23:51:29.066311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.905 [2024-12-13 23:51:29.066355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:58.905 [2024-12-13 23:51:29.066374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.171 ms 00:17:58.905 [2024-12-13 23:51:29.066382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.905 [2024-12-13 23:51:29.066740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.905 [2024-12-13 23:51:29.066751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:58.905 [2024-12-13 23:51:29.066762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:17:58.906 [2024-12-13 23:51:29.066770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.906 [2024-12-13 23:51:29.135068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.906 [2024-12-13 23:51:29.135098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:58.906 [2024-12-13 23:51:29.135112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.254 ms 00:17:58.906 [2024-12-13 23:51:29.135120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.906 [2024-12-13 23:51:29.159883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.906 [2024-12-13 23:51:29.159916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:58.906 [2024-12-13 23:51:29.159928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.725 ms 00:17:58.906 [2024-12-13 23:51:29.159935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.906 [2024-12-13 23:51:29.161352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.906 [2024-12-13 23:51:29.161383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
00:17:58.906 [2024-12-13 23:51:29.161395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.382 ms 00:17:58.906 [2024-12-13 23:51:29.161403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.906 [2024-12-13 23:51:29.185126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.906 [2024-12-13 23:51:29.185156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:58.906 [2024-12-13 23:51:29.185168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.688 ms 00:17:58.906 [2024-12-13 23:51:29.185174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.906 [2024-12-13 23:51:29.185217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.906 [2024-12-13 23:51:29.185225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:58.906 [2024-12-13 23:51:29.185236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:58.906 [2024-12-13 23:51:29.185243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.906 [2024-12-13 23:51:29.185319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.906 [2024-12-13 23:51:29.185328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:58.906 [2024-12-13 23:51:29.185338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:58.906 [2024-12-13 23:51:29.185345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.906 [2024-12-13 23:51:29.186255] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3961.993 ms, result 0 00:17:58.906 { 00:17:58.906 "name": "ftl0", 00:17:58.906 "uuid": "5ea95622-f05f-4568-a570-323b970f1ac5" 00:17:58.906 } 00:17:58.906 23:51:29 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:58.906 23:51:29 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:58.906 23:51:29 -- ftl/restore.sh@63 -- # echo ']}' 00:17:58.906 23:51:29 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:58.906 [2024-12-13 23:51:29.597847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.906 [2024-12-13 23:51:29.598082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:58.906 [2024-12-13 23:51:29.598106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:58.906 [2024-12-13 23:51:29.598118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.906 [2024-12-13 23:51:29.598151] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:58.906 [2024-12-13 23:51:29.601023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.906 [2024-12-13 23:51:29.601063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:58.906 [2024-12-13 23:51:29.601077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.849 ms 00:17:58.906 [2024-12-13 23:51:29.601093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.906 [2024-12-13 23:51:29.601367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.906 [2024-12-13 23:51:29.601378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:58.906 [2024-12-13 
23:51:29.601389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:17:58.906 [2024-12-13 23:51:29.601397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.906 [2024-12-13 23:51:29.604805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.906 [2024-12-13 23:51:29.604900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:58.906 [2024-12-13 23:51:29.604957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.385 ms 00:17:58.906 [2024-12-13 23:51:29.604982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.906 [2024-12-13 23:51:29.611131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.906 [2024-12-13 23:51:29.611264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:58.906 [2024-12-13 23:51:29.611324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.104 ms 00:17:58.906 [2024-12-13 23:51:29.611347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.167 [2024-12-13 23:51:29.637599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.167 [2024-12-13 23:51:29.637754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:59.167 [2024-12-13 23:51:29.637779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.147 ms 00:17:59.167 [2024-12-13 23:51:29.637787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.167 [2024-12-13 23:51:29.656115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.167 [2024-12-13 23:51:29.656268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:59.167 [2024-12-13 23:51:29.656294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.283 ms 00:17:59.167 [2024-12-13 23:51:29.656303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.167 [2024-12-13 23:51:29.656467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.167 [2024-12-13 23:51:29.656498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:59.167 [2024-12-13 23:51:29.656511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:17:59.167 [2024-12-13 23:51:29.656522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.167 [2024-12-13 23:51:29.682294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.167 [2024-12-13 23:51:29.682338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:59.168 [2024-12-13 23:51:29.682352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.744 ms 00:17:59.168 [2024-12-13 23:51:29.682359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.168 [2024-12-13 23:51:29.707539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.168 [2024-12-13 23:51:29.707582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:59.168 [2024-12-13 23:51:29.707596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.128 ms 00:17:59.168 [2024-12-13 23:51:29.707604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.168 [2024-12-13 23:51:29.732579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.168 [2024-12-13 23:51:29.732620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist superblock 00:17:59.168 [2024-12-13 23:51:29.732634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.915 ms 00:17:59.168 [2024-12-13 23:51:29.732641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.168 [2024-12-13 23:51:29.757461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.168 [2024-12-13 23:51:29.757512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:59.168 [2024-12-13 23:51:29.757527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.726 ms 00:17:59.168 [2024-12-13 23:51:29.757534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.168 [2024-12-13 23:51:29.757598] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:59.168 [2024-12-13 23:51:29.757618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 
[2024-12-13 23:51:29.757796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.757995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 
state: free 00:17:59.168 [2024-12-13 23:51:29.758012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 
0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:59.168 [2024-12-13 23:51:29.758300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:59.169 [2024-12-13 23:51:29.758552] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:59.169 [2024-12-13 23:51:29.758562] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5ea95622-f05f-4568-a570-323b970f1ac5 00:17:59.169 [2024-12-13 23:51:29.758570] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:59.169 [2024-12-13 23:51:29.758580] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:59.169 [2024-12-13 23:51:29.758588] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:59.169 [2024-12-13 23:51:29.758599] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:59.169 [2024-12-13 23:51:29.758607] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:59.169 [2024-12-13 23:51:29.758616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:59.169 [2024-12-13 23:51:29.758624] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:59.169 [2024-12-13 23:51:29.758633] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:59.169 [2024-12-13 23:51:29.758640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:59.169 [2024-12-13 23:51:29.758652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.169 [2024-12-13 23:51:29.758660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:59.169 [2024-12-13 23:51:29.758673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.058 ms 00:17:59.169 [2024-12-13 23:51:29.758681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.169 [2024-12-13 23:51:29.772317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.169 [2024-12-13 23:51:29.772359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:59.169 [2024-12-13 23:51:29.772372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.591 ms 00:17:59.169 [2024-12-13 23:51:29.772380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.169 [2024-12-13 23:51:29.772648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.169 [2024-12-13 23:51:29.772662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:59.169 [2024-12-13 23:51:29.772674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:17:59.169 [2024-12-13 23:51:29.772681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.169 
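Note on the statistics dumped above: the WAF (write amplification factor) line is, in effect, total writes divided by user writes, so with 'total writes: 960' and 'user writes: 0' it prints as 'inf'; the 960 blocks written are presumably the FTL's own metadata persisted during the clean-shutdown steps, since no user I/O was issued before this unload. Recomputing that line from the logged counters, with the same guard against dividing by zero:

  # recomputation of the WAF line from the logged counters (values from this dump)
  awk -v total=960 -v user=0 \
      'BEGIN { waf = (user == 0) ? "inf" : total / user; print "WAF:", waf }'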
[2024-12-13 23:51:29.820350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.169 [2024-12-13 23:51:29.820469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:59.169 [2024-12-13 23:51:29.820501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.169 [2024-12-13 23:51:29.820510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.169 [2024-12-13 23:51:29.820572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.169 [2024-12-13 23:51:29.820583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:59.169 [2024-12-13 23:51:29.820592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.169 [2024-12-13 23:51:29.820599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.169 [2024-12-13 23:51:29.820666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.169 [2024-12-13 23:51:29.820675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:59.169 [2024-12-13 23:51:29.820684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.169 [2024-12-13 23:51:29.820691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.169 [2024-12-13 23:51:29.820710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.169 [2024-12-13 23:51:29.820717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:59.169 [2024-12-13 23:51:29.820728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.169 [2024-12-13 23:51:29.820735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.431 [2024-12-13 23:51:29.897393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.431 [2024-12-13 23:51:29.897432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:59.431 [2024-12-13 23:51:29.897446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.431 [2024-12-13 23:51:29.897455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.431 [2024-12-13 23:51:29.927130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.431 [2024-12-13 23:51:29.927262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:59.431 [2024-12-13 23:51:29.927281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.431 [2024-12-13 23:51:29.927289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.431 [2024-12-13 23:51:29.927355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.431 [2024-12-13 23:51:29.927364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:59.431 [2024-12-13 23:51:29.927374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.431 [2024-12-13 23:51:29.927382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.431 [2024-12-13 23:51:29.927429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.431 [2024-12-13 23:51:29.927438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:59.431 [2024-12-13 23:51:29.927448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.431 [2024-12-13 23:51:29.927457] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.431 [2024-12-13 23:51:29.927571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.431 [2024-12-13 23:51:29.927582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:59.431 [2024-12-13 23:51:29.927592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.431 [2024-12-13 23:51:29.927599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.431 [2024-12-13 23:51:29.927633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.431 [2024-12-13 23:51:29.927642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:59.431 [2024-12-13 23:51:29.927669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.431 [2024-12-13 23:51:29.927676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.431 [2024-12-13 23:51:29.927717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.431 [2024-12-13 23:51:29.927725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:59.431 [2024-12-13 23:51:29.927734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.431 [2024-12-13 23:51:29.927741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.431 [2024-12-13 23:51:29.927787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.431 [2024-12-13 23:51:29.927797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:59.431 [2024-12-13 23:51:29.927806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.431 [2024-12-13 23:51:29.927815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.431 [2024-12-13 23:51:29.927942] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 330.060 ms, result 0 00:17:59.431 true 00:17:59.431 23:51:29 -- ftl/restore.sh@66 -- # killprocess 73172 00:17:59.431 23:51:29 -- common/autotest_common.sh@936 -- # '[' -z 73172 ']' 00:17:59.431 23:51:29 -- common/autotest_common.sh@940 -- # kill -0 73172 00:17:59.431 23:51:29 -- common/autotest_common.sh@941 -- # uname 00:17:59.431 23:51:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:59.431 23:51:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73172 00:17:59.431 killing process with pid 73172 00:17:59.431 23:51:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:59.431 23:51:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:59.431 23:51:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73172' 00:17:59.431 23:51:29 -- common/autotest_common.sh@955 -- # kill 73172 00:17:59.431 23:51:29 -- common/autotest_common.sh@960 -- # wait 73172 00:18:06.014 23:51:36 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:09.310 262144+0 records in 00:18:09.310 262144+0 records out 00:18:09.310 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.76657 s, 285 MB/s 00:18:09.310 23:51:40 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:11.852 23:51:42 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:11.852 [2024-12-13 23:51:42.159215] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:11.852 [2024-12-13 23:51:42.159299] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73448 ] 00:18:11.852 [2024-12-13 23:51:42.294437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.852 [2024-12-13 23:51:42.511297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.111 [2024-12-13 23:51:42.784711] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:12.111 [2024-12-13 23:51:42.784778] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:12.372 [2024-12-13 23:51:42.937239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.372 [2024-12-13 23:51:42.937283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:12.372 [2024-12-13 23:51:42.937297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:12.372 [2024-12-13 23:51:42.937307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.372 [2024-12-13 23:51:42.937354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.372 [2024-12-13 23:51:42.937364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:12.373 [2024-12-13 23:51:42.937372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:12.373 [2024-12-13 23:51:42.937379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.373 [2024-12-13 23:51:42.937396] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:12.373 [2024-12-13 23:51:42.938180] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:12.373 [2024-12-13 23:51:42.938203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.373 [2024-12-13 23:51:42.938211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:12.373 [2024-12-13 23:51:42.938221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.811 ms 00:18:12.373 [2024-12-13 23:51:42.938229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.373 [2024-12-13 23:51:42.939672] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:12.373 [2024-12-13 23:51:42.953077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.373 [2024-12-13 23:51:42.953275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:12.373 [2024-12-13 23:51:42.953294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.406 ms 00:18:12.373 [2024-12-13 23:51:42.953302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.373 [2024-12-13 23:51:42.953616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.373 [2024-12-13 23:51:42.953646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:12.373 [2024-12-13 23:51:42.953659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:12.373 [2024-12-13 23:51:42.953667] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.373 [2024-12-13 23:51:42.960752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.373 [2024-12-13 23:51:42.960783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:12.373 [2024-12-13 23:51:42.960792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.003 ms 00:18:12.373 [2024-12-13 23:51:42.960800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.373 [2024-12-13 23:51:42.960885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.373 [2024-12-13 23:51:42.960896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:12.373 [2024-12-13 23:51:42.960904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:12.373 [2024-12-13 23:51:42.960911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.373 [2024-12-13 23:51:42.960951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.373 [2024-12-13 23:51:42.960961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:12.373 [2024-12-13 23:51:42.960970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:12.373 [2024-12-13 23:51:42.960977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.373 [2024-12-13 23:51:42.961006] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:12.373 [2024-12-13 23:51:42.964826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.373 [2024-12-13 23:51:42.964853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:12.373 [2024-12-13 23:51:42.964862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.832 ms 00:18:12.373 [2024-12-13 23:51:42.964869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.373 [2024-12-13 23:51:42.964902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.373 [2024-12-13 23:51:42.964910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:12.373 [2024-12-13 23:51:42.964918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:12.373 [2024-12-13 23:51:42.964928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.373 [2024-12-13 23:51:42.964969] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:12.373 [2024-12-13 23:51:42.964990] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:12.373 [2024-12-13 23:51:42.965025] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:12.373 [2024-12-13 23:51:42.965040] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:12.373 [2024-12-13 23:51:42.965116] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:12.373 [2024-12-13 23:51:42.965126] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:12.373 [2024-12-13 23:51:42.965140] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:12.373 [2024-12-13 
23:51:42.965150] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:12.373 [2024-12-13 23:51:42.965159] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:12.373 [2024-12-13 23:51:42.965167] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:12.373 [2024-12-13 23:51:42.965174] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:12.373 [2024-12-13 23:51:42.965181] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:12.373 [2024-12-13 23:51:42.965189] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:12.373 [2024-12-13 23:51:42.965196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.373 [2024-12-13 23:51:42.965204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:12.373 [2024-12-13 23:51:42.965212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:18:12.373 [2024-12-13 23:51:42.965219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.373 [2024-12-13 23:51:42.965280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.373 [2024-12-13 23:51:42.965287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:12.373 [2024-12-13 23:51:42.965295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:18:12.373 [2024-12-13 23:51:42.965303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.373 [2024-12-13 23:51:42.965373] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:12.373 [2024-12-13 23:51:42.965384] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:12.373 [2024-12-13 23:51:42.965392] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:12.373 [2024-12-13 23:51:42.965399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.373 [2024-12-13 23:51:42.965407] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:12.373 [2024-12-13 23:51:42.965413] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:12.373 [2024-12-13 23:51:42.965420] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:12.373 [2024-12-13 23:51:42.965428] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:12.373 [2024-12-13 23:51:42.965435] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:12.373 [2024-12-13 23:51:42.965442] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:12.373 [2024-12-13 23:51:42.965449] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:12.373 [2024-12-13 23:51:42.965457] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:12.373 [2024-12-13 23:51:42.965464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:12.373 [2024-12-13 23:51:42.965471] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:12.373 [2024-12-13 23:51:42.965477] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:12.373 [2024-12-13 23:51:42.965500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.373 [2024-12-13 23:51:42.965514] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:12.373 
[2024-12-13 23:51:42.965522] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:12.373 [2024-12-13 23:51:42.965529] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.373 [2024-12-13 23:51:42.965536] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:12.373 [2024-12-13 23:51:42.965543] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:12.373 [2024-12-13 23:51:42.965549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:12.373 [2024-12-13 23:51:42.965556] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:12.373 [2024-12-13 23:51:42.965562] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:12.373 [2024-12-13 23:51:42.965569] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:12.373 [2024-12-13 23:51:42.965577] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:12.373 [2024-12-13 23:51:42.965584] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:12.373 [2024-12-13 23:51:42.965591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:12.373 [2024-12-13 23:51:42.965597] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:12.373 [2024-12-13 23:51:42.965616] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:12.373 [2024-12-13 23:51:42.965623] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:12.373 [2024-12-13 23:51:42.965630] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:12.373 [2024-12-13 23:51:42.965637] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:12.373 [2024-12-13 23:51:42.965644] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:12.373 [2024-12-13 23:51:42.965651] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:12.373 [2024-12-13 23:51:42.965657] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:12.373 [2024-12-13 23:51:42.965664] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:12.373 [2024-12-13 23:51:42.965670] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:12.373 [2024-12-13 23:51:42.965677] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:12.373 [2024-12-13 23:51:42.965684] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:12.373 [2024-12-13 23:51:42.965690] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:12.373 [2024-12-13 23:51:42.965701] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:12.373 [2024-12-13 23:51:42.965708] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:12.373 [2024-12-13 23:51:42.965716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.373 [2024-12-13 23:51:42.965725] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:12.373 [2024-12-13 23:51:42.965733] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:12.373 [2024-12-13 23:51:42.965740] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:12.374 [2024-12-13 23:51:42.965746] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:12.374 [2024-12-13 23:51:42.965753] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 0.25 MiB 00:18:12.374 [2024-12-13 23:51:42.965760] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:12.374 [2024-12-13 23:51:42.965767] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:12.374 [2024-12-13 23:51:42.965776] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:12.374 [2024-12-13 23:51:42.965786] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:12.374 [2024-12-13 23:51:42.965793] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:12.374 [2024-12-13 23:51:42.965800] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:12.374 [2024-12-13 23:51:42.965807] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:12.374 [2024-12-13 23:51:42.965814] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:12.374 [2024-12-13 23:51:42.965821] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:12.374 [2024-12-13 23:51:42.965827] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:12.374 [2024-12-13 23:51:42.965834] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:12.374 [2024-12-13 23:51:42.965842] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:12.374 [2024-12-13 23:51:42.965848] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:12.374 [2024-12-13 23:51:42.965855] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:12.374 [2024-12-13 23:51:42.965862] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:12.374 [2024-12-13 23:51:42.965869] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:12.374 [2024-12-13 23:51:42.965876] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:12.374 [2024-12-13 23:51:42.965885] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:12.374 [2024-12-13 23:51:42.965893] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:12.374 [2024-12-13 23:51:42.965900] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:12.374 [2024-12-13 23:51:42.965908] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 
blk_offs:0x1900040 blk_sz:0x360 00:18:12.374 [2024-12-13 23:51:42.965914] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:12.374 [2024-12-13 23:51:42.965922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.374 [2024-12-13 23:51:42.965931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:12.374 [2024-12-13 23:51:42.965938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:18:12.374 [2024-12-13 23:51:42.965945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.374 [2024-12-13 23:51:42.982964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.374 [2024-12-13 23:51:42.982998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:12.374 [2024-12-13 23:51:42.983009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.981 ms 00:18:12.374 [2024-12-13 23:51:42.983021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.374 [2024-12-13 23:51:42.983109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.374 [2024-12-13 23:51:42.983118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:12.374 [2024-12-13 23:51:42.983127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:12.374 [2024-12-13 23:51:42.983137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.374 [2024-12-13 23:51:43.027288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.374 [2024-12-13 23:51:43.027493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:12.374 [2024-12-13 23:51:43.027512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.107 ms 00:18:12.374 [2024-12-13 23:51:43.027520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.374 [2024-12-13 23:51:43.027563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.374 [2024-12-13 23:51:43.027573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:12.374 [2024-12-13 23:51:43.027582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:12.374 [2024-12-13 23:51:43.027589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.374 [2024-12-13 23:51:43.028096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.374 [2024-12-13 23:51:43.028120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:12.374 [2024-12-13 23:51:43.028130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:18:12.374 [2024-12-13 23:51:43.028142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.374 [2024-12-13 23:51:43.028265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.374 [2024-12-13 23:51:43.028275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:12.374 [2024-12-13 23:51:43.028284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:18:12.374 [2024-12-13 23:51:43.028292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.374 [2024-12-13 23:51:43.044064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.374 [2024-12-13 23:51:43.044095] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:12.374 [2024-12-13 23:51:43.044106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.751 ms 00:18:12.374 [2024-12-13 23:51:43.044114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.374 [2024-12-13 23:51:43.057736] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:12.374 [2024-12-13 23:51:43.057772] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:12.374 [2024-12-13 23:51:43.057783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.374 [2024-12-13 23:51:43.057791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:12.374 [2024-12-13 23:51:43.057800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.576 ms 00:18:12.374 [2024-12-13 23:51:43.057808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.374 [2024-12-13 23:51:43.082710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.374 [2024-12-13 23:51:43.082855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:12.374 [2024-12-13 23:51:43.082873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.862 ms 00:18:12.374 [2024-12-13 23:51:43.082881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.374 [2024-12-13 23:51:43.095340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.374 [2024-12-13 23:51:43.095373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:12.374 [2024-12-13 23:51:43.095383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.424 ms 00:18:12.374 [2024-12-13 23:51:43.095391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.635 [2024-12-13 23:51:43.107134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.635 [2024-12-13 23:51:43.107165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:12.635 [2024-12-13 23:51:43.107183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.708 ms 00:18:12.635 [2024-12-13 23:51:43.107191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.635 [2024-12-13 23:51:43.107568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.635 [2024-12-13 23:51:43.107582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:12.635 [2024-12-13 23:51:43.107591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:18:12.635 [2024-12-13 23:51:43.107598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.635 [2024-12-13 23:51:43.171342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.635 [2024-12-13 23:51:43.171393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:12.635 [2024-12-13 23:51:43.171406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.726 ms 00:18:12.635 [2024-12-13 23:51:43.171415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.635 [2024-12-13 23:51:43.182769] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:12.635 [2024-12-13 23:51:43.185875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:12.635 [2024-12-13 23:51:43.186081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:12.635 [2024-12-13 23:51:43.186102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.404 ms 00:18:12.635 [2024-12-13 23:51:43.186111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.635 [2024-12-13 23:51:43.186193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.635 [2024-12-13 23:51:43.186204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:12.635 [2024-12-13 23:51:43.186216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:12.635 [2024-12-13 23:51:43.186224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.635 [2024-12-13 23:51:43.186298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.635 [2024-12-13 23:51:43.186309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:12.635 [2024-12-13 23:51:43.186319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:12.635 [2024-12-13 23:51:43.186327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.635 [2024-12-13 23:51:43.187896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.635 [2024-12-13 23:51:43.187945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:12.635 [2024-12-13 23:51:43.187955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.550 ms 00:18:12.635 [2024-12-13 23:51:43.187963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.635 [2024-12-13 23:51:43.188002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.635 [2024-12-13 23:51:43.188012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:12.635 [2024-12-13 23:51:43.188021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:12.635 [2024-12-13 23:51:43.188035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.635 [2024-12-13 23:51:43.188076] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:12.635 [2024-12-13 23:51:43.188086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.635 [2024-12-13 23:51:43.188095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:12.635 [2024-12-13 23:51:43.188105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:12.635 [2024-12-13 23:51:43.188114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.636 [2024-12-13 23:51:43.214962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.636 [2024-12-13 23:51:43.215013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:12.636 [2024-12-13 23:51:43.215027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.826 ms 00:18:12.636 [2024-12-13 23:51:43.215036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.636 [2024-12-13 23:51:43.215126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.636 [2024-12-13 23:51:43.215144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:12.636 [2024-12-13 23:51:43.215154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 
00:18:12.636 [2024-12-13 23:51:43.215163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.636 [2024-12-13 23:51:43.216658] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 278.816 ms, result 0 00:18:13.576  [2024-12-13T23:51:45.254Z] Copying: 18/1024 [MB] (18 MBps) [2024-12-13T23:51:46.637Z] Copying: 39/1024 [MB] (20 MBps) [2024-12-13T23:51:47.579Z] Copying: 59/1024 [MB] (19 MBps) [2024-12-13T23:51:48.566Z] Copying: 79/1024 [MB] (20 MBps) [2024-12-13T23:51:49.532Z] Copying: 90/1024 [MB] (10 MBps) [2024-12-13T23:51:50.476Z] Copying: 103/1024 [MB] (13 MBps) [2024-12-13T23:51:51.416Z] Copying: 121/1024 [MB] (18 MBps) [2024-12-13T23:51:52.360Z] Copying: 143/1024 [MB] (21 MBps) [2024-12-13T23:51:53.304Z] Copying: 162/1024 [MB] (18 MBps) [2024-12-13T23:51:54.248Z] Copying: 180/1024 [MB] (17 MBps) [2024-12-13T23:51:55.621Z] Copying: 192/1024 [MB] (12 MBps) [2024-12-13T23:51:56.562Z] Copying: 234/1024 [MB] (41 MBps) [2024-12-13T23:51:57.507Z] Copying: 267/1024 [MB] (33 MBps) [2024-12-13T23:51:58.451Z] Copying: 287/1024 [MB] (19 MBps) [2024-12-13T23:51:59.391Z] Copying: 307/1024 [MB] (19 MBps) [2024-12-13T23:52:00.336Z] Copying: 328/1024 [MB] (21 MBps) [2024-12-13T23:52:01.275Z] Copying: 345/1024 [MB] (16 MBps) [2024-12-13T23:52:02.662Z] Copying: 376/1024 [MB] (31 MBps) [2024-12-13T23:52:03.233Z] Copying: 391/1024 [MB] (15 MBps) [2024-12-13T23:52:04.620Z] Copying: 405/1024 [MB] (14 MBps) [2024-12-13T23:52:05.566Z] Copying: 417/1024 [MB] (11 MBps) [2024-12-13T23:52:06.511Z] Copying: 427/1024 [MB] (10 MBps) [2024-12-13T23:52:07.452Z] Copying: 440/1024 [MB] (13 MBps) [2024-12-13T23:52:08.392Z] Copying: 454/1024 [MB] (13 MBps) [2024-12-13T23:52:09.336Z] Copying: 476/1024 [MB] (22 MBps) [2024-12-13T23:52:10.280Z] Copying: 495/1024 [MB] (18 MBps) [2024-12-13T23:52:11.366Z] Copying: 506/1024 [MB] (11 MBps) [2024-12-13T23:52:12.315Z] Copying: 524/1024 [MB] (18 MBps) [2024-12-13T23:52:13.258Z] Copying: 540/1024 [MB] (16 MBps) [2024-12-13T23:52:14.648Z] Copying: 561/1024 [MB] (20 MBps) [2024-12-13T23:52:15.595Z] Copying: 577/1024 [MB] (16 MBps) [2024-12-13T23:52:16.541Z] Copying: 589/1024 [MB] (11 MBps) [2024-12-13T23:52:17.487Z] Copying: 599/1024 [MB] (10 MBps) [2024-12-13T23:52:18.424Z] Copying: 624100/1048576 [kB] (10228 kBps) [2024-12-13T23:52:19.364Z] Copying: 626/1024 [MB] (16 MBps) [2024-12-13T23:52:20.306Z] Copying: 666/1024 [MB] (39 MBps) [2024-12-13T23:52:21.255Z] Copying: 681/1024 [MB] (15 MBps) [2024-12-13T23:52:22.637Z] Copying: 694/1024 [MB] (13 MBps) [2024-12-13T23:52:23.570Z] Copying: 712/1024 [MB] (17 MBps) [2024-12-13T23:52:24.503Z] Copying: 748/1024 [MB] (36 MBps) [2024-12-13T23:52:25.438Z] Copying: 779/1024 [MB] (30 MBps) [2024-12-13T23:52:26.379Z] Copying: 819/1024 [MB] (39 MBps) [2024-12-13T23:52:27.323Z] Copying: 836/1024 [MB] (16 MBps) [2024-12-13T23:52:28.258Z] Copying: 850/1024 [MB] (14 MBps) [2024-12-13T23:52:29.641Z] Copying: 886/1024 [MB] (36 MBps) [2024-12-13T23:52:30.585Z] Copying: 912/1024 [MB] (25 MBps) [2024-12-13T23:52:31.530Z] Copying: 930/1024 [MB] (18 MBps) [2024-12-13T23:52:32.472Z] Copying: 950/1024 [MB] (19 MBps) [2024-12-13T23:52:33.416Z] Copying: 967/1024 [MB] (17 MBps) [2024-12-13T23:52:34.356Z] Copying: 981/1024 [MB] (13 MBps) [2024-12-13T23:52:35.300Z] Copying: 998/1024 [MB] (17 MBps) [2024-12-13T23:52:35.875Z] Copying: 1019/1024 [MB] (20 MBps) [2024-12-13T23:52:35.875Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-12-13 23:52:35.618331] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:19:05.143 [2024-12-13 23:52:35.618397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:05.143 [2024-12-13 23:52:35.618414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:05.143 [2024-12-13 23:52:35.618423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.143 [2024-12-13 23:52:35.618446] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:05.143 [2024-12-13 23:52:35.621651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.143 [2024-12-13 23:52:35.621696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:05.143 [2024-12-13 23:52:35.621715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.187 ms 00:19:05.143 [2024-12-13 23:52:35.621724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.143 [2024-12-13 23:52:35.624857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.143 [2024-12-13 23:52:35.624903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:05.143 [2024-12-13 23:52:35.624914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.104 ms 00:19:05.143 [2024-12-13 23:52:35.624923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.143 [2024-12-13 23:52:35.645760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.143 [2024-12-13 23:52:35.645808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:05.143 [2024-12-13 23:52:35.645820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.819 ms 00:19:05.143 [2024-12-13 23:52:35.645837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.143 [2024-12-13 23:52:35.651990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.143 [2024-12-13 23:52:35.652189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:05.143 [2024-12-13 23:52:35.652211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.112 ms 00:19:05.143 [2024-12-13 23:52:35.652221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.143 [2024-12-13 23:52:35.680454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.143 [2024-12-13 23:52:35.680516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:05.143 [2024-12-13 23:52:35.680529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.159 ms 00:19:05.143 [2024-12-13 23:52:35.680538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.143 [2024-12-13 23:52:35.702896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.143 [2024-12-13 23:52:35.703128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:05.143 [2024-12-13 23:52:35.703155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.311 ms 00:19:05.143 [2024-12-13 23:52:35.703165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.143 [2024-12-13 23:52:35.703584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.143 [2024-12-13 23:52:35.703604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:05.143 [2024-12-13 23:52:35.703616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.149 ms 00:19:05.143 [2024-12-13 23:52:35.703625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.143 [2024-12-13 23:52:35.730967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.143 [2024-12-13 23:52:35.731016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:05.143 [2024-12-13 23:52:35.731027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.323 ms 00:19:05.143 [2024-12-13 23:52:35.731035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.143 [2024-12-13 23:52:35.757924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.143 [2024-12-13 23:52:35.758133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:05.143 [2024-12-13 23:52:35.758155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.839 ms 00:19:05.143 [2024-12-13 23:52:35.758178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.143 [2024-12-13 23:52:35.783908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.143 [2024-12-13 23:52:35.783958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:05.143 [2024-12-13 23:52:35.783970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.689 ms 00:19:05.143 [2024-12-13 23:52:35.783977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.143 [2024-12-13 23:52:35.809969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.143 [2024-12-13 23:52:35.810016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:05.143 [2024-12-13 23:52:35.810027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.880 ms 00:19:05.143 [2024-12-13 23:52:35.810034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.143 [2024-12-13 23:52:35.810084] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:05.143 [2024-12-13 23:52:35.810102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:05.143 [2024-12-13 23:52:35.810121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:05.143 [2024-12-13 23:52:35.810129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:05.143 [2024-12-13 23:52:35.810137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:05.143 [2024-12-13 23:52:35.810145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:05.143 [2024-12-13 23:52:35.810153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:05.143 [2024-12-13 23:52:35.810161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:05.143 [2024-12-13 23:52:35.810168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:05.143 [2024-12-13 23:52:35.810176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:05.143 [2024-12-13 23:52:35.810184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:05.143 [2024-12-13 23:52:35.810191] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:05.143 [2024-12-13 23:52:35.810199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:05.143 [2024-12-13 23:52:35.810208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:05.143 [2024-12-13 23:52:35.810215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:05.143 [2024-12-13 23:52:35.810222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:05.143 [2024-12-13 23:52:35.810229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 
23:52:35.810389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:19:05.144 [2024-12-13 23:52:35.810602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:05.144 [2024-12-13 23:52:35.810942] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:05.144 [2024-12-13 23:52:35.810950] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5ea95622-f05f-4568-a570-323b970f1ac5 00:19:05.144 [2024-12-13 23:52:35.810960] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:05.144 [2024-12-13 23:52:35.810968] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:05.144 [2024-12-13 23:52:35.810976] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:05.144 [2024-12-13 23:52:35.810985] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:05.144 [2024-12-13 23:52:35.810993] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:05.144 [2024-12-13 23:52:35.811001] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:05.144 [2024-12-13 23:52:35.811009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:05.144 [2024-12-13 23:52:35.811015] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:05.145 [2024-12-13 23:52:35.811033] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:05.145 [2024-12-13 23:52:35.811040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.145 
[2024-12-13 23:52:35.811048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:05.145 [2024-12-13 23:52:35.811056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.958 ms 00:19:05.145 [2024-12-13 23:52:35.811066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.145 [2024-12-13 23:52:35.826188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.145 [2024-12-13 23:52:35.826234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:05.145 [2024-12-13 23:52:35.826246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.070 ms 00:19:05.145 [2024-12-13 23:52:35.826254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.145 [2024-12-13 23:52:35.826523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.145 [2024-12-13 23:52:35.826535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:05.145 [2024-12-13 23:52:35.826551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:19:05.145 [2024-12-13 23:52:35.826559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.145 [2024-12-13 23:52:35.868935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.145 [2024-12-13 23:52:35.868990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:05.145 [2024-12-13 23:52:35.869003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.145 [2024-12-13 23:52:35.869012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.145 [2024-12-13 23:52:35.869083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.145 [2024-12-13 23:52:35.869092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:05.145 [2024-12-13 23:52:35.869108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.145 [2024-12-13 23:52:35.869117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.145 [2024-12-13 23:52:35.869214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.145 [2024-12-13 23:52:35.869226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:05.145 [2024-12-13 23:52:35.869244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.145 [2024-12-13 23:52:35.869251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.145 [2024-12-13 23:52:35.869269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.145 [2024-12-13 23:52:35.869278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:05.145 [2024-12-13 23:52:35.869286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.145 [2024-12-13 23:52:35.869297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.406 [2024-12-13 23:52:35.958084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.406 [2024-12-13 23:52:35.958147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:05.406 [2024-12-13 23:52:35.958160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.406 [2024-12-13 23:52:35.958169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.406 [2024-12-13 23:52:35.992958] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.406 [2024-12-13 23:52:35.993007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:05.406 [2024-12-13 23:52:35.993019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.406 [2024-12-13 23:52:35.993036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.406 [2024-12-13 23:52:35.993120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.406 [2024-12-13 23:52:35.993131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:05.406 [2024-12-13 23:52:35.993141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.406 [2024-12-13 23:52:35.993149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.406 [2024-12-13 23:52:35.993199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.406 [2024-12-13 23:52:35.993210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:05.406 [2024-12-13 23:52:35.993219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.406 [2024-12-13 23:52:35.993228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.406 [2024-12-13 23:52:35.993351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.406 [2024-12-13 23:52:35.993363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:05.406 [2024-12-13 23:52:35.993372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.406 [2024-12-13 23:52:35.993381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.406 [2024-12-13 23:52:35.993416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.406 [2024-12-13 23:52:35.993429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:05.406 [2024-12-13 23:52:35.993437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.406 [2024-12-13 23:52:35.993446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.406 [2024-12-13 23:52:35.993535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.406 [2024-12-13 23:52:35.993548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:05.406 [2024-12-13 23:52:35.993558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.406 [2024-12-13 23:52:35.993566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.406 [2024-12-13 23:52:35.993628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.406 [2024-12-13 23:52:35.993643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:05.406 [2024-12-13 23:52:35.993652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.406 [2024-12-13 23:52:35.993661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.406 [2024-12-13 23:52:35.993829] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 375.448 ms, result 0 00:19:06.793 00:19:06.793 00:19:06.793 23:52:37 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 
--count=262144 00:19:06.793 [2024-12-13 23:52:37.242178] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:06.793 [2024-12-13 23:52:37.242337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74020 ] 00:19:06.793 [2024-12-13 23:52:37.397547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.054 [2024-12-13 23:52:37.661691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.316 [2024-12-13 23:52:37.982762] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:07.316 [2024-12-13 23:52:37.983176] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:07.579 [2024-12-13 23:52:38.145574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.579 [2024-12-13 23:52:38.145803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:07.579 [2024-12-13 23:52:38.145829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:07.579 [2024-12-13 23:52:38.145850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.579 [2024-12-13 23:52:38.145925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.579 [2024-12-13 23:52:38.145936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:07.579 [2024-12-13 23:52:38.145947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:07.579 [2024-12-13 23:52:38.145954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.579 [2024-12-13 23:52:38.145977] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:07.579 [2024-12-13 23:52:38.146766] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:07.579 [2024-12-13 23:52:38.146789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.579 [2024-12-13 23:52:38.146798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:07.579 [2024-12-13 23:52:38.146808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:19:07.579 [2024-12-13 23:52:38.146819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.579 [2024-12-13 23:52:38.148630] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:07.579 [2024-12-13 23:52:38.163394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.579 [2024-12-13 23:52:38.163464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:07.579 [2024-12-13 23:52:38.163501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.767 ms 00:19:07.579 [2024-12-13 23:52:38.163511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.579 [2024-12-13 23:52:38.163593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.579 [2024-12-13 23:52:38.163604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:07.579 [2024-12-13 23:52:38.163614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:07.579 [2024-12-13 23:52:38.163622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:07.579 [2024-12-13 23:52:38.172041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.579 [2024-12-13 23:52:38.172091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:07.579 [2024-12-13 23:52:38.172101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.302 ms 00:19:07.579 [2024-12-13 23:52:38.172109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.579 [2024-12-13 23:52:38.172207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.579 [2024-12-13 23:52:38.172217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:07.579 [2024-12-13 23:52:38.172226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:07.579 [2024-12-13 23:52:38.172234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.579 [2024-12-13 23:52:38.172282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.579 [2024-12-13 23:52:38.172292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:07.579 [2024-12-13 23:52:38.172301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:07.579 [2024-12-13 23:52:38.172308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.579 [2024-12-13 23:52:38.172340] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:07.579 [2024-12-13 23:52:38.176753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.579 [2024-12-13 23:52:38.176794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:07.579 [2024-12-13 23:52:38.176804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.425 ms 00:19:07.579 [2024-12-13 23:52:38.176813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.579 [2024-12-13 23:52:38.176851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.579 [2024-12-13 23:52:38.176860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:07.579 [2024-12-13 23:52:38.176869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:07.579 [2024-12-13 23:52:38.176880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.579 [2024-12-13 23:52:38.176935] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:07.579 [2024-12-13 23:52:38.176958] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:07.579 [2024-12-13 23:52:38.176992] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:07.579 [2024-12-13 23:52:38.177008] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:07.579 [2024-12-13 23:52:38.177084] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:07.579 [2024-12-13 23:52:38.177095] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:07.579 [2024-12-13 23:52:38.177108] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:07.579 [2024-12-13 23:52:38.177119] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:07.579 [2024-12-13 23:52:38.177129] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:07.579 [2024-12-13 23:52:38.177137] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:07.579 [2024-12-13 23:52:38.177145] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:07.579 [2024-12-13 23:52:38.177153] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:07.579 [2024-12-13 23:52:38.177161] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:07.579 [2024-12-13 23:52:38.177169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.579 [2024-12-13 23:52:38.177177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:07.579 [2024-12-13 23:52:38.177185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:19:07.579 [2024-12-13 23:52:38.177192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.579 [2024-12-13 23:52:38.177255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.579 [2024-12-13 23:52:38.177263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:07.579 [2024-12-13 23:52:38.177271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:07.579 [2024-12-13 23:52:38.177278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.579 [2024-12-13 23:52:38.177350] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:07.579 [2024-12-13 23:52:38.177361] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:07.580 [2024-12-13 23:52:38.177369] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:07.580 [2024-12-13 23:52:38.177377] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.580 [2024-12-13 23:52:38.177385] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:07.580 [2024-12-13 23:52:38.177392] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:07.580 [2024-12-13 23:52:38.177399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:07.580 [2024-12-13 23:52:38.177407] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:07.580 [2024-12-13 23:52:38.177416] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:07.580 [2024-12-13 23:52:38.177423] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:07.580 [2024-12-13 23:52:38.177430] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:07.580 [2024-12-13 23:52:38.177439] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:07.580 [2024-12-13 23:52:38.177446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:07.580 [2024-12-13 23:52:38.177453] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:07.580 [2024-12-13 23:52:38.177460] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:07.580 [2024-12-13 23:52:38.177467] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.580 [2024-12-13 23:52:38.177507] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:07.580 [2024-12-13 23:52:38.177515] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:07.580 [2024-12-13 23:52:38.177522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.580 [2024-12-13 23:52:38.177529] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:07.580 [2024-12-13 23:52:38.177537] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:07.580 [2024-12-13 23:52:38.177544] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:07.580 [2024-12-13 23:52:38.177551] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:07.580 [2024-12-13 23:52:38.177559] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:07.580 [2024-12-13 23:52:38.177565] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:07.580 [2024-12-13 23:52:38.177572] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:07.580 [2024-12-13 23:52:38.177579] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:07.580 [2024-12-13 23:52:38.177586] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:07.580 [2024-12-13 23:52:38.177594] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:07.580 [2024-12-13 23:52:38.177601] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:07.580 [2024-12-13 23:52:38.177608] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:07.580 [2024-12-13 23:52:38.177615] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:07.580 [2024-12-13 23:52:38.177622] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:07.580 [2024-12-13 23:52:38.177628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:07.580 [2024-12-13 23:52:38.177635] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:07.580 [2024-12-13 23:52:38.177642] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:07.580 [2024-12-13 23:52:38.177649] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:07.580 [2024-12-13 23:52:38.177656] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:07.580 [2024-12-13 23:52:38.177663] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:07.580 [2024-12-13 23:52:38.177670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:07.580 [2024-12-13 23:52:38.177676] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:07.580 [2024-12-13 23:52:38.177688] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:07.580 [2024-12-13 23:52:38.177695] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:07.580 [2024-12-13 23:52:38.177707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.580 [2024-12-13 23:52:38.177715] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:07.580 [2024-12-13 23:52:38.177722] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:07.580 [2024-12-13 23:52:38.177730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:07.580 [2024-12-13 23:52:38.177737] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:07.580 [2024-12-13 23:52:38.177744] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:07.580 [2024-12-13 23:52:38.177751] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:07.580 [2024-12-13 23:52:38.177758] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:07.580 [2024-12-13 23:52:38.177768] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:07.580 [2024-12-13 23:52:38.177777] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:07.580 [2024-12-13 23:52:38.177784] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:07.580 [2024-12-13 23:52:38.177796] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:07.580 [2024-12-13 23:52:38.177803] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:07.580 [2024-12-13 23:52:38.177819] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:07.580 [2024-12-13 23:52:38.177827] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:07.580 [2024-12-13 23:52:38.177834] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:07.580 [2024-12-13 23:52:38.177841] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:07.580 [2024-12-13 23:52:38.177849] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:07.580 [2024-12-13 23:52:38.177856] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:07.580 [2024-12-13 23:52:38.177864] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:07.580 [2024-12-13 23:52:38.177871] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:07.580 [2024-12-13 23:52:38.177879] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:07.580 [2024-12-13 23:52:38.177886] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:07.580 [2024-12-13 23:52:38.177894] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:07.580 [2024-12-13 23:52:38.177902] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:07.580 [2024-12-13 23:52:38.177909] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:07.580 [2024-12-13 23:52:38.177915] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:07.581 [2024-12-13 
23:52:38.177922] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:07.581 [2024-12-13 23:52:38.177930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.581 [2024-12-13 23:52:38.177938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:07.581 [2024-12-13 23:52:38.177946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.624 ms 00:19:07.581 [2024-12-13 23:52:38.177953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.581 [2024-12-13 23:52:38.196924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.581 [2024-12-13 23:52:38.196974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:07.581 [2024-12-13 23:52:38.196987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.927 ms 00:19:07.581 [2024-12-13 23:52:38.197004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.581 [2024-12-13 23:52:38.197096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.581 [2024-12-13 23:52:38.197105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:07.581 [2024-12-13 23:52:38.197114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:19:07.581 [2024-12-13 23:52:38.197123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.581 [2024-12-13 23:52:38.243610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.581 [2024-12-13 23:52:38.243692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:07.581 [2024-12-13 23:52:38.243707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.428 ms 00:19:07.581 [2024-12-13 23:52:38.243716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.581 [2024-12-13 23:52:38.243771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.581 [2024-12-13 23:52:38.243782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:07.581 [2024-12-13 23:52:38.243791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:07.581 [2024-12-13 23:52:38.243800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.581 [2024-12-13 23:52:38.244396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.581 [2024-12-13 23:52:38.244420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:07.581 [2024-12-13 23:52:38.244432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:19:07.581 [2024-12-13 23:52:38.244447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.581 [2024-12-13 23:52:38.244620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.581 [2024-12-13 23:52:38.244653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:07.581 [2024-12-13 23:52:38.244663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:19:07.581 [2024-12-13 23:52:38.244671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.581 [2024-12-13 23:52:38.261594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.581 [2024-12-13 23:52:38.261640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 
00:19:07.581 [2024-12-13 23:52:38.261652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.894 ms 00:19:07.581 [2024-12-13 23:52:38.261660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.581 [2024-12-13 23:52:38.276083] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:07.581 [2024-12-13 23:52:38.276286] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:07.581 [2024-12-13 23:52:38.276306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.581 [2024-12-13 23:52:38.276315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:07.581 [2024-12-13 23:52:38.276327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.531 ms 00:19:07.581 [2024-12-13 23:52:38.276334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.581 [2024-12-13 23:52:38.302598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.581 [2024-12-13 23:52:38.302804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:07.581 [2024-12-13 23:52:38.302828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.218 ms 00:19:07.581 [2024-12-13 23:52:38.302837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.843 [2024-12-13 23:52:38.316044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.843 [2024-12-13 23:52:38.316093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:07.843 [2024-12-13 23:52:38.316105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.159 ms 00:19:07.843 [2024-12-13 23:52:38.316113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.844 [2024-12-13 23:52:38.329045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.844 [2024-12-13 23:52:38.329102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:07.844 [2024-12-13 23:52:38.329114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.883 ms 00:19:07.844 [2024-12-13 23:52:38.329121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.844 [2024-12-13 23:52:38.329545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.844 [2024-12-13 23:52:38.329560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:07.844 [2024-12-13 23:52:38.329570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:19:07.844 [2024-12-13 23:52:38.329578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.844 [2024-12-13 23:52:38.396743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.844 [2024-12-13 23:52:38.396975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:07.844 [2024-12-13 23:52:38.397001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.147 ms 00:19:07.844 [2024-12-13 23:52:38.397011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.844 [2024-12-13 23:52:38.408414] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:07.844 [2024-12-13 23:52:38.411755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.844 [2024-12-13 23:52:38.411799] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:07.844 [2024-12-13 23:52:38.411813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.692 ms 00:19:07.844 [2024-12-13 23:52:38.411828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.844 [2024-12-13 23:52:38.411906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.844 [2024-12-13 23:52:38.411916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:07.844 [2024-12-13 23:52:38.411926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:07.844 [2024-12-13 23:52:38.411933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.844 [2024-12-13 23:52:38.412000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.844 [2024-12-13 23:52:38.412011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:07.844 [2024-12-13 23:52:38.412019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:07.844 [2024-12-13 23:52:38.412028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.844 [2024-12-13 23:52:38.413438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.844 [2024-12-13 23:52:38.413504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:07.844 [2024-12-13 23:52:38.413516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.388 ms 00:19:07.844 [2024-12-13 23:52:38.413524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.844 [2024-12-13 23:52:38.413562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.844 [2024-12-13 23:52:38.413571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:07.844 [2024-12-13 23:52:38.413587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:07.844 [2024-12-13 23:52:38.413595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.844 [2024-12-13 23:52:38.413633] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:07.844 [2024-12-13 23:52:38.413644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.844 [2024-12-13 23:52:38.413656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:07.844 [2024-12-13 23:52:38.413664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:07.844 [2024-12-13 23:52:38.413672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.844 [2024-12-13 23:52:38.440393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.844 [2024-12-13 23:52:38.440446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:07.844 [2024-12-13 23:52:38.440460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.699 ms 00:19:07.844 [2024-12-13 23:52:38.440469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.844 [2024-12-13 23:52:38.440589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.844 [2024-12-13 23:52:38.440599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:07.844 [2024-12-13 23:52:38.440608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:07.844 [2024-12-13 23:52:38.440618] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.844 [2024-12-13 23:52:38.442047] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 295.988 ms, result 0 00:19:09.232  [2024-12-13T23:52:40.912Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-13T23:52:41.931Z] Copying: 26/1024 [MB] (10 MBps) [2024-12-13T23:52:42.897Z] Copying: 38/1024 [MB] (11 MBps) [2024-12-13T23:52:43.838Z] Copying: 53/1024 [MB] (15 MBps) [2024-12-13T23:52:44.778Z] Copying: 69/1024 [MB] (16 MBps) [2024-12-13T23:52:45.713Z] Copying: 80/1024 [MB] (10 MBps) [2024-12-13T23:52:46.652Z] Copying: 95/1024 [MB] (14 MBps) [2024-12-13T23:52:48.039Z] Copying: 115/1024 [MB] (20 MBps) [2024-12-13T23:52:48.983Z] Copying: 133/1024 [MB] (17 MBps) [2024-12-13T23:52:49.925Z] Copying: 144/1024 [MB] (11 MBps) [2024-12-13T23:52:50.869Z] Copying: 158/1024 [MB] (13 MBps) [2024-12-13T23:52:51.812Z] Copying: 177/1024 [MB] (18 MBps) [2024-12-13T23:52:52.754Z] Copying: 187/1024 [MB] (10 MBps) [2024-12-13T23:52:53.697Z] Copying: 211/1024 [MB] (23 MBps) [2024-12-13T23:52:54.641Z] Copying: 232/1024 [MB] (20 MBps) [2024-12-13T23:52:56.029Z] Copying: 242/1024 [MB] (10 MBps) [2024-12-13T23:52:56.973Z] Copying: 253/1024 [MB] (10 MBps) [2024-12-13T23:52:57.916Z] Copying: 265/1024 [MB] (11 MBps) [2024-12-13T23:52:58.859Z] Copying: 279/1024 [MB] (14 MBps) [2024-12-13T23:52:59.802Z] Copying: 291/1024 [MB] (11 MBps) [2024-12-13T23:53:00.745Z] Copying: 306/1024 [MB] (14 MBps) [2024-12-13T23:53:01.689Z] Copying: 324/1024 [MB] (18 MBps) [2024-12-13T23:53:02.635Z] Copying: 349/1024 [MB] (25 MBps) [2024-12-13T23:53:04.017Z] Copying: 362/1024 [MB] (12 MBps) [2024-12-13T23:53:04.962Z] Copying: 379/1024 [MB] (17 MBps) [2024-12-13T23:53:05.992Z] Copying: 401/1024 [MB] (21 MBps) [2024-12-13T23:53:06.936Z] Copying: 416/1024 [MB] (14 MBps) [2024-12-13T23:53:07.879Z] Copying: 434/1024 [MB] (18 MBps) [2024-12-13T23:53:08.819Z] Copying: 456/1024 [MB] (22 MBps) [2024-12-13T23:53:09.762Z] Copying: 476/1024 [MB] (20 MBps) [2024-12-13T23:53:10.704Z] Copying: 494/1024 [MB] (17 MBps) [2024-12-13T23:53:11.647Z] Copying: 505/1024 [MB] (10 MBps) [2024-12-13T23:53:13.032Z] Copying: 516/1024 [MB] (11 MBps) [2024-12-13T23:53:13.977Z] Copying: 530/1024 [MB] (13 MBps) [2024-12-13T23:53:14.920Z] Copying: 547/1024 [MB] (17 MBps) [2024-12-13T23:53:15.866Z] Copying: 558/1024 [MB] (10 MBps) [2024-12-13T23:53:16.811Z] Copying: 576/1024 [MB] (17 MBps) [2024-12-13T23:53:17.757Z] Copying: 586/1024 [MB] (10 MBps) [2024-12-13T23:53:18.701Z] Copying: 596/1024 [MB] (10 MBps) [2024-12-13T23:53:19.647Z] Copying: 607/1024 [MB] (10 MBps) [2024-12-13T23:53:21.035Z] Copying: 617/1024 [MB] (10 MBps) [2024-12-13T23:53:21.978Z] Copying: 628/1024 [MB] (10 MBps) [2024-12-13T23:53:22.919Z] Copying: 638/1024 [MB] (10 MBps) [2024-12-13T23:53:23.861Z] Copying: 648/1024 [MB] (10 MBps) [2024-12-13T23:53:24.802Z] Copying: 659/1024 [MB] (10 MBps) [2024-12-13T23:53:25.751Z] Copying: 674/1024 [MB] (14 MBps) [2024-12-13T23:53:26.695Z] Copying: 692/1024 [MB] (18 MBps) [2024-12-13T23:53:27.637Z] Copying: 703/1024 [MB] (10 MBps) [2024-12-13T23:53:29.052Z] Copying: 718/1024 [MB] (15 MBps) [2024-12-13T23:53:29.996Z] Copying: 728/1024 [MB] (10 MBps) [2024-12-13T23:53:30.938Z] Copying: 742/1024 [MB] (14 MBps) [2024-12-13T23:53:31.884Z] Copying: 762/1024 [MB] (20 MBps) [2024-12-13T23:53:32.881Z] Copying: 778/1024 [MB] (15 MBps) [2024-12-13T23:53:33.826Z] Copying: 794/1024 [MB] (16 MBps) [2024-12-13T23:53:34.771Z] Copying: 815/1024 [MB] (21 MBps) 
[2024-12-13T23:53:35.715Z] Copying: 831/1024 [MB] (16 MBps) [2024-12-13T23:53:36.658Z] Copying: 852/1024 [MB] (20 MBps) [2024-12-13T23:53:38.045Z] Copying: 865/1024 [MB] (13 MBps) [2024-12-13T23:53:38.989Z] Copying: 883/1024 [MB] (18 MBps) [2024-12-13T23:53:39.934Z] Copying: 898/1024 [MB] (14 MBps) [2024-12-13T23:53:40.879Z] Copying: 918/1024 [MB] (19 MBps) [2024-12-13T23:53:41.822Z] Copying: 933/1024 [MB] (15 MBps) [2024-12-13T23:53:42.766Z] Copying: 944/1024 [MB] (10 MBps) [2024-12-13T23:53:43.705Z] Copying: 954/1024 [MB] (10 MBps) [2024-12-13T23:53:44.647Z] Copying: 965/1024 [MB] (10 MBps) [2024-12-13T23:53:46.036Z] Copying: 984/1024 [MB] (19 MBps) [2024-12-13T23:53:46.980Z] Copying: 998/1024 [MB] (13 MBps) [2024-12-13T23:53:47.923Z] Copying: 1010/1024 [MB] (12 MBps) [2024-12-13T23:53:47.923Z] Copying: 1023/1024 [MB] (13 MBps) [2024-12-13T23:53:48.187Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-13 23:53:48.016259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.455 [2024-12-13 23:53:48.016369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:17.455 [2024-12-13 23:53:48.016393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:17.455 [2024-12-13 23:53:48.016408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.455 [2024-12-13 23:53:48.016448] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:17.455 [2024-12-13 23:53:48.021163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.455 [2024-12-13 23:53:48.021229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:17.455 [2024-12-13 23:53:48.021246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.690 ms 00:20:17.455 [2024-12-13 23:53:48.021259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.455 [2024-12-13 23:53:48.021738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.455 [2024-12-13 23:53:48.021760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:17.455 [2024-12-13 23:53:48.021776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:20:17.455 [2024-12-13 23:53:48.021789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.455 [2024-12-13 23:53:48.026210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.455 [2024-12-13 23:53:48.026235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:17.455 [2024-12-13 23:53:48.026251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.396 ms 00:20:17.455 [2024-12-13 23:53:48.026260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.455 [2024-12-13 23:53:48.032386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.455 [2024-12-13 23:53:48.032423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:17.455 [2024-12-13 23:53:48.032434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.106 ms 00:20:17.455 [2024-12-13 23:53:48.032441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.455 [2024-12-13 23:53:48.061101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.455 [2024-12-13 23:53:48.061149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:17.455 [2024-12-13 
23:53:48.061162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.557 ms 00:20:17.455 [2024-12-13 23:53:48.061170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.455 [2024-12-13 23:53:48.077624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.455 [2024-12-13 23:53:48.077674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:17.455 [2024-12-13 23:53:48.077687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.406 ms 00:20:17.455 [2024-12-13 23:53:48.077701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.455 [2024-12-13 23:53:48.077865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.455 [2024-12-13 23:53:48.077878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:17.455 [2024-12-13 23:53:48.077887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:20:17.455 [2024-12-13 23:53:48.077895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.455 [2024-12-13 23:53:48.103979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.455 [2024-12-13 23:53:48.104176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:17.455 [2024-12-13 23:53:48.104196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.069 ms 00:20:17.455 [2024-12-13 23:53:48.104203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.455 [2024-12-13 23:53:48.129684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.455 [2024-12-13 23:53:48.129731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:17.455 [2024-12-13 23:53:48.129756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.364 ms 00:20:17.455 [2024-12-13 23:53:48.129763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.455 [2024-12-13 23:53:48.154432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.455 [2024-12-13 23:53:48.154476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:17.455 [2024-12-13 23:53:48.154505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.623 ms 00:20:17.455 [2024-12-13 23:53:48.154513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.455 [2024-12-13 23:53:48.179409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.455 [2024-12-13 23:53:48.179456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:17.455 [2024-12-13 23:53:48.179469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.809 ms 00:20:17.455 [2024-12-13 23:53:48.179476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.455 [2024-12-13 23:53:48.179536] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:17.455 [2024-12-13 23:53:48.179560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179800] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:17.455 [2024-12-13 23:53:48.179957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.179966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.179973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.179989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.179997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 
23:53:48.180005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 
00:20:17.456 [2024-12-13 23:53:48.180200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:17.456 [2024-12-13 23:53:48.180381] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:17.456 [2024-12-13 23:53:48.180389] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5ea95622-f05f-4568-a570-323b970f1ac5 00:20:17.456 [2024-12-13 23:53:48.180397] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:17.456 [2024-12-13 
23:53:48.180405] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:17.456 [2024-12-13 23:53:48.180413] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:17.456 [2024-12-13 23:53:48.180421] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:17.456 [2024-12-13 23:53:48.180428] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:17.456 [2024-12-13 23:53:48.180436] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:17.456 [2024-12-13 23:53:48.180444] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:17.456 [2024-12-13 23:53:48.180458] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:17.456 [2024-12-13 23:53:48.180465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:17.456 [2024-12-13 23:53:48.180472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.456 [2024-12-13 23:53:48.180492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:17.456 [2024-12-13 23:53:48.180504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:20:17.456 [2024-12-13 23:53:48.180511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.717 [2024-12-13 23:53:48.194170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.717 [2024-12-13 23:53:48.194353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:17.717 [2024-12-13 23:53:48.194371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.623 ms 00:20:17.717 [2024-12-13 23:53:48.194379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.717 [2024-12-13 23:53:48.194633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.717 [2024-12-13 23:53:48.194651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:17.717 [2024-12-13 23:53:48.194661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:20:17.717 [2024-12-13 23:53:48.194669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.717 [2024-12-13 23:53:48.233497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.717 [2024-12-13 23:53:48.233543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:17.717 [2024-12-13 23:53:48.233554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.717 [2024-12-13 23:53:48.233563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.717 [2024-12-13 23:53:48.233627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.717 [2024-12-13 23:53:48.233642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:17.717 [2024-12-13 23:53:48.233651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.717 [2024-12-13 23:53:48.233658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.717 [2024-12-13 23:53:48.233734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.717 [2024-12-13 23:53:48.233745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:17.717 [2024-12-13 23:53:48.233754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.717 [2024-12-13 23:53:48.233762] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:17.718 [2024-12-13 23:53:48.233779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.718 [2024-12-13 23:53:48.233788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:17.718 [2024-12-13 23:53:48.233800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.718 [2024-12-13 23:53:48.233808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.718 [2024-12-13 23:53:48.315176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.718 [2024-12-13 23:53:48.315225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:17.718 [2024-12-13 23:53:48.315238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.718 [2024-12-13 23:53:48.315246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.718 [2024-12-13 23:53:48.347406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.718 [2024-12-13 23:53:48.347454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.718 [2024-12-13 23:53:48.347472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.718 [2024-12-13 23:53:48.347503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.718 [2024-12-13 23:53:48.347574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.718 [2024-12-13 23:53:48.347585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.718 [2024-12-13 23:53:48.347593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.718 [2024-12-13 23:53:48.347601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.718 [2024-12-13 23:53:48.347645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.718 [2024-12-13 23:53:48.347655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.718 [2024-12-13 23:53:48.347664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.718 [2024-12-13 23:53:48.347676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.718 [2024-12-13 23:53:48.347802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.718 [2024-12-13 23:53:48.347813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.718 [2024-12-13 23:53:48.347823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.718 [2024-12-13 23:53:48.347832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.718 [2024-12-13 23:53:48.347862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.718 [2024-12-13 23:53:48.347871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:17.718 [2024-12-13 23:53:48.347879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.718 [2024-12-13 23:53:48.347888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.718 [2024-12-13 23:53:48.347936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.718 [2024-12-13 23:53:48.347944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:17.718 [2024-12-13 23:53:48.347953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:20:17.718 [2024-12-13 23:53:48.347960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.718 [2024-12-13 23:53:48.348006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.718 [2024-12-13 23:53:48.348016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:17.718 [2024-12-13 23:53:48.348025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.718 [2024-12-13 23:53:48.348037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.718 [2024-12-13 23:53:48.348170] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 331.894 ms, result 0 00:20:18.662 00:20:18.662 00:20:18.662 23:53:49 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:21.212 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:21.212 23:53:51 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:21.212 [2024-12-13 23:53:51.548344] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:21.212 [2024-12-13 23:53:51.548731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74790 ] 00:20:21.212 [2024-12-13 23:53:51.697173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.212 [2024-12-13 23:53:51.921193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.785 [2024-12-13 23:53:52.206748] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:21.785 [2024-12-13 23:53:52.206829] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:21.785 [2024-12-13 23:53:52.360477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.785 [2024-12-13 23:53:52.360556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:21.785 [2024-12-13 23:53:52.360571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:21.785 [2024-12-13 23:53:52.360582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.785 [2024-12-13 23:53:52.360635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.785 [2024-12-13 23:53:52.360647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:21.785 [2024-12-13 23:53:52.360655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:21.785 [2024-12-13 23:53:52.360664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.785 [2024-12-13 23:53:52.360685] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:21.785 [2024-12-13 23:53:52.361442] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:21.785 [2024-12-13 23:53:52.361467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.785 [2024-12-13 23:53:52.361476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:21.785 [2024-12-13 23:53:52.361502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.787 ms 00:20:21.785 [2024-12-13 23:53:52.361509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.785 [2024-12-13 23:53:52.363204] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:21.785 [2024-12-13 23:53:52.377776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.785 [2024-12-13 23:53:52.377827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:21.785 [2024-12-13 23:53:52.377841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.575 ms 00:20:21.785 [2024-12-13 23:53:52.377848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.785 [2024-12-13 23:53:52.377925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.786 [2024-12-13 23:53:52.377935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:21.786 [2024-12-13 23:53:52.377943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:21.786 [2024-12-13 23:53:52.377950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.786 [2024-12-13 23:53:52.386117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.786 [2024-12-13 23:53:52.386161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:21.786 [2024-12-13 23:53:52.386171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.090 ms 00:20:21.786 [2024-12-13 23:53:52.386179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.786 [2024-12-13 23:53:52.386274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.786 [2024-12-13 23:53:52.386283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:21.786 [2024-12-13 23:53:52.386292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:21.786 [2024-12-13 23:53:52.386300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.786 [2024-12-13 23:53:52.386345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.786 [2024-12-13 23:53:52.386354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:21.786 [2024-12-13 23:53:52.386362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:21.786 [2024-12-13 23:53:52.386369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.786 [2024-12-13 23:53:52.386398] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:21.786 [2024-12-13 23:53:52.390647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.786 [2024-12-13 23:53:52.390686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:21.786 [2024-12-13 23:53:52.390696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.261 ms 00:20:21.786 [2024-12-13 23:53:52.390703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.786 [2024-12-13 23:53:52.390741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.786 [2024-12-13 23:53:52.390749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:21.786 [2024-12-13 23:53:52.390758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:21.786 [2024-12-13 23:53:52.390768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:20:21.786 [2024-12-13 23:53:52.390819] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:21.786 [2024-12-13 23:53:52.390840] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:21.786 [2024-12-13 23:53:52.390875] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:21.786 [2024-12-13 23:53:52.390891] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:21.786 [2024-12-13 23:53:52.390967] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:21.786 [2024-12-13 23:53:52.390977] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:21.786 [2024-12-13 23:53:52.390991] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:21.786 [2024-12-13 23:53:52.391002] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:21.786 [2024-12-13 23:53:52.391010] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:21.786 [2024-12-13 23:53:52.391019] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:21.786 [2024-12-13 23:53:52.391027] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:21.786 [2024-12-13 23:53:52.391035] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:21.786 [2024-12-13 23:53:52.391044] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:21.786 [2024-12-13 23:53:52.391053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.786 [2024-12-13 23:53:52.391061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:21.786 [2024-12-13 23:53:52.391069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:20:21.786 [2024-12-13 23:53:52.391076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.786 [2024-12-13 23:53:52.391138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.786 [2024-12-13 23:53:52.391147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:21.786 [2024-12-13 23:53:52.391154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:21.786 [2024-12-13 23:53:52.391161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.786 [2024-12-13 23:53:52.391232] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:21.786 [2024-12-13 23:53:52.391242] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:21.786 [2024-12-13 23:53:52.391250] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:21.786 [2024-12-13 23:53:52.391258] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.786 [2024-12-13 23:53:52.391266] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:21.786 [2024-12-13 23:53:52.391272] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:21.786 [2024-12-13 23:53:52.391278] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:21.786 [2024-12-13 23:53:52.391287] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:21.786 [2024-12-13 23:53:52.391294] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:21.786 [2024-12-13 23:53:52.391302] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:21.786 [2024-12-13 23:53:52.391309] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:21.786 [2024-12-13 23:53:52.391316] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:21.786 [2024-12-13 23:53:52.391324] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:21.786 [2024-12-13 23:53:52.391331] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:21.786 [2024-12-13 23:53:52.391338] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:21.786 [2024-12-13 23:53:52.391346] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.786 [2024-12-13 23:53:52.391360] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:21.786 [2024-12-13 23:53:52.391367] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:21.786 [2024-12-13 23:53:52.391373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.786 [2024-12-13 23:53:52.391380] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:21.786 [2024-12-13 23:53:52.391387] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:21.786 [2024-12-13 23:53:52.391393] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:21.786 [2024-12-13 23:53:52.391400] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:21.786 [2024-12-13 23:53:52.391407] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:21.786 [2024-12-13 23:53:52.391414] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:21.786 [2024-12-13 23:53:52.391421] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:21.786 [2024-12-13 23:53:52.391427] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:21.786 [2024-12-13 23:53:52.391433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:21.786 [2024-12-13 23:53:52.391440] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:21.786 [2024-12-13 23:53:52.391446] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:21.786 [2024-12-13 23:53:52.391453] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:21.786 [2024-12-13 23:53:52.391459] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:21.786 [2024-12-13 23:53:52.391465] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:21.786 [2024-12-13 23:53:52.391472] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:21.786 [2024-12-13 23:53:52.391507] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:21.786 [2024-12-13 23:53:52.391514] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:21.786 [2024-12-13 23:53:52.391521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:21.786 [2024-12-13 23:53:52.391527] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:21.786 [2024-12-13 23:53:52.391533] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:21.786 [2024-12-13 
23:53:52.391539] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:21.786 [2024-12-13 23:53:52.391545] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:21.786 [2024-12-13 23:53:52.391556] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:21.786 [2024-12-13 23:53:52.391563] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:21.786 [2024-12-13 23:53:52.391572] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.786 [2024-12-13 23:53:52.391581] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:21.786 [2024-12-13 23:53:52.391589] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:21.786 [2024-12-13 23:53:52.391597] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:21.786 [2024-12-13 23:53:52.391604] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:21.786 [2024-12-13 23:53:52.391610] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:21.786 [2024-12-13 23:53:52.391617] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:21.786 [2024-12-13 23:53:52.391625] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:21.786 [2024-12-13 23:53:52.391635] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:21.786 [2024-12-13 23:53:52.391644] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:21.786 [2024-12-13 23:53:52.391651] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:21.786 [2024-12-13 23:53:52.391658] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:21.786 [2024-12-13 23:53:52.391665] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:21.786 [2024-12-13 23:53:52.391672] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:21.786 [2024-12-13 23:53:52.391679] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:21.786 [2024-12-13 23:53:52.391686] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:21.786 [2024-12-13 23:53:52.391693] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:21.787 [2024-12-13 23:53:52.391700] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:21.787 [2024-12-13 23:53:52.391707] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:21.787 [2024-12-13 23:53:52.391714] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:21.787 [2024-12-13 23:53:52.391737] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:21.787 [2024-12-13 23:53:52.391745] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:21.787 [2024-12-13 23:53:52.391751] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:21.787 [2024-12-13 23:53:52.391759] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:21.787 [2024-12-13 23:53:52.391768] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:21.787 [2024-12-13 23:53:52.391775] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:21.787 [2024-12-13 23:53:52.391782] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:21.787 [2024-12-13 23:53:52.391790] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:21.787 [2024-12-13 23:53:52.391799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.787 [2024-12-13 23:53:52.391807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:21.787 [2024-12-13 23:53:52.391815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:20:21.787 [2024-12-13 23:53:52.391823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.787 [2024-12-13 23:53:52.410221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.787 [2024-12-13 23:53:52.410411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:21.787 [2024-12-13 23:53:52.410840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.357 ms 00:20:21.787 [2024-12-13 23:53:52.410908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.787 [2024-12-13 23:53:52.411067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.787 [2024-12-13 23:53:52.411102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:21.787 [2024-12-13 23:53:52.411162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:21.787 [2024-12-13 23:53:52.411186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.787 [2024-12-13 23:53:52.454000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.787 [2024-12-13 23:53:52.455397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:21.787 [2024-12-13 23:53:52.455628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.737 ms 00:20:21.787 [2024-12-13 23:53:52.455673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.787 [2024-12-13 23:53:52.455753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.787 [2024-12-13 23:53:52.455970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:21.787 [2024-12-13 23:53:52.455992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:21.787 [2024-12-13 23:53:52.456062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:21.787 [2024-12-13 23:53:52.456650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.787 [2024-12-13 23:53:52.456776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:21.787 [2024-12-13 23:53:52.456939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:20:21.787 [2024-12-13 23:53:52.456986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.787 [2024-12-13 23:53:52.457139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.787 [2024-12-13 23:53:52.457163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:21.787 [2024-12-13 23:53:52.457222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:20:21.787 [2024-12-13 23:53:52.457335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.787 [2024-12-13 23:53:52.473852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.787 [2024-12-13 23:53:52.474005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:21.787 [2024-12-13 23:53:52.474062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.476 ms 00:20:21.787 [2024-12-13 23:53:52.474085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.787 [2024-12-13 23:53:52.488323] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:21.787 [2024-12-13 23:53:52.488511] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:21.787 [2024-12-13 23:53:52.488577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.787 [2024-12-13 23:53:52.488598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:21.787 [2024-12-13 23:53:52.488619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.368 ms 00:20:21.787 [2024-12-13 23:53:52.488638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.049 [2024-12-13 23:53:52.514712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.049 [2024-12-13 23:53:52.514874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:22.049 [2024-12-13 23:53:52.514935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.972 ms 00:20:22.049 [2024-12-13 23:53:52.514958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.049 [2024-12-13 23:53:52.528253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.049 [2024-12-13 23:53:52.528405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:22.049 [2024-12-13 23:53:52.528459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.208 ms 00:20:22.049 [2024-12-13 23:53:52.528469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.050 [2024-12-13 23:53:52.541047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.050 [2024-12-13 23:53:52.541100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:22.050 [2024-12-13 23:53:52.541112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.459 ms 00:20:22.050 [2024-12-13 23:53:52.541119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.050 [2024-12-13 23:53:52.541527] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.050 [2024-12-13 23:53:52.541542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:22.050 [2024-12-13 23:53:52.541551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:20:22.050 [2024-12-13 23:53:52.541580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.050 [2024-12-13 23:53:52.607032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.050 [2024-12-13 23:53:52.607087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:22.050 [2024-12-13 23:53:52.607102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.432 ms 00:20:22.050 [2024-12-13 23:53:52.607110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.050 [2024-12-13 23:53:52.618391] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:22.050 [2024-12-13 23:53:52.621284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.050 [2024-12-13 23:53:52.621476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:22.050 [2024-12-13 23:53:52.621510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.117 ms 00:20:22.050 [2024-12-13 23:53:52.621525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.050 [2024-12-13 23:53:52.621597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.050 [2024-12-13 23:53:52.621608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:22.050 [2024-12-13 23:53:52.621617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:22.050 [2024-12-13 23:53:52.621625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.050 [2024-12-13 23:53:52.621691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.050 [2024-12-13 23:53:52.621701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:22.050 [2024-12-13 23:53:52.621709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:22.050 [2024-12-13 23:53:52.621717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.050 [2024-12-13 23:53:52.623050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.050 [2024-12-13 23:53:52.623092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:22.050 [2024-12-13 23:53:52.623104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.312 ms 00:20:22.050 [2024-12-13 23:53:52.623112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.050 [2024-12-13 23:53:52.623145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.050 [2024-12-13 23:53:52.623154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:22.050 [2024-12-13 23:53:52.623169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:22.050 [2024-12-13 23:53:52.623176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.050 [2024-12-13 23:53:52.623212] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:22.050 [2024-12-13 23:53:52.623222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.050 [2024-12-13 23:53:52.623233] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:22.050 [2024-12-13 23:53:52.623241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:22.050 [2024-12-13 23:53:52.623249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.050 [2024-12-13 23:53:52.649453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.050 [2024-12-13 23:53:52.649644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:22.050 [2024-12-13 23:53:52.649666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.185 ms 00:20:22.050 [2024-12-13 23:53:52.649675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.050 [2024-12-13 23:53:52.649756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.050 [2024-12-13 23:53:52.649767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:22.050 [2024-12-13 23:53:52.649776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:22.050 [2024-12-13 23:53:52.649784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.050 [2024-12-13 23:53:52.650960] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 289.992 ms, result 0 00:20:22.993  [2024-12-13T23:53:55.108Z] Copying: 21/1024 [MB] (21 MBps) [2024-12-13T23:53:55.679Z] Copying: 54/1024 [MB] (32 MBps) [2024-12-13T23:53:57.069Z] Copying: 65/1024 [MB] (11 MBps) [2024-12-13T23:53:58.099Z] Copying: 79/1024 [MB] (14 MBps) [2024-12-13T23:53:58.671Z] Copying: 93/1024 [MB] (14 MBps) [2024-12-13T23:54:00.056Z] Copying: 105/1024 [MB] (11 MBps) [2024-12-13T23:54:01.001Z] Copying: 118/1024 [MB] (12 MBps) [2024-12-13T23:54:01.947Z] Copying: 130888/1048576 [kB] (9968 kBps) [2024-12-13T23:54:02.891Z] Copying: 138/1024 [MB] (10 MBps) [2024-12-13T23:54:03.832Z] Copying: 149/1024 [MB] (11 MBps) [2024-12-13T23:54:04.773Z] Copying: 160/1024 [MB] (10 MBps) [2024-12-13T23:54:05.714Z] Copying: 170/1024 [MB] (10 MBps) [2024-12-13T23:54:07.097Z] Copying: 180/1024 [MB] (10 MBps) [2024-12-13T23:54:07.669Z] Copying: 198/1024 [MB] (18 MBps) [2024-12-13T23:54:09.057Z] Copying: 216/1024 [MB] (17 MBps) [2024-12-13T23:54:10.005Z] Copying: 235/1024 [MB] (19 MBps) [2024-12-13T23:54:10.949Z] Copying: 248/1024 [MB] (13 MBps) [2024-12-13T23:54:11.893Z] Copying: 266/1024 [MB] (18 MBps) [2024-12-13T23:54:12.834Z] Copying: 282/1024 [MB] (15 MBps) [2024-12-13T23:54:13.777Z] Copying: 302/1024 [MB] (19 MBps) [2024-12-13T23:54:14.722Z] Copying: 321/1024 [MB] (19 MBps) [2024-12-13T23:54:16.102Z] Copying: 333/1024 [MB] (12 MBps) [2024-12-13T23:54:16.669Z] Copying: 353/1024 [MB] (19 MBps) [2024-12-13T23:54:18.056Z] Copying: 377/1024 [MB] (23 MBps) [2024-12-13T23:54:18.998Z] Copying: 387/1024 [MB] (10 MBps) [2024-12-13T23:54:19.939Z] Copying: 398/1024 [MB] (11 MBps) [2024-12-13T23:54:20.880Z] Copying: 409/1024 [MB] (10 MBps) [2024-12-13T23:54:21.821Z] Copying: 420/1024 [MB] (11 MBps) [2024-12-13T23:54:22.765Z] Copying: 431/1024 [MB] (11 MBps) [2024-12-13T23:54:23.707Z] Copying: 447/1024 [MB] (16 MBps) [2024-12-13T23:54:24.741Z] Copying: 459/1024 [MB] (11 MBps) [2024-12-13T23:54:25.674Z] Copying: 485/1024 [MB] (26 MBps) [2024-12-13T23:54:27.047Z] Copying: 511/1024 [MB] (25 MBps) [2024-12-13T23:54:27.981Z] Copying: 562/1024 [MB] (50 MBps) [2024-12-13T23:54:28.917Z] Copying: 589/1024 [MB] (26 MBps) [2024-12-13T23:54:29.863Z] Copying: 613/1024 [MB] (24 MBps) 
[2024-12-13T23:54:30.798Z] Copying: 623/1024 [MB] (10 MBps) [2024-12-13T23:54:31.730Z] Copying: 646/1024 [MB] (22 MBps) [2024-12-13T23:54:32.666Z] Copying: 696/1024 [MB] (50 MBps) [2024-12-13T23:54:34.050Z] Copying: 741/1024 [MB] (45 MBps) [2024-12-13T23:54:34.991Z] Copying: 754/1024 [MB] (13 MBps) [2024-12-13T23:54:35.931Z] Copying: 770/1024 [MB] (15 MBps) [2024-12-13T23:54:36.872Z] Copying: 789/1024 [MB] (18 MBps) [2024-12-13T23:54:37.819Z] Copying: 806/1024 [MB] (17 MBps) [2024-12-13T23:54:38.762Z] Copying: 828/1024 [MB] (21 MBps) [2024-12-13T23:54:39.705Z] Copying: 845/1024 [MB] (16 MBps) [2024-12-13T23:54:41.092Z] Copying: 866/1024 [MB] (21 MBps) [2024-12-13T23:54:41.666Z] Copying: 885/1024 [MB] (19 MBps) [2024-12-13T23:54:43.052Z] Copying: 903/1024 [MB] (18 MBps) [2024-12-13T23:54:43.997Z] Copying: 925/1024 [MB] (22 MBps) [2024-12-13T23:54:44.941Z] Copying: 947/1024 [MB] (21 MBps) [2024-12-13T23:54:45.883Z] Copying: 966/1024 [MB] (18 MBps) [2024-12-13T23:54:46.828Z] Copying: 981/1024 [MB] (15 MBps) [2024-12-13T23:54:47.769Z] Copying: 999/1024 [MB] (18 MBps) [2024-12-13T23:54:48.712Z] Copying: 1017/1024 [MB] (17 MBps) [2024-12-13T23:54:48.973Z] Copying: 1048340/1048576 [kB] (6680 kBps) [2024-12-13T23:54:48.973Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-13 23:54:48.895470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.242 [2024-12-13 23:54:48.895541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:18.242 [2024-12-13 23:54:48.895556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:18.242 [2024-12-13 23:54:48.895564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.242 [2024-12-13 23:54:48.896098] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:18.242 [2024-12-13 23:54:48.899685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.242 [2024-12-13 23:54:48.899719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:18.242 [2024-12-13 23:54:48.899730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.559 ms 00:21:18.242 [2024-12-13 23:54:48.899740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.242 [2024-12-13 23:54:48.912717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.242 [2024-12-13 23:54:48.912751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:18.242 [2024-12-13 23:54:48.912770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.582 ms 00:21:18.242 [2024-12-13 23:54:48.912777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.242 [2024-12-13 23:54:48.933514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.242 [2024-12-13 23:54:48.933549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:18.242 [2024-12-13 23:54:48.933559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.721 ms 00:21:18.242 [2024-12-13 23:54:48.933567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.242 [2024-12-13 23:54:48.939691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.242 [2024-12-13 23:54:48.939720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:18.242 [2024-12-13 23:54:48.939730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.095 ms 
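The spdk_dd progress entries above finish with an average of 18 MBps for the full 1024 MB restore. A rough consistency check against the wall-clock timestamps in the log (FTL startup finished around 23:53:52.65, the final progress entry is stamped around 23:54:48.97), assuming those two stamps bracket the copy:

from datetime import datetime

# Rough consistency check on the "average 18 MBps" reported above.
# Timestamps are taken from this log; assumed to bracket the copy.
start = datetime.fromisoformat("2024-12-13T23:53:52.650")
end = datetime.fromisoformat("2024-12-13T23:54:48.970")
copied_mb = 1024

elapsed = (end - start).total_seconds()   # ~56.3 s
print(round(copied_mb / elapsed, 1))      # ~18.2 MB/s, in line with the logged average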
00:21:18.242 [2024-12-13 23:54:48.939743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.242 [2024-12-13 23:54:48.964615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.242 [2024-12-13 23:54:48.964650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:18.242 [2024-12-13 23:54:48.964661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.810 ms 00:21:18.242 [2024-12-13 23:54:48.964668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.503 [2024-12-13 23:54:48.979449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.503 [2024-12-13 23:54:48.979491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:18.503 [2024-12-13 23:54:48.979503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.747 ms 00:21:18.503 [2024-12-13 23:54:48.979510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.503 [2024-12-13 23:54:49.114501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.503 [2024-12-13 23:54:49.114550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:18.504 [2024-12-13 23:54:49.114562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 134.949 ms 00:21:18.504 [2024-12-13 23:54:49.114570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.504 [2024-12-13 23:54:49.140547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.504 [2024-12-13 23:54:49.140595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:18.504 [2024-12-13 23:54:49.140607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.954 ms 00:21:18.504 [2024-12-13 23:54:49.140614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.504 [2024-12-13 23:54:49.166323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.504 [2024-12-13 23:54:49.166368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:18.504 [2024-12-13 23:54:49.166394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.664 ms 00:21:18.504 [2024-12-13 23:54:49.166402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.504 [2024-12-13 23:54:49.191664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.504 [2024-12-13 23:54:49.191715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:18.504 [2024-12-13 23:54:49.191726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.216 ms 00:21:18.504 [2024-12-13 23:54:49.191733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.504 [2024-12-13 23:54:49.216908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.504 [2024-12-13 23:54:49.216952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:18.504 [2024-12-13 23:54:49.216964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.079 ms 00:21:18.504 [2024-12-13 23:54:49.216972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.504 [2024-12-13 23:54:49.217015] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:18.504 [2024-12-13 23:54:49.217030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 99072 / 261120 wr_cnt: 1 state: open 
00:21:18.504 [2024-12-13 23:54:49.217041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 
state: free 00:21:18.504 [2024-12-13 23:54:49.217239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 
0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:18.504 [2024-12-13 23:54:49.217632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:18.505 [2024-12-13 23:54:49.217854] ftl_debug.c: 
211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:18.505 [2024-12-13 23:54:49.217862] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5ea95622-f05f-4568-a570-323b970f1ac5 00:21:18.505 [2024-12-13 23:54:49.217872] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 99072 00:21:18.505 [2024-12-13 23:54:49.217880] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 100032 00:21:18.505 [2024-12-13 23:54:49.217888] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 99072 00:21:18.505 [2024-12-13 23:54:49.217901] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0097 00:21:18.505 [2024-12-13 23:54:49.217908] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:18.505 [2024-12-13 23:54:49.217916] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:18.505 [2024-12-13 23:54:49.217923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:18.505 [2024-12-13 23:54:49.217937] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:18.505 [2024-12-13 23:54:49.217944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:18.505 [2024-12-13 23:54:49.217952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.505 [2024-12-13 23:54:49.217960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:18.505 [2024-12-13 23:54:49.217968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.938 ms 00:21:18.505 [2024-12-13 23:54:49.217975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.505 [2024-12-13 23:54:49.231532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.505 [2024-12-13 23:54:49.231574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:18.505 [2024-12-13 23:54:49.231586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.521 ms 00:21:18.505 [2024-12-13 23:54:49.231593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.505 [2024-12-13 23:54:49.231834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.505 [2024-12-13 23:54:49.231845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:18.505 [2024-12-13 23:54:49.231853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:21:18.505 [2024-12-13 23:54:49.231861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.764 [2024-12-13 23:54:49.270691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.764 [2024-12-13 23:54:49.270738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:18.764 [2024-12-13 23:54:49.270749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.764 [2024-12-13 23:54:49.270758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.764 [2024-12-13 23:54:49.270825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.764 [2024-12-13 23:54:49.270833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:18.764 [2024-12-13 23:54:49.270842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.765 [2024-12-13 23:54:49.270850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.765 [2024-12-13 23:54:49.270924] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.765 [2024-12-13 23:54:49.270941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:18.765 [2024-12-13 23:54:49.270949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.765 [2024-12-13 23:54:49.270957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.765 [2024-12-13 23:54:49.270974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.765 [2024-12-13 23:54:49.270982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:18.765 [2024-12-13 23:54:49.270990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.765 [2024-12-13 23:54:49.270997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.765 [2024-12-13 23:54:49.351938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.765 [2024-12-13 23:54:49.351990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:18.765 [2024-12-13 23:54:49.352002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.765 [2024-12-13 23:54:49.352011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.765 [2024-12-13 23:54:49.383668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.765 [2024-12-13 23:54:49.383716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:18.765 [2024-12-13 23:54:49.383727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.765 [2024-12-13 23:54:49.383736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.765 [2024-12-13 23:54:49.383820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.765 [2024-12-13 23:54:49.383831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:18.765 [2024-12-13 23:54:49.383847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.765 [2024-12-13 23:54:49.383856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.765 [2024-12-13 23:54:49.383899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.765 [2024-12-13 23:54:49.383908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:18.765 [2024-12-13 23:54:49.383917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.765 [2024-12-13 23:54:49.383924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.765 [2024-12-13 23:54:49.384020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.765 [2024-12-13 23:54:49.384030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:18.765 [2024-12-13 23:54:49.384039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.765 [2024-12-13 23:54:49.384049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.765 [2024-12-13 23:54:49.384082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.765 [2024-12-13 23:54:49.384091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:18.765 [2024-12-13 23:54:49.384099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.765 [2024-12-13 23:54:49.384107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:18.765 [2024-12-13 23:54:49.384149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.765 [2024-12-13 23:54:49.384159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:18.765 [2024-12-13 23:54:49.384168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.765 [2024-12-13 23:54:49.384178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.765 [2024-12-13 23:54:49.384227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.765 [2024-12-13 23:54:49.384237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:18.765 [2024-12-13 23:54:49.384246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.765 [2024-12-13 23:54:49.384254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.765 [2024-12-13 23:54:49.384390] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 492.221 ms, result 0 00:21:20.221 00:21:20.221 00:21:20.221 23:54:50 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:20.221 [2024-12-13 23:54:50.917644] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:20.221 [2024-12-13 23:54:50.917783] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75409 ] 00:21:20.482 [2024-12-13 23:54:51.070274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.743 [2024-12-13 23:54:51.290776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.005 [2024-12-13 23:54:51.577144] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:21.005 [2024-12-13 23:54:51.577212] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:21.005 [2024-12-13 23:54:51.732349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.005 [2024-12-13 23:54:51.732403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:21.005 [2024-12-13 23:54:51.732417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:21.005 [2024-12-13 23:54:51.732428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.005 [2024-12-13 23:54:51.732501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.005 [2024-12-13 23:54:51.732512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:21.005 [2024-12-13 23:54:51.732521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:21.005 [2024-12-13 23:54:51.732529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.005 [2024-12-13 23:54:51.732550] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:21.005 [2024-12-13 23:54:51.733328] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:21.005 [2024-12-13 23:54:51.733346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.005 
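The shutdown statistics dumped above report total writes 100032 against user writes 99072 and a WAF of 1.0097. A minimal sketch of that ratio, assuming the reported WAF is simply total media writes divided by user writes (an assumption, not taken from the SPDK source):

# Write amplification as reported in the ftl_dev_dump_stats output above.
# Assumption: WAF = total media writes / user writes.
total_writes = 100032   # "total writes"
user_writes = 99072     # "user writes"

waf = total_writes / user_writes
print(f"{waf:.4f}")     # 1.0097, matching the logged "WAF: 1.0097"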
[2024-12-13 23:54:51.733356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:21.005 [2024-12-13 23:54:51.733365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.801 ms 00:21:21.005 [2024-12-13 23:54:51.733373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.268 [2024-12-13 23:54:51.735067] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:21.268 [2024-12-13 23:54:51.749787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.268 [2024-12-13 23:54:51.749833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:21.268 [2024-12-13 23:54:51.749847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.721 ms 00:21:21.268 [2024-12-13 23:54:51.749854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.269 [2024-12-13 23:54:51.749932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.269 [2024-12-13 23:54:51.749943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:21.269 [2024-12-13 23:54:51.749952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:21.269 [2024-12-13 23:54:51.749959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.269 [2024-12-13 23:54:51.758165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.269 [2024-12-13 23:54:51.758200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:21.269 [2024-12-13 23:54:51.758211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.123 ms 00:21:21.269 [2024-12-13 23:54:51.758219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.269 [2024-12-13 23:54:51.758312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.269 [2024-12-13 23:54:51.758322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:21.269 [2024-12-13 23:54:51.758331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:21:21.269 [2024-12-13 23:54:51.758339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.269 [2024-12-13 23:54:51.758383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.269 [2024-12-13 23:54:51.758392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:21.269 [2024-12-13 23:54:51.758401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:21.269 [2024-12-13 23:54:51.758409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.269 [2024-12-13 23:54:51.758439] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:21.269 [2024-12-13 23:54:51.762602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.269 [2024-12-13 23:54:51.762636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:21.269 [2024-12-13 23:54:51.762646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.175 ms 00:21:21.269 [2024-12-13 23:54:51.762654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.269 [2024-12-13 23:54:51.762692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.269 [2024-12-13 23:54:51.762700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:21.269 
[2024-12-13 23:54:51.762709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:21.269 [2024-12-13 23:54:51.762719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.269 [2024-12-13 23:54:51.762771] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:21.269 [2024-12-13 23:54:51.762792] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:21:21.269 [2024-12-13 23:54:51.762828] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:21.269 [2024-12-13 23:54:51.762845] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:21:21.269 [2024-12-13 23:54:51.762920] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:21.269 [2024-12-13 23:54:51.762930] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:21.269 [2024-12-13 23:54:51.762943] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:21.269 [2024-12-13 23:54:51.762953] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:21.269 [2024-12-13 23:54:51.762963] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:21.269 [2024-12-13 23:54:51.762972] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:21.269 [2024-12-13 23:54:51.762979] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:21.269 [2024-12-13 23:54:51.762986] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:21.269 [2024-12-13 23:54:51.762994] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:21.269 [2024-12-13 23:54:51.763002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.269 [2024-12-13 23:54:51.763009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:21.269 [2024-12-13 23:54:51.763017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:21:21.269 [2024-12-13 23:54:51.763025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.269 [2024-12-13 23:54:51.763089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.269 [2024-12-13 23:54:51.763102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:21.269 [2024-12-13 23:54:51.763110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:21.269 [2024-12-13 23:54:51.763117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.269 [2024-12-13 23:54:51.763190] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:21.269 [2024-12-13 23:54:51.763201] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:21.269 [2024-12-13 23:54:51.763210] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:21.269 [2024-12-13 23:54:51.763218] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:21.269 [2024-12-13 23:54:51.763226] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:21.269 [2024-12-13 23:54:51.763233] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:21.269 [2024-12-13 23:54:51.763241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:21.269 [2024-12-13 23:54:51.763250] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:21.269 [2024-12-13 23:54:51.763256] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:21.269 [2024-12-13 23:54:51.763263] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:21.269 [2024-12-13 23:54:51.763270] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:21.269 [2024-12-13 23:54:51.763280] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:21.269 [2024-12-13 23:54:51.763288] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:21.269 [2024-12-13 23:54:51.763295] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:21.269 [2024-12-13 23:54:51.763302] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:21.269 [2024-12-13 23:54:51.763308] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:21.269 [2024-12-13 23:54:51.763321] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:21.269 [2024-12-13 23:54:51.763328] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:21.269 [2024-12-13 23:54:51.763335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:21.269 [2024-12-13 23:54:51.763342] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:21.269 [2024-12-13 23:54:51.763348] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:21.269 [2024-12-13 23:54:51.763355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:21.269 [2024-12-13 23:54:51.763362] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:21.269 [2024-12-13 23:54:51.763369] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:21.269 [2024-12-13 23:54:51.763375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:21.269 [2024-12-13 23:54:51.763382] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:21.269 [2024-12-13 23:54:51.763389] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:21.269 [2024-12-13 23:54:51.763396] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:21.269 [2024-12-13 23:54:51.763402] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:21.269 [2024-12-13 23:54:51.763408] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:21.269 [2024-12-13 23:54:51.763415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:21.269 [2024-12-13 23:54:51.763421] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:21.269 [2024-12-13 23:54:51.763428] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:21.269 [2024-12-13 23:54:51.763434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:21.269 [2024-12-13 23:54:51.763440] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:21.269 [2024-12-13 23:54:51.763446] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:21.269 [2024-12-13 23:54:51.763453] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:21.269 [2024-12-13 23:54:51.763460] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:21.269 [2024-12-13 23:54:51.763466] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:21.269 [2024-12-13 23:54:51.763472] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:21.269 [2024-12-13 23:54:51.763505] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:21.269 [2024-12-13 23:54:51.763517] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:21.269 [2024-12-13 23:54:51.763524] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:21.269 [2024-12-13 23:54:51.763533] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:21.269 [2024-12-13 23:54:51.763542] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:21.269 [2024-12-13 23:54:51.763549] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:21.269 [2024-12-13 23:54:51.763557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:21.269 [2024-12-13 23:54:51.763564] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:21.269 [2024-12-13 23:54:51.763571] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:21.269 [2024-12-13 23:54:51.763578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:21.269 [2024-12-13 23:54:51.763586] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:21.269 [2024-12-13 23:54:51.763595] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:21.269 [2024-12-13 23:54:51.763605] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:21.269 [2024-12-13 23:54:51.763613] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:21.269 [2024-12-13 23:54:51.763620] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:21.269 [2024-12-13 23:54:51.763628] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:21.269 [2024-12-13 23:54:51.763635] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:21.270 [2024-12-13 23:54:51.763642] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:21.270 [2024-12-13 23:54:51.763650] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:21.270 [2024-12-13 23:54:51.763657] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:21.270 [2024-12-13 23:54:51.763664] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:21.270 [2024-12-13 23:54:51.763671] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:21.270 [2024-12-13 23:54:51.763678] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:21.270 [2024-12-13 23:54:51.763685] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:21.270 [2024-12-13 23:54:51.763692] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:21.270 [2024-12-13 23:54:51.763699] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:21.270 [2024-12-13 23:54:51.763708] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:21.270 [2024-12-13 23:54:51.763716] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:21.270 [2024-12-13 23:54:51.763723] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:21.270 [2024-12-13 23:54:51.763729] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:21.270 [2024-12-13 23:54:51.763737] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:21.270 [2024-12-13 23:54:51.763744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.270 [2024-12-13 23:54:51.763753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:21.270 [2024-12-13 23:54:51.763760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:21:21.270 [2024-12-13 23:54:51.763782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.270 [2024-12-13 23:54:51.781804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.270 [2024-12-13 23:54:51.781845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:21.270 [2024-12-13 23:54:51.781856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.973 ms 00:21:21.270 [2024-12-13 23:54:51.781869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.270 [2024-12-13 23:54:51.781962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.270 [2024-12-13 23:54:51.781971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:21.270 [2024-12-13 23:54:51.781979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:21.270 [2024-12-13 23:54:51.781988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.270 [2024-12-13 23:54:51.828890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.270 [2024-12-13 23:54:51.828940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:21.270 [2024-12-13 23:54:51.828952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.848 ms 00:21:21.270 [2024-12-13 23:54:51.828961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.270 [2024-12-13 23:54:51.829012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.270 [2024-12-13 23:54:51.829021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
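The layout setup above reports 20971520 L2P entries with a 4-byte address size, and the NV cache layout shows an 80.00 MiB l2p region. A quick check that the two agree, assuming the table footprint is simply entries times address size:

# L2P table footprint from the ftl_layout_setup parameters logged above.
l2p_entries = 20971520   # "L2P entries"
l2p_addr_size = 4        # "L2P address size" in bytes

l2p_bytes = l2p_entries * l2p_addr_size
print(l2p_bytes / (1024 * 1024))   # 80.0 -> matches the 80.00 MiB l2p region in the dump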
00:21:21.270 [2024-12-13 23:54:51.829030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:21.270 [2024-12-13 23:54:51.829038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.270 [2024-12-13 23:54:51.829648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.270 [2024-12-13 23:54:51.829681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:21.270 [2024-12-13 23:54:51.829698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:21:21.270 [2024-12-13 23:54:51.829707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.270 [2024-12-13 23:54:51.829834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.270 [2024-12-13 23:54:51.829850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:21.270 [2024-12-13 23:54:51.829859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:21:21.270 [2024-12-13 23:54:51.829868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.270 [2024-12-13 23:54:51.846357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.270 [2024-12-13 23:54:51.846400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:21.270 [2024-12-13 23:54:51.846411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.466 ms 00:21:21.270 [2024-12-13 23:54:51.846419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.270 [2024-12-13 23:54:51.860954] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:21.270 [2024-12-13 23:54:51.860998] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:21.270 [2024-12-13 23:54:51.861011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.270 [2024-12-13 23:54:51.861020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:21.270 [2024-12-13 23:54:51.861030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.451 ms 00:21:21.270 [2024-12-13 23:54:51.861037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.270 [2024-12-13 23:54:51.886618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.270 [2024-12-13 23:54:51.886659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:21.270 [2024-12-13 23:54:51.886671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.527 ms 00:21:21.270 [2024-12-13 23:54:51.886679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.270 [2024-12-13 23:54:51.899462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.270 [2024-12-13 23:54:51.899507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:21.270 [2024-12-13 23:54:51.899519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.721 ms 00:21:21.270 [2024-12-13 23:54:51.899527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.270 [2024-12-13 23:54:51.912538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.270 [2024-12-13 23:54:51.912580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:21.270 [2024-12-13 23:54:51.912601] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.964 ms 00:21:21.270 [2024-12-13 23:54:51.912608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.270 [2024-12-13 23:54:51.912994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.270 [2024-12-13 23:54:51.913017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:21.270 [2024-12-13 23:54:51.913027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:21:21.270 [2024-12-13 23:54:51.913035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.270 [2024-12-13 23:54:51.979418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.270 [2024-12-13 23:54:51.979468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:21.270 [2024-12-13 23:54:51.979492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.365 ms 00:21:21.270 [2024-12-13 23:54:51.979502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.270 [2024-12-13 23:54:51.991351] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:21.270 [2024-12-13 23:54:51.994281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.270 [2024-12-13 23:54:51.994320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:21.270 [2024-12-13 23:54:51.994339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.720 ms 00:21:21.270 [2024-12-13 23:54:51.994347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.270 [2024-12-13 23:54:51.994420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.270 [2024-12-13 23:54:51.994431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:21.270 [2024-12-13 23:54:51.994440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:21.270 [2024-12-13 23:54:51.994449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.270 [2024-12-13 23:54:51.995893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.270 [2024-12-13 23:54:51.995936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:21.270 [2024-12-13 23:54:51.995948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.390 ms 00:21:21.270 [2024-12-13 23:54:51.995962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.531 [2024-12-13 23:54:51.997333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.531 [2024-12-13 23:54:51.997373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:21.531 [2024-12-13 23:54:51.997383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.347 ms 00:21:21.531 [2024-12-13 23:54:51.997391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.531 [2024-12-13 23:54:51.997428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.531 [2024-12-13 23:54:51.997442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:21.531 [2024-12-13 23:54:51.997450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:21.531 [2024-12-13 23:54:51.997458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.531 [2024-12-13 23:54:51.997513] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: 
*NOTICE*: [FTL][ftl0] Self test skipped
00:21:21.531 [2024-12-13 23:54:51.997526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:21.531 [2024-12-13 23:54:51.997534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:21:21.531 [2024-12-13 23:54:51.997542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:21:21.531 [2024-12-13 23:54:51.997549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:21.531 [2024-12-13 23:54:52.023694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:21.531 [2024-12-13 23:54:52.023740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:21:21.531 [2024-12-13 23:54:52.023753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.124 ms
00:21:21.531 [2024-12-13 23:54:52.023793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:21.532 [2024-12-13 23:54:52.023878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:21.532 [2024-12-13 23:54:52.023887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:21:21.532 [2024-12-13 23:54:52.023897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms
00:21:21.532 [2024-12-13 23:54:52.023906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:21.532 [2024-12-13 23:54:52.031463] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 295.935 ms, result 0
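trace_step() logs every management step above as an Action/name/duration/status quadruple, and finish_msg() closes the sequence: the whole 'FTL startup' process took 295.935 ms here. A minimal awk sketch for turning such a log into a per-step duration table (it assumes one log entry per line; 'build.log' is a placeholder for the captured console output):

  awk '/407:trace_step/ { sub(/.*name: /, ""); name = $0 }
       /409:trace_step/ { sub(/.*duration: /, ""); printf "%8.3f ms  %s\n", $1, name }' build.log

Run against the startup sequence above it would report, for example, 46.848 ms for 'Initialize NV cache' and 66.365 ms for 'Restore P2L checkpoints'.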
00:21:22.918 [2024-12-13T23:54:54.221Z] Copying: 12/1024 [MB] (12 MBps)
[... intermediate per-interval Copying progress entries trimmed (throughput ranged from 10 MBps to 31 MBps) ...]
[2024-12-13T23:55:50.273Z] Copying: 1024/1024 [MB] (average 17 MBps)
[2024-12-13 23:55:50.085346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:19.541 [2024-12-13 23:55:50.085477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:22:19.541 [2024-12-13 23:55:50.085528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:22:19.541 [2024-12-13 23:55:50.085543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:19.541 [2024-12-13 23:55:50.085583] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:22:19.541 [2024-12-13 23:55:50.090439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:19.541 [2024-12-13 23:55:50.090504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:22:19.541 [2024-12-13 23:55:50.090523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.830 ms
00:22:19.541 [2024-12-13 23:55:50.090537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:19.541 [2024-12-13 23:55:50.090975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:19.541 [2024-12-13 23:55:50.091011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:22:19.541 [2024-12-13 23:55:50.091027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms
00:22:19.541 [2024-12-13 23:55:50.091041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:19.541 [2024-12-13 23:55:50.099494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:19.541 [2024-12-13 23:55:50.099536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:22:19.541 [2024-12-13 23:55:50.099548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.414 ms
00:22:19.541 [2024-12-13 23:55:50.099556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:19.541 [2024-12-13 23:55:50.105709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:19.541 [2024-12-13 23:55:50.105744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:22:19.541 [2024-12-13 23:55:50.105762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.105 ms 00:22:19.542 [2024-12-13 23:55:50.105771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.542 [2024-12-13 23:55:50.132487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.542 [2024-12-13 23:55:50.132529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:19.542 [2024-12-13 23:55:50.132542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.650 ms 00:22:19.542 [2024-12-13 23:55:50.132550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.542 [2024-12-13 23:55:50.148041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.542 [2024-12-13 23:55:50.148096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:19.542 [2024-12-13 23:55:50.148110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.445 ms 00:22:19.542 [2024-12-13 23:55:50.148117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.804 [2024-12-13 23:55:50.347925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.804 [2024-12-13 23:55:50.347970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:19.804 [2024-12-13 23:55:50.347983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 199.754 ms 00:22:19.804 [2024-12-13 23:55:50.347991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.804 [2024-12-13 23:55:50.373644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.804 [2024-12-13 23:55:50.373685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:19.804 [2024-12-13 23:55:50.373697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.628 ms 00:22:19.804 [2024-12-13 23:55:50.373705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.804 [2024-12-13 23:55:50.399025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.804 [2024-12-13 23:55:50.399067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:19.804 [2024-12-13 23:55:50.399079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.276 ms 00:22:19.804 [2024-12-13 23:55:50.399097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.804 [2024-12-13 23:55:50.424605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.804 [2024-12-13 23:55:50.424652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:19.804 [2024-12-13 23:55:50.424665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.463 ms 00:22:19.804 [2024-12-13 23:55:50.424673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.804 [2024-12-13 23:55:50.449461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.804 [2024-12-13 23:55:50.449519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:19.804 [2024-12-13 23:55:50.449532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.700 ms 00:22:19.804 [2024-12-13 23:55:50.449541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.804 [2024-12-13 
23:55:50.449583] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:22:19.804 [2024-12-13 23:55:50.449601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open
00:22:19.804 [2024-12-13 23:55:50.449613 .. 23:55:50.450373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free (99 identical per-band entries collapsed)
00:22:19.805 [2024-12-13 23:55:50.450389] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:19.805 [2024-12-13 23:55:50.450398] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5ea95622-f05f-4568-a570-323b970f1ac5
00:22:19.806 [2024-12-13 23:55:50.450407] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632
00:22:19.806 [2024-12-13 23:55:50.450416] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 35520
00:22:19.806 [2024-12-13 23:55:50.450430] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 34560
00:22:19.806 [2024-12-13 23:55:50.450440] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0278
00:22:19.806 [2024-12-13 23:55:50.450447] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:22:19.806 [2024-12-13 23:55:50.450456] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:22:19.806 [2024-12-13 23:55:50.450464] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:22:19.806 [2024-12-13 23:55:50.450471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:22:19.806 [2024-12-13 23:55:50.450496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:22:19.806 [2024-12-13 23:55:50.450504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:19.806 [2024-12-13 23:55:50.450512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:22:19.806 [2024-12-13 23:55:50.450521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.922 ms
00:22:19.806 [2024-12-13 23:55:50.450529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:19.806 [2024-12-13 23:55:50.464096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:19.806 [2024-12-13 23:55:50.464140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:22:19.806 [2024-12-13 23:55:50.464151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.534 ms
00:22:19.806 [2024-12-13 23:55:50.464159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:19.806 [2024-12-13 23:55:50.464382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:19.806 [2024-12-13 23:55:50.464392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:22:19.806 [2024-12-13 23:55:50.464400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms
00:22:19.806 [2024-12-13 23:55:50.464408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:19.806 [2024-12-13 23:55:50.499790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:19.806 [2024-12-13 23:55:50.499838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:22:19.806 [2024-12-13 23:55:50.499847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:19.806 [2024-12-13 23:55:50.499855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:19.806 [2024-12-13 23:55:50.499906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:19.806 [2024-12-13 23:55:50.499914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:22:19.806 [2024-12-13 23:55:50.499921]
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.806 [2024-12-13 23:55:50.499928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.806 [2024-12-13 23:55:50.499990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.806 [2024-12-13 23:55:50.500000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:19.806 [2024-12-13 23:55:50.500007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.806 [2024-12-13 23:55:50.500014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.806 [2024-12-13 23:55:50.500028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.806 [2024-12-13 23:55:50.500035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:19.806 [2024-12-13 23:55:50.500042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.806 [2024-12-13 23:55:50.500049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.067 [2024-12-13 23:55:50.575335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.067 [2024-12-13 23:55:50.575383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:20.067 [2024-12-13 23:55:50.575394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.067 [2024-12-13 23:55:50.575401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.067 [2024-12-13 23:55:50.606899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.067 [2024-12-13 23:55:50.606940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:20.067 [2024-12-13 23:55:50.606951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.067 [2024-12-13 23:55:50.606958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.067 [2024-12-13 23:55:50.607017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.067 [2024-12-13 23:55:50.607032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:20.067 [2024-12-13 23:55:50.607040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.067 [2024-12-13 23:55:50.607048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.067 [2024-12-13 23:55:50.607088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.067 [2024-12-13 23:55:50.607098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:20.067 [2024-12-13 23:55:50.607106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.067 [2024-12-13 23:55:50.607114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.067 [2024-12-13 23:55:50.607208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.067 [2024-12-13 23:55:50.607218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:20.067 [2024-12-13 23:55:50.607229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.067 [2024-12-13 23:55:50.607237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.067 [2024-12-13 23:55:50.607265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.067 [2024-12-13 23:55:50.607274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock
00:22:20.067 [2024-12-13 23:55:50.607282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:20.067 [2024-12-13 23:55:50.607290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:20.067 [2024-12-13 23:55:50.607327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:20.067 [2024-12-13 23:55:50.607337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:22:20.067 [2024-12-13 23:55:50.607347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:20.067 [2024-12-13 23:55:50.607356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:20.067 [2024-12-13 23:55:50.607398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:20.067 [2024-12-13 23:55:50.607407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:22:20.067 [2024-12-13 23:55:50.607415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:20.067 [2024-12-13 23:55:50.607423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:20.067 [2024-12-13 23:55:50.607575] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 522.216 ms, result 0
00:22:21.010
00:22:21.010
00:22:21.010 23:55:51 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:22:23.559 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:22:23.559 23:55:53 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:22:23.559 23:55:53 -- ftl/restore.sh@85 -- # restore_kill
00:22:23.559 23:55:53 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:22:23.559 23:55:53 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:22:23.559 23:55:53 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:22:23.559 23:55:53 -- ftl/restore.sh@32 -- # killprocess 73172
00:22:23.559 23:55:53 -- common/autotest_common.sh@936 -- # '[' -z 73172 ']'
00:22:23.559 23:55:53 -- common/autotest_common.sh@940 -- # kill -0 73172
00:22:23.559 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (73172) - No such process
00:22:23.559 Process with pid 73172 is not found
00:22:23.559 Remove shared memory files
00:22:23.559 23:55:53 -- common/autotest_common.sh@963 -- # echo 'Process with pid 73172 is not found'
00:22:23.559 23:55:53 -- ftl/restore.sh@33 -- # remove_shm
00:22:23.559 23:55:53 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:22:23.559 23:55:53 -- ftl/common.sh@205 -- # rm -f rm -f
00:22:23.559 23:55:53 -- ftl/common.sh@206 -- # rm -f rm -f
00:22:23.559 23:55:53 -- ftl/common.sh@207 -- # rm -f rm -f
00:22:23.559 23:55:53 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:22:23.559 23:55:53 -- ftl/common.sh@209 -- # rm -f rm -f
00:22:23.559
00:22:23.559 real 4m33.087s
00:22:23.559 user 4m20.481s
00:22:23.559 sys 0m12.304s
00:22:23.559 23:55:53 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:22:23.559 ************************************
00:22:23.559 END TEST ftl_restore
00:22:23.559 ************************************
00:22:23.559 23:55:53 -- common/autotest_common.sh@10 -- # set +x
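ftl_restore passes on the strength of the md5 round trip above: the checksum recorded before the FTL device was shut down still matches after the restore ('testfile: OK'). The statistics dump above also lets the write amplification be checked by hand: WAF = total writes / user writes = 35520 / 34560 ≈ 1.0278. A minimal sketch of the checksum pattern the test relies on (paths and sizes are placeholders; the real sequence lives in test/ftl/restore.sh):

  dd if=/dev/urandom of=testfile bs=1M count=1024   # write a known 1 GiB payload
  md5sum testfile > testfile.md5                    # record its checksum
  # ...dirty-shutdown and restore the FTL bdev, then read the data back into testfile...
  md5sum -c testfile.md5                            # prints 'testfile: OK' iff the restore was faithful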
00:22:23.559 23:55:53 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0
00:22:23.559 23:55:53 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:22:23.559 23:55:53 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:22:23.559 23:55:53 -- common/autotest_common.sh@10 -- # set +x
00:22:23.559 ************************************
00:22:23.559 START TEST ftl_dirty_shutdown
00:22:23.559 ************************************
00:22:23.559 23:55:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0
00:22:23.559 * Looking for test storage...
00:22:23.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:22:23.559 23:55:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:22:23.559 23:55:53 -- common/autotest_common.sh@1690 -- # lcov --version
00:22:23.559 23:55:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:22:23.559 23:55:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:22:23.559 23:55:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:22:23.559 23:55:54 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:22:23.559 23:55:54 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:22:23.559 23:55:54 -- scripts/common.sh@335 -- # IFS=.-:
00:22:23.559 23:55:54 -- scripts/common.sh@335 -- # read -ra ver1
00:22:23.559 23:55:54 -- scripts/common.sh@336 -- # IFS=.-:
00:22:23.559 23:55:54 -- scripts/common.sh@336 -- # read -ra ver2
00:22:23.559 23:55:54 -- scripts/common.sh@337 -- # local 'op=<'
00:22:23.559 23:55:54 -- scripts/common.sh@339 -- # ver1_l=2
00:22:23.559 23:55:54 -- scripts/common.sh@340 -- # ver2_l=1
00:22:23.559 23:55:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:22:23.559 23:55:54 -- scripts/common.sh@343 -- # case "$op" in
00:22:23.559 23:55:54 -- scripts/common.sh@344 -- # : 1
00:22:23.559 23:55:54 -- scripts/common.sh@363 -- # (( v = 0 ))
00:22:23.559 23:55:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:22:23.559 23:55:54 -- scripts/common.sh@364 -- # decimal 1 00:22:23.559 23:55:54 -- scripts/common.sh@352 -- # local d=1 00:22:23.559 23:55:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:23.559 23:55:54 -- scripts/common.sh@354 -- # echo 1 00:22:23.559 23:55:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:23.559 23:55:54 -- scripts/common.sh@365 -- # decimal 2 00:22:23.559 23:55:54 -- scripts/common.sh@352 -- # local d=2 00:22:23.559 23:55:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:23.559 23:55:54 -- scripts/common.sh@354 -- # echo 2 00:22:23.559 23:55:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:23.559 23:55:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:23.559 23:55:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:23.559 23:55:54 -- scripts/common.sh@367 -- # return 0 00:22:23.559 23:55:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:23.559 23:55:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:23.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.559 --rc genhtml_branch_coverage=1 00:22:23.559 --rc genhtml_function_coverage=1 00:22:23.559 --rc genhtml_legend=1 00:22:23.559 --rc geninfo_all_blocks=1 00:22:23.559 --rc geninfo_unexecuted_blocks=1 00:22:23.559 00:22:23.559 ' 00:22:23.559 23:55:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:23.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.559 --rc genhtml_branch_coverage=1 00:22:23.559 --rc genhtml_function_coverage=1 00:22:23.559 --rc genhtml_legend=1 00:22:23.559 --rc geninfo_all_blocks=1 00:22:23.559 --rc geninfo_unexecuted_blocks=1 00:22:23.559 00:22:23.559 ' 00:22:23.559 23:55:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:23.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.559 --rc genhtml_branch_coverage=1 00:22:23.559 --rc genhtml_function_coverage=1 00:22:23.559 --rc genhtml_legend=1 00:22:23.559 --rc geninfo_all_blocks=1 00:22:23.559 --rc geninfo_unexecuted_blocks=1 00:22:23.559 00:22:23.559 ' 00:22:23.559 23:55:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:23.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.559 --rc genhtml_branch_coverage=1 00:22:23.559 --rc genhtml_function_coverage=1 00:22:23.559 --rc genhtml_legend=1 00:22:23.559 --rc geninfo_all_blocks=1 00:22:23.559 --rc geninfo_unexecuted_blocks=1 00:22:23.559 00:22:23.559 ' 00:22:23.559 23:55:54 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:23.559 23:55:54 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:23.559 23:55:54 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:23.559 23:55:54 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:23.559 23:55:54 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:22:23.559 23:55:54 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:23.559 23:55:54 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:23.559 23:55:54 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:23.559 23:55:54 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:23.559 23:55:54 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:23.559 23:55:54 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:23.559 23:55:54 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:23.559 23:55:54 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:23.559 23:55:54 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:23.559 23:55:54 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:23.559 23:55:54 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:23.559 23:55:54 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:23.559 23:55:54 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:23.559 23:55:54 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:23.559 23:55:54 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:23.559 23:55:54 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:23.559 23:55:54 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:23.559 23:55:54 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:23.559 23:55:54 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:23.559 23:55:54 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:23.559 23:55:54 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:23.559 23:55:54 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:23.559 23:55:54 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:23.559 23:55:54 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:23.559 23:55:54 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:23.559 23:55:54 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:23.559 23:55:54 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:23.559 23:55:54 -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:23.559 23:55:54 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0 00:22:23.559 23:55:54 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:23.559 23:55:54 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:23.559 23:55:54 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0 00:22:23.559 23:55:54 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:23.559 23:55:54 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:23.559 23:55:54 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:23.559 23:55:54 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:23.559 23:55:54 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:23.559 23:55:54 -- ftl/dirty_shutdown.sh@45 -- # svcpid=76126 00:22:23.559 23:55:54 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 76126 00:22:23.559 23:55:54 -- common/autotest_common.sh@829 -- # '[' -z 76126 ']' 00:22:23.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
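dirty_shutdown.sh takes its devices from the command line, as the trace above shows: getopts ':u:c:' parses '-c 0000:00:06.0' into nv_cache, the parsed options are shifted away, and the first remaining positional argument becomes the base device (0000:00:07.0). A minimal standalone sketch of that parsing pattern (it mirrors the traced logic rather than reproducing test/ftl/dirty_shutdown.sh; the generic 'shift $((OPTIND - 1))' stands in for the script's hard-coded 'shift 2'):

  #!/usr/bin/env bash
  nv_cache= uuid=
  while getopts ':u:c:' opt; do
    case $opt in
      c) nv_cache=$OPTARG ;;    # PCI address of the NV-cache NVMe device
      u) uuid=$OPTARG ;;        # optional UUID argument (not used in this run)
    esac
  done
  shift $((OPTIND - 1))         # drop parsed options
  device=$1                     # base bdev PCI address, e.g. 0000:00:07.0
  echo "cache=$nv_cache base=$device uuid=$uuid"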
00:22:23.559 23:55:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.559 23:55:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:23.559 23:55:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.559 23:55:54 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:23.559 23:55:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:23.559 23:55:54 -- common/autotest_common.sh@10 -- # set +x 00:22:23.560 [2024-12-13 23:55:54.131356] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:23.560 [2024-12-13 23:55:54.131452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76126 ] 00:22:23.560 [2024-12-13 23:55:54.274420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.820 [2024-12-13 23:55:54.488943] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:23.820 [2024-12-13 23:55:54.489160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.208 23:55:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:25.208 23:55:55 -- common/autotest_common.sh@862 -- # return 0 00:22:25.208 23:55:55 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:22:25.208 23:55:55 -- ftl/common.sh@54 -- # local name=nvme0 00:22:25.208 23:55:55 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:22:25.208 23:55:55 -- ftl/common.sh@56 -- # local size=103424 00:22:25.208 23:55:55 -- ftl/common.sh@59 -- # local base_bdev 00:22:25.208 23:55:55 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:22:25.208 23:55:55 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:25.208 23:55:55 -- ftl/common.sh@62 -- # local base_size 00:22:25.208 23:55:55 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:25.208 23:55:55 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:22:25.208 23:55:55 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:25.208 23:55:55 -- common/autotest_common.sh@1369 -- # local bs 00:22:25.208 23:55:55 -- common/autotest_common.sh@1370 -- # local nb 00:22:25.208 23:55:55 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:25.470 23:55:56 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:25.470 { 00:22:25.470 "name": "nvme0n1", 00:22:25.470 "aliases": [ 00:22:25.470 "e95bcc34-896f-4116-9130-f46527613415" 00:22:25.470 ], 00:22:25.470 "product_name": "NVMe disk", 00:22:25.470 "block_size": 4096, 00:22:25.470 "num_blocks": 1310720, 00:22:25.470 "uuid": "e95bcc34-896f-4116-9130-f46527613415", 00:22:25.470 "assigned_rate_limits": { 00:22:25.470 "rw_ios_per_sec": 0, 00:22:25.470 "rw_mbytes_per_sec": 0, 00:22:25.470 "r_mbytes_per_sec": 0, 00:22:25.470 "w_mbytes_per_sec": 0 00:22:25.470 }, 00:22:25.470 "claimed": true, 00:22:25.470 "claim_type": "read_many_write_one", 00:22:25.470 "zoned": false, 00:22:25.470 "supported_io_types": { 00:22:25.470 "read": true, 00:22:25.470 "write": true, 00:22:25.470 "unmap": true, 00:22:25.470 "write_zeroes": true, 00:22:25.470 "flush": true, 00:22:25.470 "reset": true, 00:22:25.470 
"compare": true, 00:22:25.470 "compare_and_write": false, 00:22:25.470 "abort": true, 00:22:25.470 "nvme_admin": true, 00:22:25.470 "nvme_io": true 00:22:25.470 }, 00:22:25.470 "driver_specific": { 00:22:25.470 "nvme": [ 00:22:25.470 { 00:22:25.470 "pci_address": "0000:00:07.0", 00:22:25.470 "trid": { 00:22:25.470 "trtype": "PCIe", 00:22:25.470 "traddr": "0000:00:07.0" 00:22:25.470 }, 00:22:25.470 "ctrlr_data": { 00:22:25.470 "cntlid": 0, 00:22:25.470 "vendor_id": "0x1b36", 00:22:25.470 "model_number": "QEMU NVMe Ctrl", 00:22:25.470 "serial_number": "12341", 00:22:25.470 "firmware_revision": "8.0.0", 00:22:25.470 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:25.470 "oacs": { 00:22:25.470 "security": 0, 00:22:25.470 "format": 1, 00:22:25.470 "firmware": 0, 00:22:25.470 "ns_manage": 1 00:22:25.470 }, 00:22:25.470 "multi_ctrlr": false, 00:22:25.470 "ana_reporting": false 00:22:25.470 }, 00:22:25.470 "vs": { 00:22:25.470 "nvme_version": "1.4" 00:22:25.470 }, 00:22:25.470 "ns_data": { 00:22:25.470 "id": 1, 00:22:25.470 "can_share": false 00:22:25.470 } 00:22:25.470 } 00:22:25.470 ], 00:22:25.470 "mp_policy": "active_passive" 00:22:25.470 } 00:22:25.470 } 00:22:25.470 ]' 00:22:25.470 23:55:56 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:25.470 23:55:56 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:25.470 23:55:56 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:25.470 23:55:56 -- common/autotest_common.sh@1373 -- # nb=1310720 00:22:25.470 23:55:56 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:22:25.470 23:55:56 -- common/autotest_common.sh@1377 -- # echo 5120 00:22:25.470 23:55:56 -- ftl/common.sh@63 -- # base_size=5120 00:22:25.470 23:55:56 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:25.470 23:55:56 -- ftl/common.sh@67 -- # clear_lvols 00:22:25.470 23:55:56 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:25.470 23:55:56 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:25.731 23:55:56 -- ftl/common.sh@28 -- # stores=bfa15716-9047-4df5-b9f9-851077ac5080 00:22:25.731 23:55:56 -- ftl/common.sh@29 -- # for lvs in $stores 00:22:25.731 23:55:56 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bfa15716-9047-4df5-b9f9-851077ac5080 00:22:25.993 23:55:56 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:26.254 23:55:56 -- ftl/common.sh@68 -- # lvs=69ac1b3b-5ea5-4573-915e-d4e0ff63eaac 00:22:26.254 23:55:56 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 69ac1b3b-5ea5-4573-915e-d4e0ff63eaac 00:22:26.254 23:55:56 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=cca559c4-4dc1-4ec6-933d-cfd18fcc090e 00:22:26.254 23:55:56 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:22:26.254 23:55:56 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 cca559c4-4dc1-4ec6-933d-cfd18fcc090e 00:22:26.254 23:55:56 -- ftl/common.sh@35 -- # local name=nvc0 00:22:26.254 23:55:56 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:22:26.254 23:55:56 -- ftl/common.sh@37 -- # local base_bdev=cca559c4-4dc1-4ec6-933d-cfd18fcc090e 00:22:26.254 23:55:56 -- ftl/common.sh@38 -- # local cache_size= 00:22:26.254 23:55:56 -- ftl/common.sh@41 -- # get_bdev_size cca559c4-4dc1-4ec6-933d-cfd18fcc090e 00:22:26.254 23:55:56 -- common/autotest_common.sh@1367 -- # local bdev_name=cca559c4-4dc1-4ec6-933d-cfd18fcc090e 00:22:26.254 23:55:56 
-- common/autotest_common.sh@1368 -- # local bdev_info 00:22:26.254 23:55:56 -- common/autotest_common.sh@1369 -- # local bs 00:22:26.254 23:55:56 -- common/autotest_common.sh@1370 -- # local nb 00:22:26.254 23:55:56 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cca559c4-4dc1-4ec6-933d-cfd18fcc090e 00:22:26.514 23:55:57 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:26.514 { 00:22:26.514 "name": "cca559c4-4dc1-4ec6-933d-cfd18fcc090e", 00:22:26.514 "aliases": [ 00:22:26.514 "lvs/nvme0n1p0" 00:22:26.514 ], 00:22:26.514 "product_name": "Logical Volume", 00:22:26.514 "block_size": 4096, 00:22:26.514 "num_blocks": 26476544, 00:22:26.514 "uuid": "cca559c4-4dc1-4ec6-933d-cfd18fcc090e", 00:22:26.514 "assigned_rate_limits": { 00:22:26.514 "rw_ios_per_sec": 0, 00:22:26.514 "rw_mbytes_per_sec": 0, 00:22:26.514 "r_mbytes_per_sec": 0, 00:22:26.514 "w_mbytes_per_sec": 0 00:22:26.514 }, 00:22:26.514 "claimed": false, 00:22:26.514 "zoned": false, 00:22:26.514 "supported_io_types": { 00:22:26.514 "read": true, 00:22:26.514 "write": true, 00:22:26.514 "unmap": true, 00:22:26.514 "write_zeroes": true, 00:22:26.514 "flush": false, 00:22:26.514 "reset": true, 00:22:26.514 "compare": false, 00:22:26.514 "compare_and_write": false, 00:22:26.514 "abort": false, 00:22:26.514 "nvme_admin": false, 00:22:26.514 "nvme_io": false 00:22:26.514 }, 00:22:26.514 "driver_specific": { 00:22:26.514 "lvol": { 00:22:26.514 "lvol_store_uuid": "69ac1b3b-5ea5-4573-915e-d4e0ff63eaac", 00:22:26.514 "base_bdev": "nvme0n1", 00:22:26.514 "thin_provision": true, 00:22:26.514 "snapshot": false, 00:22:26.514 "clone": false, 00:22:26.514 "esnap_clone": false 00:22:26.514 } 00:22:26.514 } 00:22:26.514 } 00:22:26.514 ]' 00:22:26.514 23:55:57 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:26.514 23:55:57 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:26.514 23:55:57 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:26.514 23:55:57 -- common/autotest_common.sh@1373 -- # nb=26476544 00:22:26.514 23:55:57 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:22:26.514 23:55:57 -- common/autotest_common.sh@1377 -- # echo 103424 00:22:26.514 23:55:57 -- ftl/common.sh@41 -- # local base_size=5171 00:22:26.514 23:55:57 -- ftl/common.sh@44 -- # local nvc_bdev 00:22:26.514 23:55:57 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:22:26.772 23:55:57 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:26.772 23:55:57 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:26.772 23:55:57 -- ftl/common.sh@48 -- # get_bdev_size cca559c4-4dc1-4ec6-933d-cfd18fcc090e 00:22:26.772 23:55:57 -- common/autotest_common.sh@1367 -- # local bdev_name=cca559c4-4dc1-4ec6-933d-cfd18fcc090e 00:22:26.772 23:55:57 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:26.772 23:55:57 -- common/autotest_common.sh@1369 -- # local bs 00:22:26.772 23:55:57 -- common/autotest_common.sh@1370 -- # local nb 00:22:26.772 23:55:57 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cca559c4-4dc1-4ec6-933d-cfd18fcc090e 00:22:27.030 23:55:57 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:27.030 { 00:22:27.030 "name": "cca559c4-4dc1-4ec6-933d-cfd18fcc090e", 00:22:27.030 "aliases": [ 00:22:27.030 "lvs/nvme0n1p0" 00:22:27.030 ], 00:22:27.030 "product_name": "Logical Volume", 00:22:27.030 "block_size": 4096, 00:22:27.030 "num_blocks": 26476544, 
00:22:27.030 "uuid": "cca559c4-4dc1-4ec6-933d-cfd18fcc090e", 00:22:27.030 "assigned_rate_limits": { 00:22:27.031 "rw_ios_per_sec": 0, 00:22:27.031 "rw_mbytes_per_sec": 0, 00:22:27.031 "r_mbytes_per_sec": 0, 00:22:27.031 "w_mbytes_per_sec": 0 00:22:27.031 }, 00:22:27.031 "claimed": false, 00:22:27.031 "zoned": false, 00:22:27.031 "supported_io_types": { 00:22:27.031 "read": true, 00:22:27.031 "write": true, 00:22:27.031 "unmap": true, 00:22:27.031 "write_zeroes": true, 00:22:27.031 "flush": false, 00:22:27.031 "reset": true, 00:22:27.031 "compare": false, 00:22:27.031 "compare_and_write": false, 00:22:27.031 "abort": false, 00:22:27.031 "nvme_admin": false, 00:22:27.031 "nvme_io": false 00:22:27.031 }, 00:22:27.031 "driver_specific": { 00:22:27.031 "lvol": { 00:22:27.031 "lvol_store_uuid": "69ac1b3b-5ea5-4573-915e-d4e0ff63eaac", 00:22:27.031 "base_bdev": "nvme0n1", 00:22:27.031 "thin_provision": true, 00:22:27.031 "snapshot": false, 00:22:27.031 "clone": false, 00:22:27.031 "esnap_clone": false 00:22:27.031 } 00:22:27.031 } 00:22:27.031 } 00:22:27.031 ]' 00:22:27.031 23:55:57 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:27.031 23:55:57 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:27.031 23:55:57 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:27.031 23:55:57 -- common/autotest_common.sh@1373 -- # nb=26476544 00:22:27.031 23:55:57 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:22:27.031 23:55:57 -- common/autotest_common.sh@1377 -- # echo 103424 00:22:27.031 23:55:57 -- ftl/common.sh@48 -- # cache_size=5171 00:22:27.031 23:55:57 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:27.289 23:55:57 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:27.289 23:55:57 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size cca559c4-4dc1-4ec6-933d-cfd18fcc090e 00:22:27.289 23:55:57 -- common/autotest_common.sh@1367 -- # local bdev_name=cca559c4-4dc1-4ec6-933d-cfd18fcc090e 00:22:27.289 23:55:57 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:27.289 23:55:57 -- common/autotest_common.sh@1369 -- # local bs 00:22:27.289 23:55:57 -- common/autotest_common.sh@1370 -- # local nb 00:22:27.289 23:55:57 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cca559c4-4dc1-4ec6-933d-cfd18fcc090e 00:22:27.550 23:55:58 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:27.550 { 00:22:27.550 "name": "cca559c4-4dc1-4ec6-933d-cfd18fcc090e", 00:22:27.550 "aliases": [ 00:22:27.550 "lvs/nvme0n1p0" 00:22:27.550 ], 00:22:27.550 "product_name": "Logical Volume", 00:22:27.550 "block_size": 4096, 00:22:27.550 "num_blocks": 26476544, 00:22:27.550 "uuid": "cca559c4-4dc1-4ec6-933d-cfd18fcc090e", 00:22:27.550 "assigned_rate_limits": { 00:22:27.550 "rw_ios_per_sec": 0, 00:22:27.550 "rw_mbytes_per_sec": 0, 00:22:27.550 "r_mbytes_per_sec": 0, 00:22:27.550 "w_mbytes_per_sec": 0 00:22:27.550 }, 00:22:27.550 "claimed": false, 00:22:27.550 "zoned": false, 00:22:27.550 "supported_io_types": { 00:22:27.550 "read": true, 00:22:27.550 "write": true, 00:22:27.550 "unmap": true, 00:22:27.550 "write_zeroes": true, 00:22:27.550 "flush": false, 00:22:27.550 "reset": true, 00:22:27.550 "compare": false, 00:22:27.550 "compare_and_write": false, 00:22:27.550 "abort": false, 00:22:27.550 "nvme_admin": false, 00:22:27.550 "nvme_io": false 00:22:27.550 }, 00:22:27.550 "driver_specific": { 00:22:27.550 "lvol": { 00:22:27.550 "lvol_store_uuid": 
"69ac1b3b-5ea5-4573-915e-d4e0ff63eaac", 00:22:27.550 "base_bdev": "nvme0n1", 00:22:27.550 "thin_provision": true, 00:22:27.550 "snapshot": false, 00:22:27.550 "clone": false, 00:22:27.550 "esnap_clone": false 00:22:27.550 } 00:22:27.550 } 00:22:27.550 } 00:22:27.550 ]' 00:22:27.550 23:55:58 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:27.550 23:55:58 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:27.550 23:55:58 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:27.550 23:55:58 -- common/autotest_common.sh@1373 -- # nb=26476544 00:22:27.550 23:55:58 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:22:27.550 23:55:58 -- common/autotest_common.sh@1377 -- # echo 103424 00:22:27.550 23:55:58 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:27.550 23:55:58 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d cca559c4-4dc1-4ec6-933d-cfd18fcc090e --l2p_dram_limit 10' 00:22:27.550 23:55:58 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:27.550 23:55:58 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:22:27.550 23:55:58 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:27.550 23:55:58 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d cca559c4-4dc1-4ec6-933d-cfd18fcc090e --l2p_dram_limit 10 -c nvc0n1p0 00:22:27.813 [2024-12-13 23:55:58.311420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.813 [2024-12-13 23:55:58.311509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:27.813 [2024-12-13 23:55:58.311531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:27.813 [2024-12-13 23:55:58.311544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.813 [2024-12-13 23:55:58.311629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.813 [2024-12-13 23:55:58.311641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:27.813 [2024-12-13 23:55:58.311651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:22:27.813 [2024-12-13 23:55:58.311661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.813 [2024-12-13 23:55:58.311687] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:27.813 [2024-12-13 23:55:58.312731] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:27.813 [2024-12-13 23:55:58.312765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.813 [2024-12-13 23:55:58.312775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:27.813 [2024-12-13 23:55:58.312788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.079 ms 00:22:27.813 [2024-12-13 23:55:58.312796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.813 [2024-12-13 23:55:58.312920] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5f2f7aca-291e-475a-a6a4-424021edf1a7 00:22:27.813 [2024-12-13 23:55:58.315289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.813 [2024-12-13 23:55:58.315350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:27.813 [2024-12-13 23:55:58.315364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.046 ms 00:22:27.813 [2024-12-13 23:55:58.315376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.813 [2024-12-13 23:55:58.328384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.813 [2024-12-13 23:55:58.328439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:27.813 [2024-12-13 23:55:58.328452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.935 ms 00:22:27.813 [2024-12-13 23:55:58.328463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.813 [2024-12-13 23:55:58.328602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.813 [2024-12-13 23:55:58.328617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:27.813 [2024-12-13 23:55:58.328629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:22:27.813 [2024-12-13 23:55:58.328647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.813 [2024-12-13 23:55:58.328710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.813 [2024-12-13 23:55:58.328726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:27.813 [2024-12-13 23:55:58.328737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:27.813 [2024-12-13 23:55:58.328748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.813 [2024-12-13 23:55:58.328778] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:27.813 [2024-12-13 23:55:58.333973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.813 [2024-12-13 23:55:58.334021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:27.813 [2024-12-13 23:55:58.334035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.201 ms 00:22:27.813 [2024-12-13 23:55:58.334044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.813 [2024-12-13 23:55:58.334096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.813 [2024-12-13 23:55:58.334105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:27.813 [2024-12-13 23:55:58.334116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:27.813 [2024-12-13 23:55:58.334124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.813 [2024-12-13 23:55:58.334166] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:27.813 [2024-12-13 23:55:58.334300] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:22:27.813 [2024-12-13 23:55:58.334322] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:27.813 [2024-12-13 23:55:58.334335] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:22:27.813 [2024-12-13 23:55:58.334349] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:27.813 [2024-12-13 23:55:58.334358] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:27.813 [2024-12-13 23:55:58.334372] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:27.813 [2024-12-13 
23:55:58.334393] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:27.813 [2024-12-13 23:55:58.334405] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:22:27.813 [2024-12-13 23:55:58.334413] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:22:27.813 [2024-12-13 23:55:58.334424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.813 [2024-12-13 23:55:58.334435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:27.813 [2024-12-13 23:55:58.334446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:22:27.813 [2024-12-13 23:55:58.334455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.813 [2024-12-13 23:55:58.334555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.813 [2024-12-13 23:55:58.334567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:27.813 [2024-12-13 23:55:58.334578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:22:27.813 [2024-12-13 23:55:58.334590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.813 [2024-12-13 23:55:58.334689] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:27.813 [2024-12-13 23:55:58.334702] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:27.813 [2024-12-13 23:55:58.334715] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:27.813 [2024-12-13 23:55:58.334724] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:27.813 [2024-12-13 23:55:58.334736] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:27.813 [2024-12-13 23:55:58.334743] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:27.813 [2024-12-13 23:55:58.334753] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:27.813 [2024-12-13 23:55:58.334760] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:27.813 [2024-12-13 23:55:58.334769] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:27.813 [2024-12-13 23:55:58.334776] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:27.813 [2024-12-13 23:55:58.334787] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:27.813 [2024-12-13 23:55:58.334797] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:27.813 [2024-12-13 23:55:58.334809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:27.813 [2024-12-13 23:55:58.334816] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:27.813 [2024-12-13 23:55:58.334826] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:22:27.813 [2024-12-13 23:55:58.334833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:27.813 [2024-12-13 23:55:58.334848] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:27.813 [2024-12-13 23:55:58.334859] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:22:27.813 [2024-12-13 23:55:58.334868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:27.813 [2024-12-13 23:55:58.334877] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:22:27.813 [2024-12-13 23:55:58.334886] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:22:27.813 [2024-12-13 23:55:58.334893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:22:27.813 [2024-12-13 23:55:58.334903] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:27.813 [2024-12-13 23:55:58.334910] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:27.813 [2024-12-13 23:55:58.334920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:27.813 [2024-12-13 23:55:58.334929] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:27.813 [2024-12-13 23:55:58.334938] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:22:27.813 [2024-12-13 23:55:58.334945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:27.813 [2024-12-13 23:55:58.334955] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:27.813 [2024-12-13 23:55:58.334961] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:27.813 [2024-12-13 23:55:58.334970] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:27.813 [2024-12-13 23:55:58.334977] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:27.813 [2024-12-13 23:55:58.334988] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:22:27.814 [2024-12-13 23:55:58.334997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:27.814 [2024-12-13 23:55:58.335006] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:27.814 [2024-12-13 23:55:58.335012] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:27.814 [2024-12-13 23:55:58.335020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:27.814 [2024-12-13 23:55:58.335027] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:27.814 [2024-12-13 23:55:58.335037] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:22:27.814 [2024-12-13 23:55:58.335043] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:27.814 [2024-12-13 23:55:58.335051] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:27.814 [2024-12-13 23:55:58.335061] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:27.814 [2024-12-13 23:55:58.335071] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:27.814 [2024-12-13 23:55:58.335078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:27.814 [2024-12-13 23:55:58.335091] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:27.814 [2024-12-13 23:55:58.335097] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:27.814 [2024-12-13 23:55:58.335106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:27.814 [2024-12-13 23:55:58.335112] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:27.814 [2024-12-13 23:55:58.335123] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:27.814 [2024-12-13 23:55:58.335131] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:27.814 [2024-12-13 23:55:58.335142] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:27.814 [2024-12-13 23:55:58.335157] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:27.814 [2024-12-13 23:55:58.335168] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:27.814 [2024-12-13 23:55:58.335175] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:22:27.814 [2024-12-13 23:55:58.335185] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:22:27.814 [2024-12-13 23:55:58.335195] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:22:27.814 [2024-12-13 23:55:58.335205] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:22:27.814 [2024-12-13 23:55:58.335212] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:22:27.814 [2024-12-13 23:55:58.335222] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:22:27.814 [2024-12-13 23:55:58.335229] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:22:27.814 [2024-12-13 23:55:58.335238] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:22:27.814 [2024-12-13 23:55:58.335245] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:22:27.814 [2024-12-13 23:55:58.335257] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:22:27.814 [2024-12-13 23:55:58.335265] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:22:27.814 [2024-12-13 23:55:58.335279] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:22:27.814 [2024-12-13 23:55:58.335286] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:27.814 [2024-12-13 23:55:58.335298] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:27.814 [2024-12-13 23:55:58.335307] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:27.814 [2024-12-13 23:55:58.335316] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:27.814 [2024-12-13 23:55:58.335324] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:27.814 [2024-12-13 23:55:58.335335] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:27.814 [2024-12-13 23:55:58.335342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.814 [2024-12-13 23:55:58.335353] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:27.814 [2024-12-13 23:55:58.335361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:22:27.814 [2024-12-13 23:55:58.335371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.814 [2024-12-13 23:55:58.357877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.814 [2024-12-13 23:55:58.358121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:27.814 [2024-12-13 23:55:58.358143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.459 ms 00:22:27.814 [2024-12-13 23:55:58.358155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.814 [2024-12-13 23:55:58.358266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.814 [2024-12-13 23:55:58.358281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:27.814 [2024-12-13 23:55:58.358295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:27.814 [2024-12-13 23:55:58.358305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.814 [2024-12-13 23:55:58.398611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.814 [2024-12-13 23:55:58.398665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:27.814 [2024-12-13 23:55:58.398679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.249 ms 00:22:27.814 [2024-12-13 23:55:58.398690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.814 [2024-12-13 23:55:58.398734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.814 [2024-12-13 23:55:58.398745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:27.814 [2024-12-13 23:55:58.398754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:27.814 [2024-12-13 23:55:58.398767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.814 [2024-12-13 23:55:58.399535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.814 [2024-12-13 23:55:58.399582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:27.814 [2024-12-13 23:55:58.399596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:22:27.814 [2024-12-13 23:55:58.399608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.814 [2024-12-13 23:55:58.399763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.814 [2024-12-13 23:55:58.399781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:27.814 [2024-12-13 23:55:58.399792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:22:27.814 [2024-12-13 23:55:58.399804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.814 [2024-12-13 23:55:58.422033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.814 [2024-12-13 23:55:58.422084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:27.814 [2024-12-13 23:55:58.422096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.173 ms 00:22:27.814 [2024-12-13 23:55:58.422107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.814 [2024-12-13 23:55:58.438567] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 
(of 10) MiB 00:22:27.814 [2024-12-13 23:55:58.443721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.814 [2024-12-13 23:55:58.443766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:27.814 [2024-12-13 23:55:58.443782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.508 ms 00:22:27.814 [2024-12-13 23:55:58.443790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.076 [2024-12-13 23:55:58.549847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.076 [2024-12-13 23:55:58.549905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:28.076 [2024-12-13 23:55:58.549924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.998 ms 00:22:28.076 [2024-12-13 23:55:58.549933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.076 [2024-12-13 23:55:58.549999] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:22:28.076 [2024-12-13 23:55:58.550012] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:22:32.286 [2024-12-13 23:56:02.506279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.286 [2024-12-13 23:56:02.506595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:32.286 [2024-12-13 23:56:02.506632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3956.261 ms 00:22:32.286 [2024-12-13 23:56:02.506643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.286 [2024-12-13 23:56:02.506885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.286 [2024-12-13 23:56:02.506900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:32.286 [2024-12-13 23:56:02.506918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:22:32.286 [2024-12-13 23:56:02.506929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.286 [2024-12-13 23:56:02.534865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.286 [2024-12-13 23:56:02.534924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:32.286 [2024-12-13 23:56:02.534942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.870 ms 00:22:32.286 [2024-12-13 23:56:02.534953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.286 [2024-12-13 23:56:02.561537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.286 [2024-12-13 23:56:02.561755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:32.286 [2024-12-13 23:56:02.561789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.521 ms 00:22:32.286 [2024-12-13 23:56:02.561798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.286 [2024-12-13 23:56:02.562382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.286 [2024-12-13 23:56:02.562406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:32.287 [2024-12-13 23:56:02.562421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:22:32.287 [2024-12-13 23:56:02.562429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.287 [2024-12-13 
23:56:02.640590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.287 [2024-12-13 23:56:02.640804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:32.287 [2024-12-13 23:56:02.640835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.070 ms 00:22:32.287 [2024-12-13 23:56:02.640845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.287 [2024-12-13 23:56:02.670568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.287 [2024-12-13 23:56:02.670623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:32.287 [2024-12-13 23:56:02.670640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.628 ms 00:22:32.287 [2024-12-13 23:56:02.670648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.287 [2024-12-13 23:56:02.672350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.287 [2024-12-13 23:56:02.672572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:22:32.287 [2024-12-13 23:56:02.672602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.641 ms 00:22:32.287 [2024-12-13 23:56:02.672611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.287 [2024-12-13 23:56:02.700328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.287 [2024-12-13 23:56:02.700550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:32.287 [2024-12-13 23:56:02.700579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.644 ms 00:22:32.287 [2024-12-13 23:56:02.700588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.287 [2024-12-13 23:56:02.700673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.287 [2024-12-13 23:56:02.700684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:32.287 [2024-12-13 23:56:02.700697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:32.287 [2024-12-13 23:56:02.700706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.287 [2024-12-13 23:56:02.700829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.287 [2024-12-13 23:56:02.700842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:32.287 [2024-12-13 23:56:02.700855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:22:32.287 [2024-12-13 23:56:02.700864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.287 [2024-12-13 23:56:02.702279] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4390.288 ms, result 0 00:22:32.287 { 00:22:32.287 "name": "ftl0", 00:22:32.287 "uuid": "5f2f7aca-291e-475a-a6a4-424021edf1a7" 00:22:32.287 } 00:22:32.287 23:56:02 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:32.287 23:56:02 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:32.287 23:56:02 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:32.287 23:56:02 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:32.287 23:56:02 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:32.548 /dev/nbd0 00:22:32.548 23:56:03 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:32.548 
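At this point "FTL startup" has finished (duration = 4390.288 ms, result 0) and ftl0 exists with UUID 5f2f7aca-291e-475a-a6a4-424021edf1a7, so the test exposes it as a kernel block device over NBD for the dd-based workload that follows. Condensed from the trace into a standalone sketch (device names and paths as in the log; the polling loop is a simplified stand-in for waitfornbd):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # FTL bdev on the thin lvol, with nvc0n1p0 as the write-buffer cache and a
    # 10 MiB L2P DRAM limit (hence "l2p maximum resident size is: 9 (of 10) MiB").
    "$rpc" -t 240 bdev_ftl_create -b ftl0 -d cca559c4-4dc1-4ec6-933d-cfd18fcc090e \
        --l2p_dram_limit 10 -c nvc0n1p0

    # Export ftl0 through the kernel NBD driver so ordinary block tools can drive it.
    modprobe nbd
    "$rpc" nbd_start_disk ftl0 /dev/nbd0

    # Simplified wait-for-device loop; the real waitfornbd also issues a probe read.
    while ! grep -q -w nbd0 /proc/partitions; do sleep 0.1; done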
23:56:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:32.548 23:56:03 -- common/autotest_common.sh@867 -- # local i 00:22:32.548 23:56:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:32.548 23:56:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:32.548 23:56:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:32.548 23:56:03 -- common/autotest_common.sh@871 -- # break 00:22:32.548 23:56:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:32.548 23:56:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:32.548 23:56:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:32.548 1+0 records in 00:22:32.548 1+0 records out 00:22:32.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342222 s, 12.0 MB/s 00:22:32.548 23:56:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:32.548 23:56:03 -- common/autotest_common.sh@884 -- # size=4096 00:22:32.548 23:56:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:32.548 23:56:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:32.548 23:56:03 -- common/autotest_common.sh@887 -- # return 0 00:22:32.548 23:56:03 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:32.548 [2024-12-13 23:56:03.170249] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:32.548 [2024-12-13 23:56:03.170332] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76283 ] 00:22:32.809 [2024-12-13 23:56:03.313415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.809 [2024-12-13 23:56:03.489053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.195  [2024-12-13T23:56:05.869Z] Copying: 195/1024 [MB] (195 MBps) [2024-12-13T23:56:06.810Z] Copying: 391/1024 [MB] (195 MBps) [2024-12-13T23:56:07.745Z] Copying: 601/1024 [MB] (210 MBps) [2024-12-13T23:56:08.731Z] Copying: 850/1024 [MB] (249 MBps) [2024-12-13T23:56:09.301Z] Copying: 1024/1024 [MB] (average 217 MBps) 00:22:38.569 00:22:38.569 23:56:09 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:40.481 23:56:11 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:40.481 [2024-12-13 23:56:11.065176] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
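The first spdk_dd pass filled testfile with 1 GiB of random data (262144 blocks x 4096 B) at an average of 217 MBps, and md5sum recorded its checksum; the second pass, whose startup banner appears above, now writes that file onto /dev/nbd0 with O_DIRECT. In outline (the post-restart read-back and checksum comparison is an assumption about the rest of the dirty-shutdown test, not shown in this excerpt):

    dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile

    # 262144 * 4096 B = 1 GiB of reference data.
    "$dd_bin" -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of="$testfile" \
        --bs=4096 --count=262144
    md5_before=$(md5sum "$testfile" | cut -d' ' -f1)

    # Write the pattern through NBD onto the FTL device, bypassing the page cache.
    "$dd_bin" -m 0x2 -r /var/tmp/spdk_dd.sock --if="$testfile" --of=/dev/nbd0 \
        --bs=4096 --count=262144 --oflag=direct

    # Assumed later step: after the dirty shutdown and FTL restart, dd the same
    # range back out and compare its md5sum against "$md5_before".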
00:22:40.481 [2024-12-13 23:56:11.065254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76371 ] 00:22:40.481 [2024-12-13 23:56:11.206380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.739 [2024-12-13 23:56:11.344148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.111  [2024-12-13T23:56:13.776Z] Copying: 34/1024 [MB] (34 MBps) [2024-12-13T23:56:14.710Z] Copying: 62/1024 [MB] (28 MBps) [2024-12-13T23:56:15.642Z] Copying: 94/1024 [MB] (31 MBps) [2024-12-13T23:56:16.575Z] Copying: 129/1024 [MB] (35 MBps) [2024-12-13T23:56:17.948Z] Copying: 165/1024 [MB] (35 MBps) [2024-12-13T23:56:18.881Z] Copying: 199/1024 [MB] (33 MBps) [2024-12-13T23:56:19.813Z] Copying: 232/1024 [MB] (33 MBps) [2024-12-13T23:56:20.753Z] Copying: 268/1024 [MB] (35 MBps) [2024-12-13T23:56:21.696Z] Copying: 294/1024 [MB] (25 MBps) [2024-12-13T23:56:22.630Z] Copying: 311384/1048576 [kB] (10152 kBps) [2024-12-13T23:56:23.562Z] Copying: 333/1024 [MB] (29 MBps) [2024-12-13T23:56:24.935Z] Copying: 367/1024 [MB] (33 MBps) [2024-12-13T23:56:25.869Z] Copying: 401/1024 [MB] (34 MBps) [2024-12-13T23:56:26.803Z] Copying: 437/1024 [MB] (36 MBps) [2024-12-13T23:56:27.737Z] Copying: 473/1024 [MB] (35 MBps) [2024-12-13T23:56:28.671Z] Copying: 509/1024 [MB] (35 MBps) [2024-12-13T23:56:29.604Z] Copying: 545/1024 [MB] (36 MBps) [2024-12-13T23:56:30.537Z] Copying: 581/1024 [MB] (36 MBps) [2024-12-13T23:56:31.910Z] Copying: 616/1024 [MB] (35 MBps) [2024-12-13T23:56:32.843Z] Copying: 652/1024 [MB] (35 MBps) [2024-12-13T23:56:33.777Z] Copying: 688/1024 [MB] (35 MBps) [2024-12-13T23:56:34.710Z] Copying: 713/1024 [MB] (24 MBps) [2024-12-13T23:56:35.670Z] Copying: 748/1024 [MB] (34 MBps) [2024-12-13T23:56:36.603Z] Copying: 781/1024 [MB] (33 MBps) [2024-12-13T23:56:37.535Z] Copying: 815/1024 [MB] (34 MBps) [2024-12-13T23:56:38.909Z] Copying: 845/1024 [MB] (30 MBps) [2024-12-13T23:56:39.842Z] Copying: 879/1024 [MB] (34 MBps) [2024-12-13T23:56:40.776Z] Copying: 915/1024 [MB] (35 MBps) [2024-12-13T23:56:41.711Z] Copying: 950/1024 [MB] (35 MBps) [2024-12-13T23:56:42.646Z] Copying: 986/1024 [MB] (35 MBps) [2024-12-13T23:56:42.646Z] Copying: 1022/1024 [MB] (35 MBps) [2024-12-13T23:56:43.214Z] Copying: 1024/1024 [MB] (average 32 MBps) 00:23:12.482 00:23:12.482 23:56:43 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:12.482 23:56:43 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:12.742 23:56:43 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:13.005 [2024-12-13 23:56:43.559336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.005 [2024-12-13 23:56:43.559386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:13.005 [2024-12-13 23:56:43.559400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:13.005 [2024-12-13 23:56:43.559408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.005 [2024-12-13 23:56:43.559427] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:13.005 [2024-12-13 23:56:43.561675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.005 [2024-12-13 23:56:43.561703] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:13.005 [2024-12-13 23:56:43.561713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.232 ms 00:23:13.005 [2024-12-13 23:56:43.561720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.005 [2024-12-13 23:56:43.564261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.005 [2024-12-13 23:56:43.564289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:13.005 [2024-12-13 23:56:43.564304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.519 ms 00:23:13.005 [2024-12-13 23:56:43.564310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.005 [2024-12-13 23:56:43.578769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.005 [2024-12-13 23:56:43.578909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:13.005 [2024-12-13 23:56:43.578928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.442 ms 00:23:13.005 [2024-12-13 23:56:43.578934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.005 [2024-12-13 23:56:43.583574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.005 [2024-12-13 23:56:43.583598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:23:13.005 [2024-12-13 23:56:43.583608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.592 ms 00:23:13.005 [2024-12-13 23:56:43.583617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.005 [2024-12-13 23:56:43.603262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.005 [2024-12-13 23:56:43.603296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:13.005 [2024-12-13 23:56:43.603306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.586 ms 00:23:13.005 [2024-12-13 23:56:43.603312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.005 [2024-12-13 23:56:43.617298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.005 [2024-12-13 23:56:43.617326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:13.005 [2024-12-13 23:56:43.617337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.953 ms 00:23:13.005 [2024-12-13 23:56:43.617344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.005 [2024-12-13 23:56:43.617460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.005 [2024-12-13 23:56:43.617469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:13.005 [2024-12-13 23:56:43.617478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:23:13.005 [2024-12-13 23:56:43.617496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.005 [2024-12-13 23:56:43.636145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.005 [2024-12-13 23:56:43.636169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:13.005 [2024-12-13 23:56:43.636178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.631 ms 00:23:13.005 [2024-12-13 23:56:43.636184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.005 [2024-12-13 23:56:43.654548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:13.005 [2024-12-13 23:56:43.654572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:13.005 [2024-12-13 23:56:43.654582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.332 ms 00:23:13.005 [2024-12-13 23:56:43.654588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.005 [2024-12-13 23:56:43.672781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.005 [2024-12-13 23:56:43.672806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:13.005 [2024-12-13 23:56:43.672816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.162 ms 00:23:13.005 [2024-12-13 23:56:43.672821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.005 [2024-12-13 23:56:43.690766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.005 [2024-12-13 23:56:43.690790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:13.005 [2024-12-13 23:56:43.690799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.884 ms 00:23:13.005 [2024-12-13 23:56:43.690805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.005 [2024-12-13 23:56:43.690836] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:13.005 [2024-12-13 23:56:43.690848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:13.005 [2024-12-13 23:56:43.690858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:13.005 [2024-12-13 23:56:43.690864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:13.005 [2024-12-13 23:56:43.690872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:13.005 [2024-12-13 23:56:43.690878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:13.005 [2024-12-13 23:56:43.690885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:13.005 [2024-12-13 23:56:43.690891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:13.005 [2024-12-13 23:56:43.690899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:13.005 [2024-12-13 23:56:43.690905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:13.005 [2024-12-13 23:56:43.690913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:13.005 [2024-12-13 23:56:43.690919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:13.005 [2024-12-13 23:56:43.690926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:13.005 [2024-12-13 23:56:43.690932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:13.005 [2024-12-13 23:56:43.690940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:13.005 [2024-12-13 23:56:43.690945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:13.005 [2024-12-13 23:56:43.690954] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:13.005 [2024-12-13 23:56:43.690961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:13.005 [2024-12-13 23:56:43.690968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:13.005 [2024-12-13 23:56:43.690974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:13.005 [2024-12-13 23:56:43.690983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:13.005 [2024-12-13 23:56:43.690989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:13.005 [2024-12-13 23:56:43.690997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:13.005 [2024-12-13 23:56:43.691002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 
[2024-12-13 23:56:43.691129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 
state: free 00:23:13.006 [2024-12-13 23:56:43.691305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 
0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:13.006 [2024-12-13 23:56:43.691562] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:13.006 [2024-12-13 23:56:43.691570] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5f2f7aca-291e-475a-a6a4-424021edf1a7 00:23:13.006 [2024-12-13 23:56:43.691578] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:13.006 [2024-12-13 23:56:43.691585] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:13.006 [2024-12-13 23:56:43.691594] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:13.006 [2024-12-13 23:56:43.691602] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:13.006 [2024-12-13 23:56:43.691608] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:13.006 [2024-12-13 23:56:43.691616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:13.006 [2024-12-13 23:56:43.691622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:13.006 [2024-12-13 23:56:43.691628] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:13.006 [2024-12-13 23:56:43.691633] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:13.006 [2024-12-13 23:56:43.691641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.006 [2024-12-13 23:56:43.691647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:13.006 [2024-12-13 23:56:43.691655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:23:13.006 [2024-12-13 23:56:43.691660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.006 [2024-12-13 23:56:43.702168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.006 [2024-12-13 23:56:43.702192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:13.006 [2024-12-13 23:56:43.702201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.480 ms 00:23:13.007 [2024-12-13 
23:56:43.702207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.007 [2024-12-13 23:56:43.702366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.007 [2024-12-13 23:56:43.702374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:13.007 [2024-12-13 23:56:43.702382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:23:13.007 [2024-12-13 23:56:43.702388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.268 [2024-12-13 23:56:43.739928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:13.268 [2024-12-13 23:56:43.739956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:13.268 [2024-12-13 23:56:43.739966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:13.268 [2024-12-13 23:56:43.739973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.268 [2024-12-13 23:56:43.740028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:13.268 [2024-12-13 23:56:43.740035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:13.268 [2024-12-13 23:56:43.740044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:13.268 [2024-12-13 23:56:43.740049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.268 [2024-12-13 23:56:43.740107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:13.268 [2024-12-13 23:56:43.740116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:13.268 [2024-12-13 23:56:43.740124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:13.268 [2024-12-13 23:56:43.740130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.268 [2024-12-13 23:56:43.740149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:13.268 [2024-12-13 23:56:43.740156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:13.268 [2024-12-13 23:56:43.740164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:13.268 [2024-12-13 23:56:43.740170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.268 [2024-12-13 23:56:43.802503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:13.268 [2024-12-13 23:56:43.802641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:13.268 [2024-12-13 23:56:43.802658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:13.268 [2024-12-13 23:56:43.802666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.268 [2024-12-13 23:56:43.826512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:13.268 [2024-12-13 23:56:43.826539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:13.268 [2024-12-13 23:56:43.826550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:13.268 [2024-12-13 23:56:43.826556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.268 [2024-12-13 23:56:43.826622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:13.268 [2024-12-13 23:56:43.826630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:13.268 [2024-12-13 23:56:43.826639] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:13.268 [2024-12-13 23:56:43.826645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:13.268 [2024-12-13 23:56:43.826683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:13.268 [2024-12-13 23:56:43.826691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:23:13.268 [2024-12-13 23:56:43.826699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:13.268 [2024-12-13 23:56:43.826705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:13.268 [2024-12-13 23:56:43.826782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:13.268 [2024-12-13 23:56:43.826792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:23:13.268 [2024-12-13 23:56:43.826799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:13.268 [2024-12-13 23:56:43.826805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:13.268 [2024-12-13 23:56:43.826834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:13.268 [2024-12-13 23:56:43.826841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:23:13.268 [2024-12-13 23:56:43.826849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:13.268 [2024-12-13 23:56:43.826855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:13.268 [2024-12-13 23:56:43.826889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:13.268 [2024-12-13 23:56:43.826898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:23:13.268 [2024-12-13 23:56:43.826906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:13.268 [2024-12-13 23:56:43.826912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:13.268 [2024-12-13 23:56:43.826953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:13.268 [2024-12-13 23:56:43.826961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:23:13.268 [2024-12-13 23:56:43.826969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:13.268 [2024-12-13 23:56:43.826974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:13.268 [2024-12-13 23:56:43.827097] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 267.722 ms, result 0
00:23:13.268 true
00:23:13.268 23:56:43 -- ftl/dirty_shutdown.sh@83 -- # kill -9 76126
00:23:13.268 23:56:43 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid76126
00:23:13.268 23:56:43 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
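Annotation: the dirty_shutdown.sh@83-@88 commands above are the core of the dirty-shutdown scenario this job exercises. The spdk_tgt process is killed with SIGKILL so the FTL device has no chance to shut down cleanly, a reference file is filled with random data, and the same data is then written through the FTL bdev, forcing it to start up from a dirty state. The sizes are consistent: --bs=4096 with --count=262144 is 262144 x 4096 B = 1024 MiB, which matches the "Copying: 1024/1024 [MB]" progress that follows. The WAF figure in the statistics dumps is total media writes divided by user writes: the dump before this shutdown reports total writes: 960 with user writes: 0, hence WAF: inf, while the dump after the data run later in this log reports 103104 / 102144, which is approximately 1.0094.

A minimal sketch of the same sequence, assuming only that SPDK is built under $SPDK_DIR and that ftl.json describes the ftl0 bdev; the pid and paths are taken from the log lines above and are illustrative outside this job:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk   # assumed checkout/build location
  TGT_PID=76126                           # pid of the running spdk_tgt (from the log)

  # Kill the target hard so FTL is left in a dirty state on media.
  kill -9 "$TGT_PID"
  rm -f "/dev/shm/spdk_tgt_trace.pid$TGT_PID"

  # Write 1 GiB (262144 blocks x 4096 B) of random reference data to a plain file.
  "$SPDK_DIR/build/bin/spdk_dd" --if=/dev/urandom \
      --of="$SPDK_DIR/test/ftl/testfile2" --bs=4096 --count=262144

  # Replay the file onto the FTL bdev at a 262144-block offset; spdk_dd brings
  # the bdev up from the JSON config, which triggers dirty-shutdown recovery.
  "$SPDK_DIR/build/bin/spdk_dd" --if="$SPDK_DIR/test/ftl/testfile2" \
      --ob=ftl0 --count=262144 --seek=262144 \
      --json="$SPDK_DIR/test/ftl/config/ftl.json"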
00:23:13.268 [2024-12-13 23:56:43.910434] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:23:13.268 [2024-12-13 23:56:43.910575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76720 ]
00:23:13.529 [2024-12-13 23:56:44.058887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:13.529 [2024-12-13 23:56:44.240444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:23:14.916  [2024-12-13T23:56:46.588Z] Copying: 254/1024 [MB] (254 MBps) [2024-12-13T23:56:47.529Z] Copying: 512/1024 [MB] (257 MBps) [2024-12-13T23:56:48.471Z] Copying: 766/1024 [MB] (254 MBps) [2024-12-13T23:56:48.730Z] Copying: 1012/1024 [MB] (246 MBps) [2024-12-13T23:56:49.300Z] Copying: 1024/1024 [MB] (average 252 MBps)
00:23:18.568
00:23:18.568 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 76126 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
00:23:18.568 23:56:49 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:23:18.568 [2024-12-13 23:56:49.234056] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:23:18.568 [2024-12-13 23:56:49.234308] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76775 ]
00:23:18.829 [2024-12-13 23:56:49.381678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:18.829 [2024-12-13 23:56:49.551829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:23:19.090 [2024-12-13 23:56:49.780470] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:19.090 [2024-12-13 23:56:49.780534] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:19.350 [2024-12-13 23:56:49.840744] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore
00:23:19.350 [2024-12-13 23:56:49.841051] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:23:19.350 [2024-12-13 23:56:49.841359] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:23:19.613 [2024-12-13 23:56:50.190790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:19.613 [2024-12-13 23:56:50.190826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:23:19.613 [2024-12-13 23:56:50.190837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:23:19.613 [2024-12-13 23:56:50.190844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:19.613 [2024-12-13 23:56:50.190879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:19.613 [2024-12-13 23:56:50.190887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:23:19.613 [2024-12-13 23:56:50.190895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms
00:23:19.613 [2024-12-13 23:56:50.190901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:19.613 [2024-12-13 23:56:50.190914] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:23:19.613 [2024-12-13 23:56:50.191463] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:19.613 [2024-12-13 23:56:50.191478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.613 [2024-12-13 23:56:50.191502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:19.613 [2024-12-13 23:56:50.191509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:23:19.613 [2024-12-13 23:56:50.191514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.613 [2024-12-13 23:56:50.192807] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:19.613 [2024-12-13 23:56:50.203408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.613 [2024-12-13 23:56:50.203601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:19.613 [2024-12-13 23:56:50.203617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.602 ms 00:23:19.613 [2024-12-13 23:56:50.203623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.613 [2024-12-13 23:56:50.203742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.613 [2024-12-13 23:56:50.203760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:19.613 [2024-12-13 23:56:50.203767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:19.613 [2024-12-13 23:56:50.203773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.613 [2024-12-13 23:56:50.210051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.613 [2024-12-13 23:56:50.210076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:19.613 [2024-12-13 23:56:50.210084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.231 ms 00:23:19.613 [2024-12-13 23:56:50.210090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.613 [2024-12-13 23:56:50.210164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.613 [2024-12-13 23:56:50.210171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:19.613 [2024-12-13 23:56:50.210178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:19.613 [2024-12-13 23:56:50.210184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.613 [2024-12-13 23:56:50.210214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.613 [2024-12-13 23:56:50.210221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:19.613 [2024-12-13 23:56:50.210228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:19.613 [2024-12-13 23:56:50.210234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.613 [2024-12-13 23:56:50.210253] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:19.613 [2024-12-13 23:56:50.213400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.613 [2024-12-13 23:56:50.213422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:19.613 [2024-12-13 23:56:50.213429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.156 ms 00:23:19.613 [2024-12-13 23:56:50.213437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.613 [2024-12-13 
23:56:50.213470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.613 [2024-12-13 23:56:50.213477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:19.613 [2024-12-13 23:56:50.213495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:19.613 [2024-12-13 23:56:50.213501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.613 [2024-12-13 23:56:50.213515] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:19.613 [2024-12-13 23:56:50.213586] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:23:19.613 [2024-12-13 23:56:50.213614] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:19.613 [2024-12-13 23:56:50.213628] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:23:19.613 [2024-12-13 23:56:50.213688] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:19.613 [2024-12-13 23:56:50.213695] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:19.613 [2024-12-13 23:56:50.213704] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:19.613 [2024-12-13 23:56:50.213712] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:19.613 [2024-12-13 23:56:50.213719] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:19.613 [2024-12-13 23:56:50.213725] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:19.613 [2024-12-13 23:56:50.213731] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:19.613 [2024-12-13 23:56:50.213736] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:19.613 [2024-12-13 23:56:50.213742] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:19.613 [2024-12-13 23:56:50.213749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.613 [2024-12-13 23:56:50.213755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:19.613 [2024-12-13 23:56:50.213761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:23:19.613 [2024-12-13 23:56:50.213766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.613 [2024-12-13 23:56:50.213811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.613 [2024-12-13 23:56:50.213818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:19.613 [2024-12-13 23:56:50.213824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:19.613 [2024-12-13 23:56:50.213829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.613 [2024-12-13 23:56:50.213887] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:19.613 [2024-12-13 23:56:50.213896] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:19.613 [2024-12-13 23:56:50.213905] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:19.613 [2024-12-13 23:56:50.213911] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:23:19.613 [2024-12-13 23:56:50.213917] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:19.613 [2024-12-13 23:56:50.213924] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:19.613 [2024-12-13 23:56:50.213929] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:19.613 [2024-12-13 23:56:50.213935] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:19.613 [2024-12-13 23:56:50.213940] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:19.613 [2024-12-13 23:56:50.213945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:19.613 [2024-12-13 23:56:50.213955] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:19.613 [2024-12-13 23:56:50.213960] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:19.613 [2024-12-13 23:56:50.213971] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:19.614 [2024-12-13 23:56:50.213976] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:19.614 [2024-12-13 23:56:50.213981] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:19.614 [2024-12-13 23:56:50.213986] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.614 [2024-12-13 23:56:50.213992] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:19.614 [2024-12-13 23:56:50.213997] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:19.614 [2024-12-13 23:56:50.214002] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.614 [2024-12-13 23:56:50.214007] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:19.614 [2024-12-13 23:56:50.214012] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:19.614 [2024-12-13 23:56:50.214017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:19.614 [2024-12-13 23:56:50.214022] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:19.614 [2024-12-13 23:56:50.214027] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:19.614 [2024-12-13 23:56:50.214032] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:19.614 [2024-12-13 23:56:50.214037] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:19.614 [2024-12-13 23:56:50.214042] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:19.614 [2024-12-13 23:56:50.214047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:19.614 [2024-12-13 23:56:50.214052] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:19.614 [2024-12-13 23:56:50.214057] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:19.614 [2024-12-13 23:56:50.214061] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:19.614 [2024-12-13 23:56:50.214066] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:19.614 [2024-12-13 23:56:50.214072] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:19.614 [2024-12-13 23:56:50.214077] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:19.614 [2024-12-13 23:56:50.214082] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:19.614 [2024-12-13 23:56:50.214087] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:19.614 [2024-12-13 23:56:50.214092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:19.614 [2024-12-13 23:56:50.214097] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:19.614 [2024-12-13 23:56:50.214102] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:19.614 [2024-12-13 23:56:50.214107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:19.614 [2024-12-13 23:56:50.214112] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:19.614 [2024-12-13 23:56:50.214118] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:19.614 [2024-12-13 23:56:50.214125] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:19.614 [2024-12-13 23:56:50.214131] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.614 [2024-12-13 23:56:50.214138] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:19.614 [2024-12-13 23:56:50.214143] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:19.614 [2024-12-13 23:56:50.214148] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:19.614 [2024-12-13 23:56:50.214154] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:19.614 [2024-12-13 23:56:50.214158] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:19.614 [2024-12-13 23:56:50.214163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:19.614 [2024-12-13 23:56:50.214169] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:19.614 [2024-12-13 23:56:50.214175] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:19.614 [2024-12-13 23:56:50.214182] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:19.614 [2024-12-13 23:56:50.214187] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:19.614 [2024-12-13 23:56:50.214192] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:19.614 [2024-12-13 23:56:50.214198] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:19.614 [2024-12-13 23:56:50.214203] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:19.614 [2024-12-13 23:56:50.214208] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:19.614 [2024-12-13 23:56:50.214214] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:19.614 [2024-12-13 23:56:50.214219] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:19.614 [2024-12-13 23:56:50.214224] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:19.614 [2024-12-13 
23:56:50.214230] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:19.614 [2024-12-13 23:56:50.214235] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:19.614 [2024-12-13 23:56:50.214242] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:19.614 [2024-12-13 23:56:50.214248] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:19.614 [2024-12-13 23:56:50.214253] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:19.614 [2024-12-13 23:56:50.214259] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:19.614 [2024-12-13 23:56:50.214268] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:19.614 [2024-12-13 23:56:50.214274] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:19.614 [2024-12-13 23:56:50.214279] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:19.614 [2024-12-13 23:56:50.214284] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:19.614 [2024-12-13 23:56:50.214290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.614 [2024-12-13 23:56:50.214296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:19.614 [2024-12-13 23:56:50.214302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:23:19.614 [2024-12-13 23:56:50.214311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.614 [2024-12-13 23:56:50.228272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.614 [2024-12-13 23:56:50.228300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:19.614 [2024-12-13 23:56:50.228310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.924 ms 00:23:19.614 [2024-12-13 23:56:50.228316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.614 [2024-12-13 23:56:50.228384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.614 [2024-12-13 23:56:50.228391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:19.614 [2024-12-13 23:56:50.228398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:19.614 [2024-12-13 23:56:50.228405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.614 [2024-12-13 23:56:50.266263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.614 [2024-12-13 23:56:50.266297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:19.614 [2024-12-13 23:56:50.266307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.823 ms 00:23:19.614 [2024-12-13 23:56:50.266315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.614 [2024-12-13 
23:56:50.266350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.614 [2024-12-13 23:56:50.266359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:19.614 [2024-12-13 23:56:50.266367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:19.614 [2024-12-13 23:56:50.266376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.614 [2024-12-13 23:56:50.266815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.614 [2024-12-13 23:56:50.266830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:19.614 [2024-12-13 23:56:50.266838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:23:19.614 [2024-12-13 23:56:50.266844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.614 [2024-12-13 23:56:50.266941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.614 [2024-12-13 23:56:50.266954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:19.614 [2024-12-13 23:56:50.266962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:23:19.614 [2024-12-13 23:56:50.266968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.614 [2024-12-13 23:56:50.279661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.614 [2024-12-13 23:56:50.279686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:19.614 [2024-12-13 23:56:50.279694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.676 ms 00:23:19.614 [2024-12-13 23:56:50.279701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.614 [2024-12-13 23:56:50.290683] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:19.614 [2024-12-13 23:56:50.290715] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:19.614 [2024-12-13 23:56:50.290725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.614 [2024-12-13 23:56:50.290733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:19.614 [2024-12-13 23:56:50.290740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.948 ms 00:23:19.614 [2024-12-13 23:56:50.290746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.614 [2024-12-13 23:56:50.309754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.614 [2024-12-13 23:56:50.309781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:19.614 [2024-12-13 23:56:50.309795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.974 ms 00:23:19.614 [2024-12-13 23:56:50.309801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.614 [2024-12-13 23:56:50.319580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.614 [2024-12-13 23:56:50.319605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:19.615 [2024-12-13 23:56:50.319613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.743 ms 00:23:19.615 [2024-12-13 23:56:50.319627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.615 [2024-12-13 23:56:50.328785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:23:19.615 [2024-12-13 23:56:50.328940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:19.615 [2024-12-13 23:56:50.328953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.131 ms 00:23:19.615 [2024-12-13 23:56:50.328960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.615 [2024-12-13 23:56:50.329230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.615 [2024-12-13 23:56:50.329240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:19.615 [2024-12-13 23:56:50.329247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:23:19.615 [2024-12-13 23:56:50.329253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.877 [2024-12-13 23:56:50.378673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.877 [2024-12-13 23:56:50.378761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:19.877 [2024-12-13 23:56:50.378771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.406 ms 00:23:19.877 [2024-12-13 23:56:50.378778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.877 [2024-12-13 23:56:50.386995] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:19.877 [2024-12-13 23:56:50.389275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.877 [2024-12-13 23:56:50.389300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:19.877 [2024-12-13 23:56:50.389309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.460 ms 00:23:19.877 [2024-12-13 23:56:50.389315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.877 [2024-12-13 23:56:50.389362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.877 [2024-12-13 23:56:50.389370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:19.877 [2024-12-13 23:56:50.389377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:19.877 [2024-12-13 23:56:50.389384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.877 [2024-12-13 23:56:50.389437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.877 [2024-12-13 23:56:50.389445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:19.877 [2024-12-13 23:56:50.389452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:19.877 [2024-12-13 23:56:50.389458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.877 [2024-12-13 23:56:50.390549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.877 [2024-12-13 23:56:50.390573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:19.877 [2024-12-13 23:56:50.390580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.077 ms 00:23:19.877 [2024-12-13 23:56:50.390590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.877 [2024-12-13 23:56:50.390613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.877 [2024-12-13 23:56:50.390619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:19.877 [2024-12-13 23:56:50.390629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 
00:23:19.877 [2024-12-13 23:56:50.390634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.877 [2024-12-13 23:56:50.390664] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:19.877 [2024-12-13 23:56:50.390672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.877 [2024-12-13 23:56:50.390678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:19.877 [2024-12-13 23:56:50.390685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:19.877 [2024-12-13 23:56:50.390691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.877 [2024-12-13 23:56:50.409817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.877 [2024-12-13 23:56:50.409854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:19.877 [2024-12-13 23:56:50.409865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.109 ms 00:23:19.877 [2024-12-13 23:56:50.409872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.877 [2024-12-13 23:56:50.409931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.877 [2024-12-13 23:56:50.409939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:19.877 [2024-12-13 23:56:50.409946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:19.877 [2024-12-13 23:56:50.409953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.877 [2024-12-13 23:56:50.410866] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 219.685 ms, result 0 00:23:20.822  [2024-12-13T23:56:52.499Z] Copying: 15/1024 [MB] (15 MBps) [2024-12-13T23:56:53.443Z] Copying: 26/1024 [MB] (11 MBps) [2024-12-13T23:56:54.831Z] Copying: 38/1024 [MB] (11 MBps) [2024-12-13T23:56:55.773Z] Copying: 49/1024 [MB] (11 MBps) [2024-12-13T23:56:56.717Z] Copying: 60/1024 [MB] (11 MBps) [2024-12-13T23:56:57.661Z] Copying: 70/1024 [MB] (10 MBps) [2024-12-13T23:56:58.604Z] Copying: 81/1024 [MB] (10 MBps) [2024-12-13T23:56:59.552Z] Copying: 92/1024 [MB] (11 MBps) [2024-12-13T23:57:00.551Z] Copying: 104/1024 [MB] (11 MBps) [2024-12-13T23:57:01.495Z] Copying: 115/1024 [MB] (11 MBps) [2024-12-13T23:57:02.438Z] Copying: 125/1024 [MB] (10 MBps) [2024-12-13T23:57:03.823Z] Copying: 136/1024 [MB] (10 MBps) [2024-12-13T23:57:04.767Z] Copying: 147/1024 [MB] (10 MBps) [2024-12-13T23:57:05.710Z] Copying: 158/1024 [MB] (11 MBps) [2024-12-13T23:57:06.653Z] Copying: 169/1024 [MB] (11 MBps) [2024-12-13T23:57:07.598Z] Copying: 180/1024 [MB] (11 MBps) [2024-12-13T23:57:08.542Z] Copying: 191/1024 [MB] (11 MBps) [2024-12-13T23:57:09.484Z] Copying: 202/1024 [MB] (10 MBps) [2024-12-13T23:57:10.426Z] Copying: 213/1024 [MB] (11 MBps) [2024-12-13T23:57:11.812Z] Copying: 224/1024 [MB] (11 MBps) [2024-12-13T23:57:12.754Z] Copying: 235/1024 [MB] (11 MBps) [2024-12-13T23:57:13.697Z] Copying: 246/1024 [MB] (11 MBps) [2024-12-13T23:57:14.639Z] Copying: 257/1024 [MB] (11 MBps) [2024-12-13T23:57:15.582Z] Copying: 269/1024 [MB] (11 MBps) [2024-12-13T23:57:16.526Z] Copying: 280/1024 [MB] (11 MBps) [2024-12-13T23:57:17.471Z] Copying: 291/1024 [MB] (11 MBps) [2024-12-13T23:57:18.859Z] Copying: 302/1024 [MB] (11 MBps) [2024-12-13T23:57:19.434Z] Copying: 313/1024 [MB] (11 MBps) [2024-12-13T23:57:20.823Z] Copying: 324/1024 [MB] (10 MBps) [2024-12-13T23:57:21.768Z] 
Copying: 335/1024 [MB] (10 MBps) [2024-12-13T23:57:22.714Z] Copying: 353016/1048576 [kB] (9736 kBps) [2024-12-13T23:57:23.663Z] Copying: 354/1024 [MB] (10 MBps) [2024-12-13T23:57:24.607Z] Copying: 364/1024 [MB] (10 MBps) [2024-12-13T23:57:25.594Z] Copying: 375/1024 [MB] (10 MBps) [2024-12-13T23:57:26.554Z] Copying: 389/1024 [MB] (14 MBps) [2024-12-13T23:57:27.487Z] Copying: 442/1024 [MB] (53 MBps) [2024-12-13T23:57:28.860Z] Copying: 495/1024 [MB] (52 MBps) [2024-12-13T23:57:29.793Z] Copying: 548/1024 [MB] (53 MBps) [2024-12-13T23:57:30.726Z] Copying: 603/1024 [MB] (54 MBps) [2024-12-13T23:57:31.656Z] Copying: 657/1024 [MB] (54 MBps) [2024-12-13T23:57:32.598Z] Copying: 696/1024 [MB] (38 MBps) [2024-12-13T23:57:33.539Z] Copying: 723/1024 [MB] (26 MBps) [2024-12-13T23:57:34.484Z] Copying: 750720/1048576 [kB] (10064 kBps) [2024-12-13T23:57:35.430Z] Copying: 760904/1048576 [kB] (10184 kBps) [2024-12-13T23:57:36.818Z] Copying: 753/1024 [MB] (10 MBps) [2024-12-13T23:57:37.764Z] Copying: 763/1024 [MB] (10 MBps) [2024-12-13T23:57:38.706Z] Copying: 776/1024 [MB] (13 MBps) [2024-12-13T23:57:39.648Z] Copying: 793/1024 [MB] (16 MBps) [2024-12-13T23:57:40.593Z] Copying: 815/1024 [MB] (22 MBps) [2024-12-13T23:57:41.538Z] Copying: 831/1024 [MB] (16 MBps) [2024-12-13T23:57:42.482Z] Copying: 845/1024 [MB] (14 MBps) [2024-12-13T23:57:43.869Z] Copying: 863/1024 [MB] (17 MBps) [2024-12-13T23:57:44.442Z] Copying: 878/1024 [MB] (14 MBps) [2024-12-13T23:57:45.826Z] Copying: 891/1024 [MB] (13 MBps) [2024-12-13T23:57:46.769Z] Copying: 908/1024 [MB] (16 MBps) [2024-12-13T23:57:47.710Z] Copying: 921/1024 [MB] (13 MBps) [2024-12-13T23:57:48.649Z] Copying: 936/1024 [MB] (15 MBps) [2024-12-13T23:57:49.587Z] Copying: 947/1024 [MB] (10 MBps) [2024-12-13T23:57:50.520Z] Copying: 980344/1048576 [kB] (9968 kBps) [2024-12-13T23:57:51.485Z] Copying: 1007/1024 [MB] (49 MBps) [2024-12-13T23:57:51.772Z] Copying: 1023/1024 [MB] (16 MBps) [2024-12-13T23:57:51.772Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-13 23:57:51.701215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.040 [2024-12-13 23:57:51.701295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:21.040 [2024-12-13 23:57:51.701312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:21.040 [2024-12-13 23:57:51.701322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.040 [2024-12-13 23:57:51.701350] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:21.040 [2024-12-13 23:57:51.704289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.040 [2024-12-13 23:57:51.704333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:21.040 [2024-12-13 23:57:51.704352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.921 ms 00:24:21.040 [2024-12-13 23:57:51.704360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.040 [2024-12-13 23:57:51.716103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.040 [2024-12-13 23:57:51.716151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:21.040 [2024-12-13 23:57:51.716163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.317 ms 00:24:21.040 [2024-12-13 23:57:51.716172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.040 [2024-12-13 23:57:51.741833] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.040 [2024-12-13 23:57:51.741882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:21.040 [2024-12-13 23:57:51.741896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.643 ms 00:24:21.040 [2024-12-13 23:57:51.741905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.040 [2024-12-13 23:57:51.748081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.040 [2024-12-13 23:57:51.748122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:21.040 [2024-12-13 23:57:51.748134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.126 ms 00:24:21.040 [2024-12-13 23:57:51.748142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.301 [2024-12-13 23:57:51.775340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.301 [2024-12-13 23:57:51.775566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:21.301 [2024-12-13 23:57:51.775588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.136 ms 00:24:21.301 [2024-12-13 23:57:51.775598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.301 [2024-12-13 23:57:51.791493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.301 [2024-12-13 23:57:51.791538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:21.301 [2024-12-13 23:57:51.791551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.831 ms 00:24:21.301 [2024-12-13 23:57:51.791559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.563 [2024-12-13 23:57:52.081642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.563 [2024-12-13 23:57:52.081697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:21.563 [2024-12-13 23:57:52.081710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 290.032 ms 00:24:21.563 [2024-12-13 23:57:52.081719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.563 [2024-12-13 23:57:52.107689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.563 [2024-12-13 23:57:52.107733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:21.563 [2024-12-13 23:57:52.107745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.950 ms 00:24:21.563 [2024-12-13 23:57:52.107753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.563 [2024-12-13 23:57:52.133229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.563 [2024-12-13 23:57:52.133272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:21.563 [2024-12-13 23:57:52.133283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.432 ms 00:24:21.563 [2024-12-13 23:57:52.133291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.563 [2024-12-13 23:57:52.158207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.563 [2024-12-13 23:57:52.158250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:21.563 [2024-12-13 23:57:52.158262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.872 ms 00:24:21.563 [2024-12-13 23:57:52.158270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:21.563 [2024-12-13 23:57:52.183273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.563 [2024-12-13 23:57:52.183315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:21.563 [2024-12-13 23:57:52.183326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.920 ms 00:24:21.563 [2024-12-13 23:57:52.183334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.563 [2024-12-13 23:57:52.183375] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:21.563 [2024-12-13 23:57:52.183390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 102144 / 261120 wr_cnt: 1 state: open 00:24:21.563 [2024-12-13 23:57:52.183402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 
wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:21.563 [2024-12-13 23:57:52.183717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.183993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184019] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:21.564 [2024-12-13 23:57:52.184225] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:24:21.564 [2024-12-13 23:57:52.184233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:24:21.564 [2024-12-13 23:57:52.184242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:24:21.564 [2024-12-13 23:57:52.184250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:24:21.564 [2024-12-13 23:57:52.184258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:24:21.564 [2024-12-13 23:57:52.184274] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:24:21.564 [2024-12-13 23:57:52.184285] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5f2f7aca-291e-475a-a6a4-424021edf1a7
00:24:21.564 [2024-12-13 23:57:52.184295] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 102144
00:24:21.564 [2024-12-13 23:57:52.184303] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 103104
00:24:21.564 [2024-12-13 23:57:52.184310] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 102144
00:24:21.564 [2024-12-13 23:57:52.184326] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0094
00:24:21.564 [2024-12-13 23:57:52.184334] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:21.564 [2024-12-13 23:57:52.184342] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:24:21.564 [2024-12-13 23:57:52.184351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:24:21.564 [2024-12-13 23:57:52.184358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:24:21.564 [2024-12-13 23:57:52.184365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:24:21.564 [2024-12-13 23:57:52.184372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:21.564 [2024-12-13 23:57:52.184380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:24:21.564 [2024-12-13 23:57:52.184389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms
00:24:21.564 [2024-12-13 23:57:52.184396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:21.564 [2024-12-13 23:57:52.197735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:21.564 [2024-12-13 23:57:52.197920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:24:21.564 [2024-12-13 23:57:52.197939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.303 ms
00:24:21.564 [2024-12-13 23:57:52.197948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:21.564 [2024-12-13 23:57:52.198176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:21.564 [2024-12-13 23:57:52.198187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:24:21.564 [2024-12-13 23:57:52.198202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms
00:24:21.564 [2024-12-13 23:57:52.198210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:21.564 [2024-12-13 23:57:52.236960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:21.564 [2024-12-13 23:57:52.237137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:24:21.564 [2024-12-13
23:57:52.237157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.564 [2024-12-13 23:57:52.237167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.564 [2024-12-13 23:57:52.237236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.564 [2024-12-13 23:57:52.237246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:21.564 [2024-12-13 23:57:52.237260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.564 [2024-12-13 23:57:52.237269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.564 [2024-12-13 23:57:52.237352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.564 [2024-12-13 23:57:52.237365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:21.564 [2024-12-13 23:57:52.237375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.565 [2024-12-13 23:57:52.237383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.565 [2024-12-13 23:57:52.237399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.565 [2024-12-13 23:57:52.237407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:21.565 [2024-12-13 23:57:52.237415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.565 [2024-12-13 23:57:52.237429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.826 [2024-12-13 23:57:52.318051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.826 [2024-12-13 23:57:52.318100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:21.826 [2024-12-13 23:57:52.318111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.826 [2024-12-13 23:57:52.318120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.826 [2024-12-13 23:57:52.350257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.826 [2024-12-13 23:57:52.350305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:21.826 [2024-12-13 23:57:52.350323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.826 [2024-12-13 23:57:52.350331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.826 [2024-12-13 23:57:52.350398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.826 [2024-12-13 23:57:52.350408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:21.826 [2024-12-13 23:57:52.350417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.826 [2024-12-13 23:57:52.350426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.826 [2024-12-13 23:57:52.350468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.826 [2024-12-13 23:57:52.350514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:21.826 [2024-12-13 23:57:52.350524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.826 [2024-12-13 23:57:52.350532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.826 [2024-12-13 23:57:52.350640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.826 [2024-12-13 23:57:52.350653] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:21.826 [2024-12-13 23:57:52.350662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.826 [2024-12-13 23:57:52.350670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.826 [2024-12-13 23:57:52.350700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.826 [2024-12-13 23:57:52.350710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:21.826 [2024-12-13 23:57:52.350718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.826 [2024-12-13 23:57:52.350726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.826 [2024-12-13 23:57:52.350769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.826 [2024-12-13 23:57:52.350781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:21.826 [2024-12-13 23:57:52.350789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.826 [2024-12-13 23:57:52.350797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.826 [2024-12-13 23:57:52.350841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.826 [2024-12-13 23:57:52.350851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:21.826 [2024-12-13 23:57:52.350860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.826 [2024-12-13 23:57:52.350870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.826 [2024-12-13 23:57:52.351005] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 649.761 ms, result 0 00:24:23.212 00:24:23.212 00:24:23.212 23:57:53 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:25.757 23:57:56 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:25.757 [2024-12-13 23:57:56.113000] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
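The WAF printed in the shutdown dump above is simply total writes divided by user writes: 103104 / 102144 ≈ 1.0094, which is the value ftl_dev_dump_stats logged. A minimal sketch of rechecking that from a captured console log like this one — assuming only the NOTICE line format visible above; recompute_waf and FIELD are hypothetical names for illustration, not SPDK tooling:

import re

# One counter per ftl_dev_dump_stats NOTICE line, e.g.
#   "... ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 103104"
# (pattern inferred from the dump above, not a documented format)
FIELD = re.compile(r"ftl_dev_dump_stats: \*NOTICE\*: \[FTL\]\[(\w+)\] ([A-Za-z ]+): (\d+(?:\.\d+)?)")

def recompute_waf(log_lines):
    """Collect per-device counters, then WAF = total writes / user writes."""
    counters = {}
    for line in log_lines:
        m = FIELD.search(line)
        if m:
            dev, key, value = m.group(1), m.group(2).strip(), float(m.group(3))
            counters.setdefault(dev, {})[key] = value
    return {dev: c["total writes"] / c["user writes"]
            for dev, c in counters.items() if c.get("user writes")}

# For the dump above: 103104 / 102144 = 1.00939..., logged as "WAF: 1.0094".

The block counts in the spdk_dd invocation above are consistent in the same way: --count=262144 FTL blocks at the device's 4 KiB block size (the base-device data region in the layout dump further down is 0x1900000 blocks for 102400.00 MiB, i.e. 4 KiB per block) is exactly the 1048576 kB / 1024 MB total that the Copying progress lines later in the run count up to.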
00:24:25.757 [2024-12-13 23:57:56.113388] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77465 ] 00:24:25.757 [2024-12-13 23:57:56.269242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.757 [2024-12-13 23:57:56.457861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.018 [2024-12-13 23:57:56.747540] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:26.280 [2024-12-13 23:57:56.747858] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:26.280 [2024-12-13 23:57:56.903713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.280 [2024-12-13 23:57:56.903767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:26.280 [2024-12-13 23:57:56.903782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:26.280 [2024-12-13 23:57:56.903793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.280 [2024-12-13 23:57:56.903847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.280 [2024-12-13 23:57:56.903859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:26.280 [2024-12-13 23:57:56.903868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:26.280 [2024-12-13 23:57:56.903876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.280 [2024-12-13 23:57:56.903896] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:26.280 [2024-12-13 23:57:56.904685] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:26.280 [2024-12-13 23:57:56.904706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.280 [2024-12-13 23:57:56.904715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:26.280 [2024-12-13 23:57:56.904726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:24:26.280 [2024-12-13 23:57:56.904733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.280 [2024-12-13 23:57:56.906370] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:26.280 [2024-12-13 23:57:56.921244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.280 [2024-12-13 23:57:56.921291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:26.280 [2024-12-13 23:57:56.921305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.876 ms 00:24:26.280 [2024-12-13 23:57:56.921313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.280 [2024-12-13 23:57:56.921387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.280 [2024-12-13 23:57:56.921398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:26.280 [2024-12-13 23:57:56.921408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:26.280 [2024-12-13 23:57:56.921415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.280 [2024-12-13 23:57:56.929397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.280 [2024-12-13 
23:57:56.929608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:26.281 [2024-12-13 23:57:56.929627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.875 ms 00:24:26.281 [2024-12-13 23:57:56.929636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.281 [2024-12-13 23:57:56.929732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.281 [2024-12-13 23:57:56.929742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:26.281 [2024-12-13 23:57:56.929753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:24:26.281 [2024-12-13 23:57:56.929762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.281 [2024-12-13 23:57:56.929806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.281 [2024-12-13 23:57:56.929816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:26.281 [2024-12-13 23:57:56.929825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:26.281 [2024-12-13 23:57:56.929832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.281 [2024-12-13 23:57:56.929864] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:26.281 [2024-12-13 23:57:56.933957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.281 [2024-12-13 23:57:56.933993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:26.281 [2024-12-13 23:57:56.934003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.106 ms 00:24:26.281 [2024-12-13 23:57:56.934010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.281 [2024-12-13 23:57:56.934047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.281 [2024-12-13 23:57:56.934055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:26.281 [2024-12-13 23:57:56.934063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:26.281 [2024-12-13 23:57:56.934073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.281 [2024-12-13 23:57:56.934123] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:26.281 [2024-12-13 23:57:56.934145] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:26.281 [2024-12-13 23:57:56.934180] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:26.281 [2024-12-13 23:57:56.934196] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:26.281 [2024-12-13 23:57:56.934270] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:26.281 [2024-12-13 23:57:56.934280] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:26.281 [2024-12-13 23:57:56.934293] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:26.281 [2024-12-13 23:57:56.934304] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:26.281 [2024-12-13 23:57:56.934312] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:26.281 [2024-12-13 23:57:56.934321] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:26.281 [2024-12-13 23:57:56.934329] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:26.281 [2024-12-13 23:57:56.934337] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:26.281 [2024-12-13 23:57:56.934345] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:26.281 [2024-12-13 23:57:56.934353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.281 [2024-12-13 23:57:56.934362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:26.281 [2024-12-13 23:57:56.934370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:24:26.281 [2024-12-13 23:57:56.934377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.281 [2024-12-13 23:57:56.934439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.281 [2024-12-13 23:57:56.934448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:26.281 [2024-12-13 23:57:56.934455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:26.281 [2024-12-13 23:57:56.934462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.281 [2024-12-13 23:57:56.934552] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:26.281 [2024-12-13 23:57:56.934564] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:26.281 [2024-12-13 23:57:56.934573] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:26.281 [2024-12-13 23:57:56.934582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.281 [2024-12-13 23:57:56.934590] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:26.281 [2024-12-13 23:57:56.934596] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:26.281 [2024-12-13 23:57:56.934603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:26.281 [2024-12-13 23:57:56.934612] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:26.281 [2024-12-13 23:57:56.934619] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:26.281 [2024-12-13 23:57:56.934627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:26.281 [2024-12-13 23:57:56.934633] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:26.281 [2024-12-13 23:57:56.934641] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:26.281 [2024-12-13 23:57:56.934647] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:26.281 [2024-12-13 23:57:56.934656] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:26.281 [2024-12-13 23:57:56.934663] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:26.281 [2024-12-13 23:57:56.934670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.281 [2024-12-13 23:57:56.934684] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:26.281 [2024-12-13 23:57:56.934691] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:26.281 [2024-12-13 23:57:56.934698] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:24:26.281 [2024-12-13 23:57:56.934704] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:26.281 [2024-12-13 23:57:56.934711] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:26.281 [2024-12-13 23:57:56.934719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:26.281 [2024-12-13 23:57:56.934726] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:26.281 [2024-12-13 23:57:56.934733] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:26.281 [2024-12-13 23:57:56.934740] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:26.281 [2024-12-13 23:57:56.934747] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:26.281 [2024-12-13 23:57:56.934754] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:26.281 [2024-12-13 23:57:56.934761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:26.281 [2024-12-13 23:57:56.934768] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:26.281 [2024-12-13 23:57:56.934775] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:26.281 [2024-12-13 23:57:56.934782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:26.281 [2024-12-13 23:57:56.934789] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:26.281 [2024-12-13 23:57:56.934795] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:26.281 [2024-12-13 23:57:56.934802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:26.281 [2024-12-13 23:57:56.934810] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:26.281 [2024-12-13 23:57:56.934817] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:26.281 [2024-12-13 23:57:56.934823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:26.281 [2024-12-13 23:57:56.934830] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:26.281 [2024-12-13 23:57:56.934838] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:26.281 [2024-12-13 23:57:56.934845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:26.281 [2024-12-13 23:57:56.934851] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:26.281 [2024-12-13 23:57:56.934869] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:26.281 [2024-12-13 23:57:56.934878] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:26.281 [2024-12-13 23:57:56.934884] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.281 [2024-12-13 23:57:56.934892] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:26.281 [2024-12-13 23:57:56.934902] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:26.281 [2024-12-13 23:57:56.934909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:26.281 [2024-12-13 23:57:56.934916] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:26.281 [2024-12-13 23:57:56.934922] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:26.281 [2024-12-13 23:57:56.934928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:26.281 [2024-12-13 23:57:56.934937] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:26.281 [2024-12-13 23:57:56.934946] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:26.281 [2024-12-13 23:57:56.934955] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:26.281 [2024-12-13 23:57:56.934962] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:26.281 [2024-12-13 23:57:56.934971] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:26.281 [2024-12-13 23:57:56.934978] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:26.281 [2024-12-13 23:57:56.934986] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:26.281 [2024-12-13 23:57:56.934993] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:26.281 [2024-12-13 23:57:56.935001] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:26.281 [2024-12-13 23:57:56.935008] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:26.281 [2024-12-13 23:57:56.935015] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:26.281 [2024-12-13 23:57:56.935022] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:24:26.281 [2024-12-13 23:57:56.935029] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:26.282 [2024-12-13 23:57:56.935037] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:26.282 [2024-12-13 23:57:56.935045] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:26.282 [2024-12-13 23:57:56.935051] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:26.282 [2024-12-13 23:57:56.935059] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:26.282 [2024-12-13 23:57:56.935068] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:26.282 [2024-12-13 23:57:56.935075] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:26.282 [2024-12-13 23:57:56.935082] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:26.282 [2024-12-13 23:57:56.935088] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:24:26.282 [2024-12-13 23:57:56.935096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.282 [2024-12-13 23:57:56.935104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:26.282 [2024-12-13 23:57:56.935111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:24:26.282 [2024-12-13 23:57:56.935118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.282 [2024-12-13 23:57:56.953049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.282 [2024-12-13 23:57:56.953226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:26.282 [2024-12-13 23:57:56.953246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.889 ms 00:24:26.282 [2024-12-13 23:57:56.953261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.282 [2024-12-13 23:57:56.953354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.282 [2024-12-13 23:57:56.953363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:26.282 [2024-12-13 23:57:56.953371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:26.282 [2024-12-13 23:57:56.953379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.282 [2024-12-13 23:57:56.999693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.282 [2024-12-13 23:57:56.999880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:26.282 [2024-12-13 23:57:56.999901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.261 ms 00:24:26.282 [2024-12-13 23:57:56.999910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.282 [2024-12-13 23:57:56.999969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.282 [2024-12-13 23:57:56.999981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:26.282 [2024-12-13 23:57:56.999990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:26.282 [2024-12-13 23:57:56.999997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.282 [2024-12-13 23:57:57.000574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.282 [2024-12-13 23:57:57.000595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:26.282 [2024-12-13 23:57:57.000605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:24:26.282 [2024-12-13 23:57:57.000619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.282 [2024-12-13 23:57:57.000747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.282 [2024-12-13 23:57:57.000757] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:26.282 [2024-12-13 23:57:57.000766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:24:26.282 [2024-12-13 23:57:57.000774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.544 [2024-12-13 23:57:57.017241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.544 [2024-12-13 23:57:57.017286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:26.544 [2024-12-13 23:57:57.017298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.444 ms 00:24:26.544 [2024-12-13 
23:57:57.017306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.544 [2024-12-13 23:57:57.031879] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:26.544 [2024-12-13 23:57:57.031922] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:26.544 [2024-12-13 23:57:57.031945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.544 [2024-12-13 23:57:57.031954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:26.544 [2024-12-13 23:57:57.031964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.532 ms 00:24:26.544 [2024-12-13 23:57:57.031972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.544 [2024-12-13 23:57:57.057975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.544 [2024-12-13 23:57:57.058020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:26.544 [2024-12-13 23:57:57.058033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.951 ms 00:24:26.544 [2024-12-13 23:57:57.058041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.544 [2024-12-13 23:57:57.070798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.544 [2024-12-13 23:57:57.070840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:26.544 [2024-12-13 23:57:57.070850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.709 ms 00:24:26.544 [2024-12-13 23:57:57.070857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.544 [2024-12-13 23:57:57.083102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.544 [2024-12-13 23:57:57.083143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:26.544 [2024-12-13 23:57:57.083165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.200 ms 00:24:26.544 [2024-12-13 23:57:57.083172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.544 [2024-12-13 23:57:57.083582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.544 [2024-12-13 23:57:57.083596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:26.544 [2024-12-13 23:57:57.083606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:24:26.544 [2024-12-13 23:57:57.083614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.544 [2024-12-13 23:57:57.148242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.544 [2024-12-13 23:57:57.148296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:26.544 [2024-12-13 23:57:57.148312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.609 ms 00:24:26.544 [2024-12-13 23:57:57.148321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.544 [2024-12-13 23:57:57.159619] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:26.544 [2024-12-13 23:57:57.162383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.544 [2024-12-13 23:57:57.162576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:26.544 [2024-12-13 23:57:57.162596] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.002 ms 00:24:26.544 [2024-12-13 23:57:57.162612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.544 [2024-12-13 23:57:57.162682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.544 [2024-12-13 23:57:57.162693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:26.544 [2024-12-13 23:57:57.162702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:26.544 [2024-12-13 23:57:57.162711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.544 [2024-12-13 23:57:57.164087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.544 [2024-12-13 23:57:57.164133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:26.544 [2024-12-13 23:57:57.164144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.338 ms 00:24:26.544 [2024-12-13 23:57:57.164152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.544 [2024-12-13 23:57:57.165574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.544 [2024-12-13 23:57:57.165610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:26.544 [2024-12-13 23:57:57.165620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.391 ms 00:24:26.544 [2024-12-13 23:57:57.165628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.544 [2024-12-13 23:57:57.165664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.544 [2024-12-13 23:57:57.165672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:26.544 [2024-12-13 23:57:57.165687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:26.544 [2024-12-13 23:57:57.165694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.544 [2024-12-13 23:57:57.165730] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:26.544 [2024-12-13 23:57:57.165741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.544 [2024-12-13 23:57:57.165751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:26.544 [2024-12-13 23:57:57.165760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:26.544 [2024-12-13 23:57:57.165768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.544 [2024-12-13 23:57:57.191675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.544 [2024-12-13 23:57:57.191719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:26.544 [2024-12-13 23:57:57.191732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.886 ms 00:24:26.544 [2024-12-13 23:57:57.191741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.544 [2024-12-13 23:57:57.191827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.544 [2024-12-13 23:57:57.191837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:26.544 [2024-12-13 23:57:57.191846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:26.544 [2024-12-13 23:57:57.191854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.544 [2024-12-13 23:57:57.197937] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 292.926 ms, result 0
00:24:27.930 [2024-12-13T23:57:59.606Z] Copying: 1088/1048576 [kB] (1088 kBps)
[2024-12-13T23:58:00.549Z] Copying: 4232/1048576 [kB] (3144 kBps)
[2024-12-13T23:58:01.494Z] Copying: 14248/1048576 [kB] (10016 kBps)
[2024-12-13T23:58:02.434Z] Copying: 29/1024 [MB] (15 MBps)
[2024-12-13T23:58:03.394Z] Copying: 60/1024 [MB] (30 MBps)
[2024-12-13T23:58:04.775Z] Copying: 87/1024 [MB] (27 MBps)
[2024-12-13T23:58:05.717Z] Copying: 114/1024 [MB] (27 MBps)
[2024-12-13T23:58:06.657Z] Copying: 131/1024 [MB] (16 MBps)
[2024-12-13T23:58:07.602Z] Copying: 164/1024 [MB] (32 MBps)
[2024-12-13T23:58:08.545Z] Copying: 187/1024 [MB] (22 MBps)
[2024-12-13T23:58:09.488Z] Copying: 206/1024 [MB] (19 MBps)
[2024-12-13T23:58:10.433Z] Copying: 224/1024 [MB] (18 MBps)
[2024-12-13T23:58:11.377Z] Copying: 253/1024 [MB] (28 MBps)
[2024-12-13T23:58:12.766Z] Copying: 284/1024 [MB] (30 MBps)
[2024-12-13T23:58:13.708Z] Copying: 312/1024 [MB] (28 MBps)
[2024-12-13T23:58:14.649Z] Copying: 329/1024 [MB] (17 MBps)
[2024-12-13T23:58:15.590Z] Copying: 354/1024 [MB] (24 MBps)
[2024-12-13T23:58:16.531Z] Copying: 374/1024 [MB] (19 MBps)
[2024-12-13T23:58:17.519Z] Copying: 401/1024 [MB] (26 MBps)
[2024-12-13T23:58:18.490Z] Copying: 418/1024 [MB] (17 MBps)
[2024-12-13T23:58:19.426Z] Copying: 444/1024 [MB] (25 MBps)
[2024-12-13T23:58:20.808Z] Copying: 473/1024 [MB] (29 MBps)
[2024-12-13T23:58:21.380Z] Copying: 494/1024 [MB] (20 MBps)
[2024-12-13T23:58:22.760Z] Copying: 526/1024 [MB] (31 MBps)
[2024-12-13T23:58:23.702Z] Copying: 545/1024 [MB] (19 MBps)
[2024-12-13T23:58:24.648Z] Copying: 583/1024 [MB] (38 MBps)
[2024-12-13T23:58:25.586Z] Copying: 607/1024 [MB] (23 MBps)
[2024-12-13T23:58:26.520Z] Copying: 631/1024 [MB] (24 MBps)
[2024-12-13T23:58:27.464Z] Copying: 660/1024 [MB] (28 MBps)
[2024-12-13T23:58:28.404Z] Copying: 675/1024 [MB] (15 MBps)
[2024-12-13T23:58:29.792Z] Copying: 702/1024 [MB] (26 MBps)
[2024-12-13T23:58:30.737Z] Copying: 731/1024 [MB] (28 MBps)
[2024-12-13T23:58:31.679Z] Copying: 747/1024 [MB] (15 MBps)
[2024-12-13T23:58:32.622Z] Copying: 766/1024 [MB] (19 MBps)
[2024-12-13T23:58:33.564Z] Copying: 796/1024 [MB] (29 MBps)
[2024-12-13T23:58:34.508Z] Copying: 819/1024 [MB] (23 MBps)
[2024-12-13T23:58:35.450Z] Copying: 835/1024 [MB] (15 MBps)
[2024-12-13T23:58:36.394Z] Copying: 864/1024 [MB] (29 MBps)
[2024-12-13T23:58:37.774Z] Copying: 882/1024 [MB] (17 MBps)
[2024-12-13T23:58:38.717Z] Copying: 905/1024 [MB] (22 MBps)
[2024-12-13T23:58:39.662Z] Copying: 937/1024 [MB] (32 MBps)
[2024-12-13T23:58:40.605Z] Copying: 965/1024 [MB] (27 MBps)
[2024-12-13T23:58:41.551Z] Copying: 984/1024 [MB] (19 MBps)
[2024-12-13T23:58:42.493Z] Copying: 1000/1024 [MB] (15 MBps)
[2024-12-13T23:58:42.493Z] Copying: 1024/1024 [MB] (average 22 MBps)
[2024-12-13 23:58:42.363564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:11.761 [2024-12-13 23:58:42.363646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:25:11.761 [2024-12-13 23:58:42.363665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:25:11.761 [2024-12-13 23:58:42.363675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.761 [2024-12-13 23:58:42.363705] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:11.761 [2024-12-13 23:58:42.367450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:11.761 [2024-12-13
23:58:42.367885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:11.761 [2024-12-13 23:58:42.367912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.726 ms 00:25:11.761 [2024-12-13 23:58:42.367922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.761 [2024-12-13 23:58:42.368259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.761 [2024-12-13 23:58:42.368281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:11.761 [2024-12-13 23:58:42.368291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:25:11.761 [2024-12-13 23:58:42.368300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.761 [2024-12-13 23:58:42.382089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.761 [2024-12-13 23:58:42.382136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:11.761 [2024-12-13 23:58:42.382151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.770 ms 00:25:11.761 [2024-12-13 23:58:42.382159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.761 [2024-12-13 23:58:42.388313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.761 [2024-12-13 23:58:42.388359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:25:11.761 [2024-12-13 23:58:42.388370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.119 ms 00:25:11.761 [2024-12-13 23:58:42.388378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.761 [2024-12-13 23:58:42.415453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.761 [2024-12-13 23:58:42.415520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:11.761 [2024-12-13 23:58:42.415532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.011 ms 00:25:11.761 [2024-12-13 23:58:42.415541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.761 [2024-12-13 23:58:42.431638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.761 [2024-12-13 23:58:42.431686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:11.761 [2024-12-13 23:58:42.431698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.053 ms 00:25:11.761 [2024-12-13 23:58:42.431706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.761 [2024-12-13 23:58:42.437987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.761 [2024-12-13 23:58:42.438167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:11.761 [2024-12-13 23:58:42.438188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.227 ms 00:25:11.761 [2024-12-13 23:58:42.438202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.761 [2024-12-13 23:58:42.464032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.761 [2024-12-13 23:58:42.464077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:11.761 [2024-12-13 23:58:42.464088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.810 ms 00:25:11.761 [2024-12-13 23:58:42.464095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.761 [2024-12-13 23:58:42.488978] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.761 [2024-12-13 23:58:42.489020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:11.761 [2024-12-13 23:58:42.489032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.841 ms 00:25:11.761 [2024-12-13 23:58:42.489051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.023 [2024-12-13 23:58:42.513818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.023 [2024-12-13 23:58:42.513863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:12.023 [2024-12-13 23:58:42.513875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.725 ms 00:25:12.023 [2024-12-13 23:58:42.513882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.023 [2024-12-13 23:58:42.538099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.023 [2024-12-13 23:58:42.538142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:12.023 [2024-12-13 23:58:42.538153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.135 ms 00:25:12.023 [2024-12-13 23:58:42.538160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.023 [2024-12-13 23:58:42.538201] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:12.023 [2024-12-13 23:58:42.538217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:12.023 [2024-12-13 23:58:42.538227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3072 / 261120 wr_cnt: 1 state: open 00:25:12.023 [2024-12-13 23:58:42.538236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: 
free 00:25:12.023 [2024-12-13 23:58:42.538336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:12.023 [2024-12-13 23:58:42.538475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 
261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538961] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.538991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.539000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.539008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.539015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.539022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.539030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.539038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:12.024 [2024-12-13 23:58:42.539054] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:12.024 [2024-12-13 23:58:42.539063] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5f2f7aca-291e-475a-a6a4-424021edf1a7 00:25:12.024 [2024-12-13 23:58:42.539071] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264192 00:25:12.024 [2024-12-13 23:58:42.539084] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 164032 00:25:12.024 [2024-12-13 23:58:42.539091] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 162048 00:25:12.024 [2024-12-13 23:58:42.539102] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0122 00:25:12.024 [2024-12-13 23:58:42.539109] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:12.024 [2024-12-13 23:58:42.539118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:12.024 [2024-12-13 23:58:42.539125] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:12.024 [2024-12-13 23:58:42.539132] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:12.024 [2024-12-13 23:58:42.539147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:12.024 [2024-12-13 23:58:42.539154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.024 [2024-12-13 23:58:42.539162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:12.024 [2024-12-13 23:58:42.539170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:25:12.024 [2024-12-13 23:58:42.539178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.024 [2024-12-13 23:58:42.552700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.024 [2024-12-13 23:58:42.552743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:12.024 [2024-12-13 23:58:42.552753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 13.490 ms 00:25:12.024 [2024-12-13 23:58:42.552762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.024 [2024-12-13 23:58:42.552989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.024 [2024-12-13 23:58:42.552999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:12.024 [2024-12-13 23:58:42.553008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:25:12.024 [2024-12-13 23:58:42.553020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.024 [2024-12-13 23:58:42.591730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.024 [2024-12-13 23:58:42.591906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:12.024 [2024-12-13 23:58:42.591926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.024 [2024-12-13 23:58:42.591934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.024 [2024-12-13 23:58:42.592004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.024 [2024-12-13 23:58:42.592014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:12.025 [2024-12-13 23:58:42.592022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.025 [2024-12-13 23:58:42.592030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.025 [2024-12-13 23:58:42.592110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.025 [2024-12-13 23:58:42.592121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:12.025 [2024-12-13 23:58:42.592129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.025 [2024-12-13 23:58:42.592137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.025 [2024-12-13 23:58:42.592153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.025 [2024-12-13 23:58:42.592161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:12.025 [2024-12-13 23:58:42.592169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.025 [2024-12-13 23:58:42.592177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.025 [2024-12-13 23:58:42.674065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.025 [2024-12-13 23:58:42.674119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:12.025 [2024-12-13 23:58:42.674132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.025 [2024-12-13 23:58:42.674142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.025 [2024-12-13 23:58:42.706658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.025 [2024-12-13 23:58:42.706703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:12.025 [2024-12-13 23:58:42.706714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.025 [2024-12-13 23:58:42.706722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.025 [2024-12-13 23:58:42.706793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.025 [2024-12-13 23:58:42.706803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 
00:25:12.025 [2024-12-13 23:58:42.706812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.025 [2024-12-13 23:58:42.706821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.025 [2024-12-13 23:58:42.706862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.025 [2024-12-13 23:58:42.706872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:12.025 [2024-12-13 23:58:42.706881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.025 [2024-12-13 23:58:42.706889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.025 [2024-12-13 23:58:42.706985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.025 [2024-12-13 23:58:42.707000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:12.025 [2024-12-13 23:58:42.707008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.025 [2024-12-13 23:58:42.707016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.025 [2024-12-13 23:58:42.707052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.025 [2024-12-13 23:58:42.707062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:12.025 [2024-12-13 23:58:42.707071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.025 [2024-12-13 23:58:42.707080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.025 [2024-12-13 23:58:42.707130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.025 [2024-12-13 23:58:42.707152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:12.025 [2024-12-13 23:58:42.707161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.025 [2024-12-13 23:58:42.707170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.025 [2024-12-13 23:58:42.707214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.025 [2024-12-13 23:58:42.707224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:12.025 [2024-12-13 23:58:42.707232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.025 [2024-12-13 23:58:42.707240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.025 [2024-12-13 23:58:42.707380] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.779 ms, result 0 00:25:13.020 00:25:13.020 00:25:13.020 23:58:43 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:15.567 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:15.567 23:58:45 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:15.567 [2024-12-13 23:58:45.759352] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:15.567 [2024-12-13 23:58:45.759450] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77974 ] 00:25:15.567 [2024-12-13 23:58:45.904928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.567 [2024-12-13 23:58:46.099395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.829 [2024-12-13 23:58:46.386790] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:15.829 [2024-12-13 23:58:46.386869] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:15.829 [2024-12-13 23:58:46.543445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.829 [2024-12-13 23:58:46.543708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:15.829 [2024-12-13 23:58:46.543734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:15.829 [2024-12-13 23:58:46.543749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.829 [2024-12-13 23:58:46.543817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.829 [2024-12-13 23:58:46.543828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:15.829 [2024-12-13 23:58:46.543838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:15.829 [2024-12-13 23:58:46.543846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.829 [2024-12-13 23:58:46.543868] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:15.829 [2024-12-13 23:58:46.544673] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:15.829 [2024-12-13 23:58:46.544701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.829 [2024-12-13 23:58:46.544711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:15.829 [2024-12-13 23:58:46.544721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms 00:25:15.829 [2024-12-13 23:58:46.544729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.829 [2024-12-13 23:58:46.546335] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:16.092 [2024-12-13 23:58:46.560528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.092 [2024-12-13 23:58:46.560573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:16.092 [2024-12-13 23:58:46.560587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.194 ms 00:25:16.092 [2024-12-13 23:58:46.560595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.092 [2024-12-13 23:58:46.560678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.092 [2024-12-13 23:58:46.560688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:16.092 [2024-12-13 23:58:46.560697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:16.092 [2024-12-13 23:58:46.560704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.092 [2024-12-13 23:58:46.568671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.092 [2024-12-13 
23:58:46.568710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:16.092 [2024-12-13 23:58:46.568720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.878 ms 00:25:16.092 [2024-12-13 23:58:46.568728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.092 [2024-12-13 23:58:46.568821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.092 [2024-12-13 23:58:46.568831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:16.092 [2024-12-13 23:58:46.568839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:25:16.092 [2024-12-13 23:58:46.568848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.092 [2024-12-13 23:58:46.568893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.092 [2024-12-13 23:58:46.568903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:16.092 [2024-12-13 23:58:46.568911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:16.092 [2024-12-13 23:58:46.568918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.092 [2024-12-13 23:58:46.568949] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:16.092 [2024-12-13 23:58:46.573058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.092 [2024-12-13 23:58:46.573093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:16.092 [2024-12-13 23:58:46.573104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.121 ms 00:25:16.092 [2024-12-13 23:58:46.573112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.092 [2024-12-13 23:58:46.573150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.092 [2024-12-13 23:58:46.573158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:16.092 [2024-12-13 23:58:46.573167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:16.092 [2024-12-13 23:58:46.573177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.092 [2024-12-13 23:58:46.573226] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:16.092 [2024-12-13 23:58:46.573249] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:25:16.092 [2024-12-13 23:58:46.573285] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:16.092 [2024-12-13 23:58:46.573301] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:25:16.092 [2024-12-13 23:58:46.573376] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:25:16.092 [2024-12-13 23:58:46.573387] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:16.092 [2024-12-13 23:58:46.573400] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:25:16.092 [2024-12-13 23:58:46.573411] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:16.092 [2024-12-13 23:58:46.573421] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:16.092 [2024-12-13 23:58:46.573430] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:16.092 [2024-12-13 23:58:46.573437] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:16.092 [2024-12-13 23:58:46.573445] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:25:16.092 [2024-12-13 23:58:46.573452] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:25:16.092 [2024-12-13 23:58:46.573460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.092 [2024-12-13 23:58:46.573469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:16.092 [2024-12-13 23:58:46.573498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:25:16.092 [2024-12-13 23:58:46.573506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.092 [2024-12-13 23:58:46.573571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.092 [2024-12-13 23:58:46.573579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:16.092 [2024-12-13 23:58:46.573587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:25:16.092 [2024-12-13 23:58:46.573595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.092 [2024-12-13 23:58:46.573668] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:16.092 [2024-12-13 23:58:46.573678] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:16.092 [2024-12-13 23:58:46.573687] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:16.092 [2024-12-13 23:58:46.573695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.092 [2024-12-13 23:58:46.573703] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:16.093 [2024-12-13 23:58:46.573711] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:16.093 [2024-12-13 23:58:46.573718] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:16.093 [2024-12-13 23:58:46.573725] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:16.093 [2024-12-13 23:58:46.573732] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:16.093 [2024-12-13 23:58:46.573740] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:16.093 [2024-12-13 23:58:46.573747] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:16.093 [2024-12-13 23:58:46.573753] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:16.093 [2024-12-13 23:58:46.573760] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:16.093 [2024-12-13 23:58:46.573771] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:16.093 [2024-12-13 23:58:46.573778] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:25:16.093 [2024-12-13 23:58:46.573785] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.093 [2024-12-13 23:58:46.573798] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:16.093 [2024-12-13 23:58:46.573806] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:25:16.093 [2024-12-13 23:58:46.573813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:25:16.093 [2024-12-13 23:58:46.573819] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:25:16.093 [2024-12-13 23:58:46.573826] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:25:16.093 [2024-12-13 23:58:46.573833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:25:16.093 [2024-12-13 23:58:46.573840] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:16.093 [2024-12-13 23:58:46.573847] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:16.093 [2024-12-13 23:58:46.573854] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:16.093 [2024-12-13 23:58:46.573861] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:16.093 [2024-12-13 23:58:46.573868] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:25:16.093 [2024-12-13 23:58:46.573875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:16.093 [2024-12-13 23:58:46.573882] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:16.093 [2024-12-13 23:58:46.573889] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:16.093 [2024-12-13 23:58:46.573895] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:16.093 [2024-12-13 23:58:46.573903] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:16.093 [2024-12-13 23:58:46.573910] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:25:16.093 [2024-12-13 23:58:46.573917] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:16.093 [2024-12-13 23:58:46.573924] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:16.093 [2024-12-13 23:58:46.573931] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:16.093 [2024-12-13 23:58:46.573938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:16.093 [2024-12-13 23:58:46.573945] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:16.093 [2024-12-13 23:58:46.573952] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:25:16.093 [2024-12-13 23:58:46.573958] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:16.093 [2024-12-13 23:58:46.573965] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:16.093 [2024-12-13 23:58:46.573976] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:16.093 [2024-12-13 23:58:46.573984] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:16.093 [2024-12-13 23:58:46.573991] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.093 [2024-12-13 23:58:46.573999] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:16.093 [2024-12-13 23:58:46.574007] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:16.093 [2024-12-13 23:58:46.574014] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:16.093 [2024-12-13 23:58:46.574022] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:16.093 [2024-12-13 23:58:46.574028] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:16.093 [2024-12-13 23:58:46.574036] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:16.093 [2024-12-13 23:58:46.574043] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:16.093 [2024-12-13 23:58:46.574053] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:16.093 [2024-12-13 23:58:46.574062] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:16.093 [2024-12-13 23:58:46.574069] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:25:16.093 [2024-12-13 23:58:46.574076] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:25:16.093 [2024-12-13 23:58:46.574083] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:25:16.093 [2024-12-13 23:58:46.574091] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:25:16.093 [2024-12-13 23:58:46.574099] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:25:16.093 [2024-12-13 23:58:46.574106] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:25:16.093 [2024-12-13 23:58:46.574113] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:25:16.093 [2024-12-13 23:58:46.574120] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:25:16.093 [2024-12-13 23:58:46.574127] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:25:16.093 [2024-12-13 23:58:46.574135] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:25:16.093 [2024-12-13 23:58:46.574143] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:25:16.093 [2024-12-13 23:58:46.574151] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:25:16.093 [2024-12-13 23:58:46.574158] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:16.093 [2024-12-13 23:58:46.574166] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:16.093 [2024-12-13 23:58:46.574174] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:16.093 [2024-12-13 23:58:46.574182] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:16.093 [2024-12-13 23:58:46.574191] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:16.093 [2024-12-13 23:58:46.574198] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:25:16.093 [2024-12-13 23:58:46.574205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.093 [2024-12-13 23:58:46.574213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:16.093 [2024-12-13 23:58:46.574221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:25:16.093 [2024-12-13 23:58:46.574227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.093 [2024-12-13 23:58:46.593007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.093 [2024-12-13 23:58:46.593049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:16.093 [2024-12-13 23:58:46.593062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.737 ms 00:25:16.093 [2024-12-13 23:58:46.593077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.093 [2024-12-13 23:58:46.593172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.093 [2024-12-13 23:58:46.593181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:16.093 [2024-12-13 23:58:46.593190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:16.093 [2024-12-13 23:58:46.593199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.093 [2024-12-13 23:58:46.635808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.093 [2024-12-13 23:58:46.635842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:16.093 [2024-12-13 23:58:46.635853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.557 ms 00:25:16.093 [2024-12-13 23:58:46.635861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.093 [2024-12-13 23:58:46.635898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.093 [2024-12-13 23:58:46.635907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:16.093 [2024-12-13 23:58:46.635916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:16.093 [2024-12-13 23:58:46.635923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.093 [2024-12-13 23:58:46.636272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.093 [2024-12-13 23:58:46.636293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:16.093 [2024-12-13 23:58:46.636302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:25:16.093 [2024-12-13 23:58:46.636313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.093 [2024-12-13 23:58:46.636420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.093 [2024-12-13 23:58:46.636428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:16.093 [2024-12-13 23:58:46.636436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:25:16.093 [2024-12-13 23:58:46.636443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.093 [2024-12-13 23:58:46.650132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.093 [2024-12-13 23:58:46.650156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:16.093 [2024-12-13 23:58:46.650166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.670 ms 00:25:16.093 [2024-12-13 
23:58:46.650173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.093 [2024-12-13 23:58:46.663078] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:16.093 [2024-12-13 23:58:46.663105] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:16.093 [2024-12-13 23:58:46.663115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.093 [2024-12-13 23:58:46.663122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:16.093 [2024-12-13 23:58:46.663130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.859 ms 00:25:16.094 [2024-12-13 23:58:46.663137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.094 [2024-12-13 23:58:46.687769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.094 [2024-12-13 23:58:46.687796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:16.094 [2024-12-13 23:58:46.687807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.597 ms 00:25:16.094 [2024-12-13 23:58:46.687815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.094 [2024-12-13 23:58:46.699707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.094 [2024-12-13 23:58:46.699731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:16.094 [2024-12-13 23:58:46.699740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.855 ms 00:25:16.094 [2024-12-13 23:58:46.699747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.094 [2024-12-13 23:58:46.711711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.094 [2024-12-13 23:58:46.711743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:16.094 [2024-12-13 23:58:46.711753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.931 ms 00:25:16.094 [2024-12-13 23:58:46.711759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.094 [2024-12-13 23:58:46.712128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.094 [2024-12-13 23:58:46.712141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:16.094 [2024-12-13 23:58:46.712149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:25:16.094 [2024-12-13 23:58:46.712156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.094 [2024-12-13 23:58:46.771021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.094 [2024-12-13 23:58:46.771057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:16.094 [2024-12-13 23:58:46.771068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.848 ms 00:25:16.094 [2024-12-13 23:58:46.771076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.094 [2024-12-13 23:58:46.781840] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:16.094 [2024-12-13 23:58:46.784137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.094 [2024-12-13 23:58:46.784164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:16.094 [2024-12-13 23:58:46.784176] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.019 ms 00:25:16.094 [2024-12-13 23:58:46.784189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.094 [2024-12-13 23:58:46.784247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.094 [2024-12-13 23:58:46.784257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:16.094 [2024-12-13 23:58:46.784267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:16.094 [2024-12-13 23:58:46.784276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.094 [2024-12-13 23:58:46.784917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.094 [2024-12-13 23:58:46.784944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:16.094 [2024-12-13 23:58:46.784954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:25:16.094 [2024-12-13 23:58:46.784961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.094 [2024-12-13 23:58:46.786211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.094 [2024-12-13 23:58:46.786236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:25:16.094 [2024-12-13 23:58:46.786246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.226 ms 00:25:16.094 [2024-12-13 23:58:46.786253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.094 [2024-12-13 23:58:46.786280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.094 [2024-12-13 23:58:46.786288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:16.094 [2024-12-13 23:58:46.786301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:16.094 [2024-12-13 23:58:46.786308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.094 [2024-12-13 23:58:46.786340] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:16.094 [2024-12-13 23:58:46.786349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.094 [2024-12-13 23:58:46.786359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:16.094 [2024-12-13 23:58:46.786367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:16.094 [2024-12-13 23:58:46.786374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.094 [2024-12-13 23:58:46.810679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.094 [2024-12-13 23:58:46.810715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:16.094 [2024-12-13 23:58:46.810727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.288 ms 00:25:16.094 [2024-12-13 23:58:46.810735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.094 [2024-12-13 23:58:46.810810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.094 [2024-12-13 23:58:46.810820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:16.094 [2024-12-13 23:58:46.810828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:16.094 [2024-12-13 23:58:46.810836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.094 [2024-12-13 23:58:46.811866] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 267.960 ms, result 0 00:25:17.483  [2024-12-13T23:58:49.158Z] Copying: 17/1024 [MB] (17 MBps) [2024-12-13T23:58:50.104Z] Copying: 33/1024 [MB] (15 MBps) [2024-12-13T23:58:51.047Z] Copying: 50/1024 [MB] (17 MBps) [2024-12-13T23:58:51.990Z] Copying: 71/1024 [MB] (20 MBps) [2024-12-13T23:58:53.379Z] Copying: 86/1024 [MB] (15 MBps) [2024-12-13T23:58:54.323Z] Copying: 97/1024 [MB] (10 MBps) [2024-12-13T23:58:55.269Z] Copying: 108/1024 [MB] (11 MBps) [2024-12-13T23:58:56.210Z] Copying: 120/1024 [MB] (12 MBps) [2024-12-13T23:58:57.153Z] Copying: 131/1024 [MB] (10 MBps) [2024-12-13T23:58:58.097Z] Copying: 143/1024 [MB] (12 MBps) [2024-12-13T23:58:59.042Z] Copying: 154/1024 [MB] (11 MBps) [2024-12-13T23:59:00.430Z] Copying: 165/1024 [MB] (10 MBps) [2024-12-13T23:59:01.003Z] Copying: 183/1024 [MB] (18 MBps) [2024-12-13T23:59:02.392Z] Copying: 196/1024 [MB] (12 MBps) [2024-12-13T23:59:03.336Z] Copying: 206/1024 [MB] (10 MBps) [2024-12-13T23:59:04.281Z] Copying: 217/1024 [MB] (10 MBps) [2024-12-13T23:59:05.228Z] Copying: 228/1024 [MB] (10 MBps) [2024-12-13T23:59:06.173Z] Copying: 238/1024 [MB] (10 MBps) [2024-12-13T23:59:07.118Z] Copying: 248/1024 [MB] (10 MBps) [2024-12-13T23:59:08.063Z] Copying: 259/1024 [MB] (10 MBps) [2024-12-13T23:59:09.008Z] Copying: 270/1024 [MB] (11 MBps) [2024-12-13T23:59:10.010Z] Copying: 281/1024 [MB] (10 MBps) [2024-12-13T23:59:11.404Z] Copying: 292/1024 [MB] (10 MBps) [2024-12-13T23:59:12.344Z] Copying: 303/1024 [MB] (10 MBps) [2024-12-13T23:59:13.288Z] Copying: 318/1024 [MB] (14 MBps) [2024-12-13T23:59:14.233Z] Copying: 332/1024 [MB] (14 MBps) [2024-12-13T23:59:15.178Z] Copying: 346/1024 [MB] (13 MBps) [2024-12-13T23:59:16.119Z] Copying: 363/1024 [MB] (17 MBps) [2024-12-13T23:59:17.065Z] Copying: 384/1024 [MB] (20 MBps) [2024-12-13T23:59:18.007Z] Copying: 398/1024 [MB] (14 MBps) [2024-12-13T23:59:19.397Z] Copying: 415/1024 [MB] (16 MBps) [2024-12-13T23:59:20.341Z] Copying: 430/1024 [MB] (15 MBps) [2024-12-13T23:59:21.285Z] Copying: 445/1024 [MB] (14 MBps) [2024-12-13T23:59:22.229Z] Copying: 466/1024 [MB] (20 MBps) [2024-12-13T23:59:23.174Z] Copying: 480/1024 [MB] (14 MBps) [2024-12-13T23:59:24.116Z] Copying: 497/1024 [MB] (16 MBps) [2024-12-13T23:59:25.061Z] Copying: 507/1024 [MB] (10 MBps) [2024-12-13T23:59:26.008Z] Copying: 526/1024 [MB] (18 MBps) [2024-12-13T23:59:27.397Z] Copying: 548/1024 [MB] (22 MBps) [2024-12-13T23:59:28.341Z] Copying: 566/1024 [MB] (18 MBps) [2024-12-13T23:59:29.285Z] Copying: 592/1024 [MB] (25 MBps) [2024-12-13T23:59:30.228Z] Copying: 609/1024 [MB] (17 MBps) [2024-12-13T23:59:31.170Z] Copying: 628/1024 [MB] (18 MBps) [2024-12-13T23:59:32.140Z] Copying: 643/1024 [MB] (15 MBps) [2024-12-13T23:59:33.082Z] Copying: 658/1024 [MB] (15 MBps) [2024-12-13T23:59:34.028Z] Copying: 678/1024 [MB] (19 MBps) [2024-12-13T23:59:35.418Z] Copying: 691/1024 [MB] (13 MBps) [2024-12-13T23:59:36.361Z] Copying: 710/1024 [MB] (19 MBps) [2024-12-13T23:59:37.306Z] Copying: 724/1024 [MB] (13 MBps) [2024-12-13T23:59:38.250Z] Copying: 736/1024 [MB] (12 MBps) [2024-12-13T23:59:39.193Z] Copying: 749/1024 [MB] (12 MBps) [2024-12-13T23:59:40.136Z] Copying: 759/1024 [MB] (10 MBps) [2024-12-13T23:59:41.081Z] Copying: 776/1024 [MB] (16 MBps) [2024-12-13T23:59:42.025Z] Copying: 794/1024 [MB] (17 MBps) [2024-12-13T23:59:43.411Z] Copying: 814/1024 [MB] (19 MBps) [2024-12-13T23:59:44.353Z] Copying: 825/1024 [MB] (11 MBps) [2024-12-13T23:59:45.295Z] Copying: 847/1024 [MB] (21 MBps) [2024-12-13T23:59:46.236Z] Copying: 867/1024 
[MB] (20 MBps) [2024-12-13T23:59:47.179Z] Copying: 886/1024 [MB] (18 MBps) [2024-12-13T23:59:48.122Z] Copying: 906/1024 [MB] (19 MBps) [2024-12-13T23:59:49.067Z] Copying: 918/1024 [MB] (11 MBps) [2024-12-13T23:59:50.011Z] Copying: 931/1024 [MB] (13 MBps) [2024-12-13T23:59:51.402Z] Copying: 945/1024 [MB] (13 MBps) [2024-12-13T23:59:52.346Z] Copying: 958/1024 [MB] (13 MBps) [2024-12-13T23:59:53.290Z] Copying: 979/1024 [MB] (20 MBps) [2024-12-13T23:59:54.237Z] Copying: 995/1024 [MB] (16 MBps) [2024-12-13T23:59:55.189Z] Copying: 1010/1024 [MB] (14 MBps) [2024-12-13T23:59:55.469Z] Copying: 1020/1024 [MB] (10 MBps) [2024-12-13T23:59:55.469Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-13 23:59:55.386784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.737 [2024-12-13 23:59:55.386859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:24.737 [2024-12-13 23:59:55.386879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:24.737 [2024-12-13 23:59:55.386890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.737 [2024-12-13 23:59:55.386924] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:24.737 [2024-12-13 23:59:55.391404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.737 [2024-12-13 23:59:55.391465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:24.737 [2024-12-13 23:59:55.391490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.459 ms 00:26:24.737 [2024-12-13 23:59:55.391500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.737 [2024-12-13 23:59:55.391807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.737 [2024-12-13 23:59:55.391823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:24.737 [2024-12-13 23:59:55.391836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:26:24.737 [2024-12-13 23:59:55.391845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.737 [2024-12-13 23:59:55.395962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.737 [2024-12-13 23:59:55.395992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:24.737 [2024-12-13 23:59:55.396009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.100 ms 00:26:24.737 [2024-12-13 23:59:55.396020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.737 [2024-12-13 23:59:55.402273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.737 [2024-12-13 23:59:55.402317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:26:24.737 [2024-12-13 23:59:55.402329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.202 ms 00:26:24.737 [2024-12-13 23:59:55.402338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.737 [2024-12-13 23:59:55.429837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.737 [2024-12-13 23:59:55.429888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:24.737 [2024-12-13 23:59:55.429901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.419 ms 00:26:24.737 [2024-12-13 23:59:55.429910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.737 [2024-12-13 
23:59:55.446802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.737 [2024-12-13 23:59:55.446851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:24.737 [2024-12-13 23:59:55.446865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.839 ms 00:26:24.737 [2024-12-13 23:59:55.446881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.737 [2024-12-13 23:59:55.457603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.737 [2024-12-13 23:59:55.457654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:24.737 [2024-12-13 23:59:55.457667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.663 ms 00:26:24.737 [2024-12-13 23:59:55.457677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.012 [2024-12-13 23:59:55.484450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.012 [2024-12-13 23:59:55.484511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:25.013 [2024-12-13 23:59:55.484524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.755 ms 00:26:25.013 [2024-12-13 23:59:55.484532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.013 [2024-12-13 23:59:55.510665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.013 [2024-12-13 23:59:55.510714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:25.013 [2024-12-13 23:59:55.510740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.082 ms 00:26:25.013 [2024-12-13 23:59:55.510748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.013 [2024-12-13 23:59:55.536137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.013 [2024-12-13 23:59:55.536187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:25.013 [2024-12-13 23:59:55.536200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.339 ms 00:26:25.013 [2024-12-13 23:59:55.536208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.013 [2024-12-13 23:59:55.561605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.013 [2024-12-13 23:59:55.561652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:25.013 [2024-12-13 23:59:55.561665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.302 ms 00:26:25.013 [2024-12-13 23:59:55.561673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.013 [2024-12-13 23:59:55.561719] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:25.013 [2024-12-13 23:59:55.561744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:25.013 [2024-12-13 23:59:55.561756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3072 / 261120 wr_cnt: 1 state: open 00:26:25.013 [2024-12-13 23:59:55.561765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:25.013 [2024-12-13 23:59:55.561774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:25.013 [2024-12-13 23:59:55.561782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:25.013 
[2024-12-13 23:59:55.561791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6 through Band 100 (95 identical entries): 0 / 261120 wr_cnt: 0 state: free 00:26:25.014 [2024-12-13 23:59:55.562626] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:25.014 [2024-12-13 23:59:55.562635] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5f2f7aca-291e-475a-a6a4-424021edf1a7 00:26:25.014 [2024-12-13 23:59:55.562644] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264192 00:26:25.014 [2024-12-13 23:59:55.562653] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:25.014 [2024-12-13 23:59:55.562661] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
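The dump above is the shutdown-time view of ftl0: every band still free, 960 media writes, no user writes at all. The WAF line that follows is simply total media writes divided by user writes, so with user writes at zero the ratio is undefined and ftl_debug.c prints inf. A back-of-envelope restatement (the waf helper below is illustrative, not part of the SPDK test scripts):

  waf() {  # write amplification factor: media writes / user writes
    local total_writes=$1 user_writes=$2
    if (( user_writes == 0 )); then
      echo inf   # no user I/O reached the device, so the division is undefined
    else
      awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "%.3f\n", t / u }'
    fi
  }
  waf 960 0   # -> inf, matching the dump for this run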
00:26:25.014 [2024-12-13 23:59:55.562672] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:25.014 [2024-12-13 23:59:55.562681] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:25.014 [2024-12-13 23:59:55.562690] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:25.014 [2024-12-13 23:59:55.562699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:25.014 [2024-12-13 23:59:55.562714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:25.014 [2024-12-13 23:59:55.562722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:25.014 [2024-12-13 23:59:55.562729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.014 [2024-12-13 23:59:55.562737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:25.014 [2024-12-13 23:59:55.562749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.011 ms 00:26:25.014 [2024-12-13 23:59:55.562758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.014 [2024-12-13 23:59:55.576538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.014 [2024-12-13 23:59:55.576583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:25.014 [2024-12-13 23:59:55.576595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.744 ms 00:26:25.014 [2024-12-13 23:59:55.576604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.014 [2024-12-13 23:59:55.576844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.014 [2024-12-13 23:59:55.576859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:25.014 [2024-12-13 23:59:55.576871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:26:25.014 [2024-12-13 23:59:55.576880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.014 [2024-12-13 23:59:55.615932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.014 [2024-12-13 23:59:55.615982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:25.014 [2024-12-13 23:59:55.615994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.014 [2024-12-13 23:59:55.616004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.014 [2024-12-13 23:59:55.616085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.014 [2024-12-13 23:59:55.616096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:25.014 [2024-12-13 23:59:55.616105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.014 [2024-12-13 23:59:55.616113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.014 [2024-12-13 23:59:55.616199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.014 [2024-12-13 23:59:55.616213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:25.014 [2024-12-13 23:59:55.616224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.014 [2024-12-13 23:59:55.616232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.014 [2024-12-13 23:59:55.616249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.014 [2024-12-13 23:59:55.616263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name:
Initialize valid map 00:26:25.014 [2024-12-13 23:59:55.616271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.014 [2024-12-13 23:59:55.616280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.014 [2024-12-13 23:59:55.697377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.014 [2024-12-13 23:59:55.697431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:25.014 [2024-12-13 23:59:55.697445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.014 [2024-12-13 23:59:55.697454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.014 [2024-12-13 23:59:55.729423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.014 [2024-12-13 23:59:55.729478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:25.014 [2024-12-13 23:59:55.729509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.014 [2024-12-13 23:59:55.729518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.014 [2024-12-13 23:59:55.729585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.014 [2024-12-13 23:59:55.729596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:25.015 [2024-12-13 23:59:55.729605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.015 [2024-12-13 23:59:55.729614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.015 [2024-12-13 23:59:55.729658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.015 [2024-12-13 23:59:55.729670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:25.015 [2024-12-13 23:59:55.729684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.015 [2024-12-13 23:59:55.729693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.015 [2024-12-13 23:59:55.729803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.015 [2024-12-13 23:59:55.729816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:25.015 [2024-12-13 23:59:55.729825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.015 [2024-12-13 23:59:55.729834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.015 [2024-12-13 23:59:55.729866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.015 [2024-12-13 23:59:55.729877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:25.015 [2024-12-13 23:59:55.729885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.015 [2024-12-13 23:59:55.729897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.015 [2024-12-13 23:59:55.729941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.015 [2024-12-13 23:59:55.729951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:25.015 [2024-12-13 23:59:55.729960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.015 [2024-12-13 23:59:55.729968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.015 [2024-12-13 23:59:55.730013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.015 [2024-12-13 
23:59:55.730024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:25.015 [2024-12-13 23:59:55.730037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.015 [2024-12-13 23:59:55.730049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.015 [2024-12-13 23:59:55.730182] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.370 ms, result 0 00:26:25.959 00:26:25.959 00:26:25.959 23:59:56 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:28.512 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:28.512 23:59:58 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:28.512 23:59:58 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:28.512 23:59:58 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:28.512 23:59:58 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:28.512 23:59:58 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:28.512 23:59:59 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:28.512 23:59:59 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:28.512 23:59:59 -- ftl/dirty_shutdown.sh@37 -- # killprocess 76126 00:26:28.512 23:59:59 -- common/autotest_common.sh@936 -- # '[' -z 76126 ']' 00:26:28.512 23:59:59 -- common/autotest_common.sh@940 -- # kill -0 76126 00:26:28.512 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (76126) - No such process 00:26:28.512 Process with pid 76126 is not found 00:26:28.512 23:59:59 -- common/autotest_common.sh@963 -- # echo 'Process with pid 76126 is not found' 00:26:28.512 23:59:59 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:28.774 Remove shared memory files 00:26:28.774 23:59:59 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:28.774 23:59:59 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:28.774 23:59:59 -- ftl/common.sh@205 -- # rm -f rm -f 00:26:28.774 23:59:59 -- ftl/common.sh@206 -- # rm -f rm -f 00:26:28.774 23:59:59 -- ftl/common.sh@207 -- # rm -f rm -f 00:26:28.774 23:59:59 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:28.774 23:59:59 -- ftl/common.sh@209 -- # rm -f rm -f 00:26:28.774 00:26:28.774 real 4m5.496s 00:26:28.774 user 4m18.012s 00:26:28.774 sys 0m23.720s 00:26:28.774 ************************************ 00:26:28.774 23:59:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:28.774 23:59:59 -- common/autotest_common.sh@10 -- # set +x 00:26:28.774 END TEST ftl_dirty_shutdown 00:26:28.774 ************************************ 00:26:28.774 23:59:59 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:26:28.774 23:59:59 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:26:28.774 23:59:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:28.774 23:59:59 -- common/autotest_common.sh@10 -- # set +x 00:26:28.774 ************************************ 00:26:28.774 START TEST ftl_upgrade_shutdown 00:26:28.774 ************************************ 00:26:28.774 23:59:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:26:29.037 * Looking for test 
storage... 00:26:29.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:29.037 23:59:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:29.037 23:59:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:29.037 23:59:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:29.037 23:59:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:29.037 23:59:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:29.037 23:59:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:29.037 23:59:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:29.037 23:59:59 -- scripts/common.sh@335 -- # IFS=.-: 00:26:29.037 23:59:59 -- scripts/common.sh@335 -- # read -ra ver1 00:26:29.037 23:59:59 -- scripts/common.sh@336 -- # IFS=.-: 00:26:29.037 23:59:59 -- scripts/common.sh@336 -- # read -ra ver2 00:26:29.037 23:59:59 -- scripts/common.sh@337 -- # local 'op=<' 00:26:29.037 23:59:59 -- scripts/common.sh@339 -- # ver1_l=2 00:26:29.037 23:59:59 -- scripts/common.sh@340 -- # ver2_l=1 00:26:29.037 23:59:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:29.037 23:59:59 -- scripts/common.sh@343 -- # case "$op" in 00:26:29.037 23:59:59 -- scripts/common.sh@344 -- # : 1 00:26:29.037 23:59:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:29.037 23:59:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:29.037 23:59:59 -- scripts/common.sh@364 -- # decimal 1 00:26:29.037 23:59:59 -- scripts/common.sh@352 -- # local d=1 00:26:29.037 23:59:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:29.037 23:59:59 -- scripts/common.sh@354 -- # echo 1 00:26:29.037 23:59:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:29.037 23:59:59 -- scripts/common.sh@365 -- # decimal 2 00:26:29.037 23:59:59 -- scripts/common.sh@352 -- # local d=2 00:26:29.037 23:59:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:29.037 23:59:59 -- scripts/common.sh@354 -- # echo 2 00:26:29.037 23:59:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:29.037 23:59:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:29.037 23:59:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:29.037 23:59:59 -- scripts/common.sh@367 -- # return 0 00:26:29.037 23:59:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:29.037 23:59:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:29.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.038 --rc genhtml_branch_coverage=1 00:26:29.038 --rc genhtml_function_coverage=1 00:26:29.038 --rc genhtml_legend=1 00:26:29.038 --rc geninfo_all_blocks=1 00:26:29.038 --rc geninfo_unexecuted_blocks=1 00:26:29.038 00:26:29.038 ' 00:26:29.038 23:59:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:29.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.038 --rc genhtml_branch_coverage=1 00:26:29.038 --rc genhtml_function_coverage=1 00:26:29.038 --rc genhtml_legend=1 00:26:29.038 --rc geninfo_all_blocks=1 00:26:29.038 --rc geninfo_unexecuted_blocks=1 00:26:29.038 00:26:29.038 ' 00:26:29.038 23:59:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:29.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.038 --rc genhtml_branch_coverage=1 00:26:29.038 --rc genhtml_function_coverage=1 00:26:29.038 --rc genhtml_legend=1 00:26:29.038 --rc geninfo_all_blocks=1 00:26:29.038 --rc geninfo_unexecuted_blocks=1 00:26:29.038 00:26:29.038 ' 00:26:29.038 
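The xtrace block above is autotest_common.sh deciding which lcov flags to use: it extracts the installed lcov version (1.15) and runs lt 1.15 2, which lands in cmp_versions from scripts/common.sh. That comparison splits both version strings on '.', '-' and ':' and walks the fields numerically, treating missing fields as 0. A condensed sketch of the logic (not the verbatim scripts/common.sh source):

  cmp_versions() {   # usage: cmp_versions 1.15 '<' 2
    local op=$2 v ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
  }
  cmp_versions 1.15 '<' 2 && echo 'lcov predates 2.x'   # true in this run: first field 1 < 2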
23:59:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:29.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.038 --rc genhtml_branch_coverage=1 00:26:29.038 --rc genhtml_function_coverage=1 00:26:29.038 --rc genhtml_legend=1 00:26:29.038 --rc geninfo_all_blocks=1 00:26:29.038 --rc geninfo_unexecuted_blocks=1 00:26:29.038 00:26:29.038 ' 00:26:29.038 23:59:59 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:29.038 23:59:59 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:29.038 23:59:59 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:29.038 23:59:59 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:29.038 23:59:59 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:29.038 23:59:59 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:29.038 23:59:59 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:29.038 23:59:59 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:29.038 23:59:59 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:29.038 23:59:59 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:29.038 23:59:59 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:29.038 23:59:59 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:29.038 23:59:59 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:29.038 23:59:59 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:29.038 23:59:59 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:29.038 23:59:59 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:29.038 23:59:59 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:29.038 23:59:59 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:29.038 23:59:59 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:29.038 23:59:59 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:29.038 23:59:59 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:29.038 23:59:59 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:29.038 23:59:59 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:29.038 23:59:59 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:29.038 23:59:59 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:29.038 23:59:59 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:29.038 23:59:59 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:29.038 23:59:59 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:29.038 23:59:59 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:29.038 23:59:59 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:29.038 23:59:59 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:29.038 23:59:59 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:29.038 23:59:59 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:26:29.038 23:59:59 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:26:29.038 23:59:59 -- ftl/upgrade_shutdown.sh@21 -- # export 
FTL_BASE_SIZE=20480 00:26:29.038 23:59:59 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:29.038 23:59:59 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:26:29.038 23:59:59 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:26:29.038 23:59:59 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:29.038 23:59:59 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:29.038 23:59:59 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:29.038 23:59:59 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:29.038 23:59:59 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:29.038 23:59:59 -- ftl/common.sh@81 -- # local base_bdev= 00:26:29.038 23:59:59 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:29.038 23:59:59 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:29.038 23:59:59 -- ftl/common.sh@89 -- # spdk_tgt_pid=78799 00:26:29.038 23:59:59 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:29.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.038 23:59:59 -- ftl/common.sh@91 -- # waitforlisten 78799 00:26:29.038 23:59:59 -- common/autotest_common.sh@829 -- # '[' -z 78799 ']' 00:26:29.038 23:59:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.038 23:59:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:29.038 23:59:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.038 23:59:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:29.038 23:59:59 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:29.038 23:59:59 -- common/autotest_common.sh@10 -- # set +x 00:26:29.038 [2024-12-13 23:59:59.728147] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
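At this point the harness has launched spdk_tgt pinned to core 0 and recorded it as pid 78799; waitforlisten then polls until the target answers on /var/tmp/spdk.sock (max_retries=100, per the trace) before the test proceeds. A rough sketch of that wait loop, with rpc_get_methods assumed as the liveness probe (the real helper in autotest_common.sh carries more bookkeeping):

  waitforlisten() {   # usage: waitforlisten <pid> [rpc_addr]
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for (( i = 0; i < 100; i++ )); do   # max_retries=100, as traced above
      kill -0 "$pid" 2> /dev/null || return 1   # target died while starting
      # one successful RPC round-trip means the socket is up and serving
      "$rootdir/scripts/rpc.py" -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
      sleep 0.1
    done
    return 1
  }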
00:26:29.038 [2024-12-13 23:59:59.728287] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78799 ] 00:26:29.301 [2024-12-13 23:59:59.877632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.562 [2024-12-14 00:00:00.114296] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:29.562 [2024-12-14 00:00:00.114541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.953 00:00:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:30.953 00:00:01 -- common/autotest_common.sh@862 -- # return 0 00:26:30.953 00:00:01 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:30.953 00:00:01 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:30.953 00:00:01 -- ftl/common.sh@99 -- # local params 00:26:30.953 00:00:01 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:30.953 00:00:01 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:30.953 00:00:01 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:30.953 00:00:01 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:26:30.953 00:00:01 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:30.953 00:00:01 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:30.953 00:00:01 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:30.953 00:00:01 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:26:30.953 00:00:01 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:30.953 00:00:01 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:30.953 00:00:01 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:30.953 00:00:01 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:30.953 00:00:01 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:26:30.953 00:00:01 -- ftl/common.sh@54 -- # local name=base 00:26:30.953 00:00:01 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:26:30.953 00:00:01 -- ftl/common.sh@56 -- # local size=20480 00:26:30.953 00:00:01 -- ftl/common.sh@59 -- # local base_bdev 00:26:30.953 00:00:01 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0 00:26:30.953 00:00:01 -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:30.953 00:00:01 -- ftl/common.sh@62 -- # local base_size 00:26:30.953 00:00:01 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:30.953 00:00:01 -- common/autotest_common.sh@1367 -- # local bdev_name=basen1 00:26:30.953 00:00:01 -- common/autotest_common.sh@1368 -- # local bdev_info 00:26:30.953 00:00:01 -- common/autotest_common.sh@1369 -- # local bs 00:26:30.953 00:00:01 -- common/autotest_common.sh@1370 -- # local nb 00:26:30.953 00:00:01 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:31.212 00:00:01 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:26:31.212 { 00:26:31.212 "name": "basen1", 00:26:31.212 "aliases": [ 00:26:31.212 "b4299c83-c0cd-4e85-b857-e74544686602" 00:26:31.212 ], 00:26:31.212 "product_name": "NVMe disk", 00:26:31.212 "block_size": 4096, 00:26:31.212 "num_blocks": 1310720, 00:26:31.212 "uuid": "b4299c83-c0cd-4e85-b857-e74544686602", 00:26:31.212 "assigned_rate_limits": { 00:26:31.212 "rw_ios_per_sec": 0, 00:26:31.212 
"rw_mbytes_per_sec": 0, 00:26:31.212 "r_mbytes_per_sec": 0, 00:26:31.212 "w_mbytes_per_sec": 0 00:26:31.212 }, 00:26:31.212 "claimed": true, 00:26:31.212 "claim_type": "read_many_write_one", 00:26:31.212 "zoned": false, 00:26:31.212 "supported_io_types": { 00:26:31.212 "read": true, 00:26:31.212 "write": true, 00:26:31.212 "unmap": true, 00:26:31.212 "write_zeroes": true, 00:26:31.212 "flush": true, 00:26:31.212 "reset": true, 00:26:31.212 "compare": true, 00:26:31.212 "compare_and_write": false, 00:26:31.212 "abort": true, 00:26:31.212 "nvme_admin": true, 00:26:31.212 "nvme_io": true 00:26:31.212 }, 00:26:31.212 "driver_specific": { 00:26:31.212 "nvme": [ 00:26:31.212 { 00:26:31.212 "pci_address": "0000:00:07.0", 00:26:31.212 "trid": { 00:26:31.212 "trtype": "PCIe", 00:26:31.212 "traddr": "0000:00:07.0" 00:26:31.212 }, 00:26:31.212 "ctrlr_data": { 00:26:31.212 "cntlid": 0, 00:26:31.212 "vendor_id": "0x1b36", 00:26:31.212 "model_number": "QEMU NVMe Ctrl", 00:26:31.212 "serial_number": "12341", 00:26:31.212 "firmware_revision": "8.0.0", 00:26:31.212 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:31.212 "oacs": { 00:26:31.212 "security": 0, 00:26:31.212 "format": 1, 00:26:31.212 "firmware": 0, 00:26:31.212 "ns_manage": 1 00:26:31.212 }, 00:26:31.212 "multi_ctrlr": false, 00:26:31.212 "ana_reporting": false 00:26:31.212 }, 00:26:31.212 "vs": { 00:26:31.212 "nvme_version": "1.4" 00:26:31.212 }, 00:26:31.212 "ns_data": { 00:26:31.212 "id": 1, 00:26:31.212 "can_share": false 00:26:31.212 } 00:26:31.212 } 00:26:31.212 ], 00:26:31.213 "mp_policy": "active_passive" 00:26:31.213 } 00:26:31.213 } 00:26:31.213 ]' 00:26:31.213 00:00:01 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:26:31.213 00:00:01 -- common/autotest_common.sh@1372 -- # bs=4096 00:26:31.213 00:00:01 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:26:31.213 00:00:01 -- common/autotest_common.sh@1373 -- # nb=1310720 00:26:31.213 00:00:01 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:26:31.213 00:00:01 -- common/autotest_common.sh@1377 -- # echo 5120 00:26:31.213 00:00:01 -- ftl/common.sh@63 -- # base_size=5120 00:26:31.213 00:00:01 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:31.213 00:00:01 -- ftl/common.sh@67 -- # clear_lvols 00:26:31.213 00:00:01 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:31.213 00:00:01 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:31.471 00:00:01 -- ftl/common.sh@28 -- # stores=69ac1b3b-5ea5-4573-915e-d4e0ff63eaac 00:26:31.471 00:00:01 -- ftl/common.sh@29 -- # for lvs in $stores 00:26:31.471 00:00:01 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 69ac1b3b-5ea5-4573-915e-d4e0ff63eaac 00:26:31.471 00:00:02 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:31.729 00:00:02 -- ftl/common.sh@68 -- # lvs=788e2019-85c6-4316-8e13-8193a9d00068 00:26:31.729 00:00:02 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 788e2019-85c6-4316-8e13-8193a9d00068 00:26:31.988 00:00:02 -- ftl/common.sh@107 -- # base_bdev=e69463c3-1d0b-4989-8710-40d5caf7a277 00:26:31.988 00:00:02 -- ftl/common.sh@108 -- # [[ -z e69463c3-1d0b-4989-8710-40d5caf7a277 ]] 00:26:31.988 00:00:02 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 e69463c3-1d0b-4989-8710-40d5caf7a277 5120 00:26:31.988 00:00:02 -- ftl/common.sh@35 -- # local name=cache 00:26:31.988 00:00:02 -- 
ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:26:31.988 00:00:02 -- ftl/common.sh@37 -- # local base_bdev=e69463c3-1d0b-4989-8710-40d5caf7a277 00:26:31.988 00:00:02 -- ftl/common.sh@38 -- # local cache_size=5120 00:26:31.988 00:00:02 -- ftl/common.sh@41 -- # get_bdev_size e69463c3-1d0b-4989-8710-40d5caf7a277 00:26:31.988 00:00:02 -- common/autotest_common.sh@1367 -- # local bdev_name=e69463c3-1d0b-4989-8710-40d5caf7a277 00:26:31.988 00:00:02 -- common/autotest_common.sh@1368 -- # local bdev_info 00:26:31.988 00:00:02 -- common/autotest_common.sh@1369 -- # local bs 00:26:31.988 00:00:02 -- common/autotest_common.sh@1370 -- # local nb 00:26:31.988 00:00:02 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e69463c3-1d0b-4989-8710-40d5caf7a277 00:26:32.246 00:00:02 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:26:32.246 { 00:26:32.246 "name": "e69463c3-1d0b-4989-8710-40d5caf7a277", 00:26:32.246 "aliases": [ 00:26:32.246 "lvs/basen1p0" 00:26:32.246 ], 00:26:32.246 "product_name": "Logical Volume", 00:26:32.246 "block_size": 4096, 00:26:32.246 "num_blocks": 5242880, 00:26:32.246 "uuid": "e69463c3-1d0b-4989-8710-40d5caf7a277", 00:26:32.246 "assigned_rate_limits": { 00:26:32.246 "rw_ios_per_sec": 0, 00:26:32.246 "rw_mbytes_per_sec": 0, 00:26:32.246 "r_mbytes_per_sec": 0, 00:26:32.246 "w_mbytes_per_sec": 0 00:26:32.246 }, 00:26:32.246 "claimed": false, 00:26:32.246 "zoned": false, 00:26:32.246 "supported_io_types": { 00:26:32.246 "read": true, 00:26:32.246 "write": true, 00:26:32.246 "unmap": true, 00:26:32.246 "write_zeroes": true, 00:26:32.246 "flush": false, 00:26:32.246 "reset": true, 00:26:32.246 "compare": false, 00:26:32.246 "compare_and_write": false, 00:26:32.247 "abort": false, 00:26:32.247 "nvme_admin": false, 00:26:32.247 "nvme_io": false 00:26:32.247 }, 00:26:32.247 "driver_specific": { 00:26:32.247 "lvol": { 00:26:32.247 "lvol_store_uuid": "788e2019-85c6-4316-8e13-8193a9d00068", 00:26:32.247 "base_bdev": "basen1", 00:26:32.247 "thin_provision": true, 00:26:32.247 "snapshot": false, 00:26:32.247 "clone": false, 00:26:32.247 "esnap_clone": false 00:26:32.247 } 00:26:32.247 } 00:26:32.247 } 00:26:32.247 ]' 00:26:32.247 00:00:02 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:26:32.247 00:00:02 -- common/autotest_common.sh@1372 -- # bs=4096 00:26:32.247 00:00:02 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:26:32.247 00:00:02 -- common/autotest_common.sh@1373 -- # nb=5242880 00:26:32.247 00:00:02 -- common/autotest_common.sh@1376 -- # bdev_size=20480 00:26:32.247 00:00:02 -- common/autotest_common.sh@1377 -- # echo 20480 00:26:32.247 00:00:02 -- ftl/common.sh@41 -- # local base_size=1024 00:26:32.247 00:00:02 -- ftl/common.sh@44 -- # local nvc_bdev 00:26:32.247 00:00:02 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:26:32.504 00:00:03 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:32.504 00:00:03 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:32.504 00:00:03 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:32.763 00:00:03 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:32.763 00:00:03 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:32.763 00:00:03 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d e69463c3-1d0b-4989-8710-40d5caf7a277 -c cachen1p0 --l2p_dram_limit 2 00:26:32.763 
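Condensed, the bring-up traced through here is a fixed RPC sequence: attach the base namespace (basen1), create an lvstore and carve a 20 GiB thin lvol on it (5242880 blocks of 4096 bytes is exactly 20480 MiB, which is why get_bdev_size echoes 20480), attach the cache controller, split a 5120 MiB write-buffer partition off cachen1, and hand both to bdev_ftl_create. Replayed by hand it would look roughly like this; the UUIDs and PCI addresses are the ones from this run, and the rpc shorthand for the rpc.py path is just for readability:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0    # exposes basen1
  $rpc bdev_lvol_create_lvstore basen1 lvs                            # lvstore 788e2019-...
  $rpc bdev_lvol_create basen1p0 20480 -t -u 788e2019-85c6-4316-8e13-8193a9d00068
  $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0   # exposes cachen1
  $rpc bdev_split_create cachen1 -s 5120 1                            # yields cachen1p0
  $rpc -t 60 bdev_ftl_create -b ftl -d e69463c3-1d0b-4989-8710-40d5caf7a277 -c cachen1p0 --l2p_dram_limit 2

The generous -t 60 on the last call matters: as the startup trace below shows, a first-time create scrubs the whole NV cache before it returns.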
[2024-12-14 00:00:03.444981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.763 [2024-12-14 00:00:03.445017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:32.763 [2024-12-14 00:00:03.445030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:32.763 [2024-12-14 00:00:03.445038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.763 [2024-12-14 00:00:03.445073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.763 [2024-12-14 00:00:03.445080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:32.763 [2024-12-14 00:00:03.445088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:26:32.763 [2024-12-14 00:00:03.445094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.763 [2024-12-14 00:00:03.445109] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:32.763 [2024-12-14 00:00:03.445661] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:32.763 [2024-12-14 00:00:03.445683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.763 [2024-12-14 00:00:03.445690] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:32.763 [2024-12-14 00:00:03.445700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.575 ms 00:26:32.764 [2024-12-14 00:00:03.445705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.764 [2024-12-14 00:00:03.445751] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 052ee666-b6a5-4829-8298-3f5b39348d72 00:26:32.764 [2024-12-14 00:00:03.446669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.764 [2024-12-14 00:00:03.446694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:32.764 [2024-12-14 00:00:03.446702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:32.764 [2024-12-14 00:00:03.446709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.764 [2024-12-14 00:00:03.451338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.764 [2024-12-14 00:00:03.451366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:32.764 [2024-12-14 00:00:03.451374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.595 ms 00:26:32.764 [2024-12-14 00:00:03.451381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.764 [2024-12-14 00:00:03.451410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.764 [2024-12-14 00:00:03.451418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:32.764 [2024-12-14 00:00:03.451424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:26:32.764 [2024-12-14 00:00:03.451433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.764 [2024-12-14 00:00:03.451457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.764 [2024-12-14 00:00:03.451466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:32.764 [2024-12-14 00:00:03.451472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:32.764 [2024-12-14 00:00:03.451488] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:26:32.764 [2024-12-14 00:00:03.451506] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:32.764 [2024-12-14 00:00:03.454449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.764 [2024-12-14 00:00:03.454473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:32.764 [2024-12-14 00:00:03.454489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.946 ms 00:26:32.764 [2024-12-14 00:00:03.454495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.764 [2024-12-14 00:00:03.454517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.764 [2024-12-14 00:00:03.454523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:32.764 [2024-12-14 00:00:03.454531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:32.764 [2024-12-14 00:00:03.454536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.764 [2024-12-14 00:00:03.454550] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:32.764 [2024-12-14 00:00:03.454634] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:32.764 [2024-12-14 00:00:03.454647] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:32.764 [2024-12-14 00:00:03.454655] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:26:32.764 [2024-12-14 00:00:03.454664] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:32.764 [2024-12-14 00:00:03.454671] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:32.764 [2024-12-14 00:00:03.454680] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:32.764 [2024-12-14 00:00:03.454685] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:32.764 [2024-12-14 00:00:03.454693] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:32.764 [2024-12-14 00:00:03.454699] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:32.764 [2024-12-14 00:00:03.454706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.764 [2024-12-14 00:00:03.454716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:32.764 [2024-12-14 00:00:03.454724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.157 ms 00:26:32.764 [2024-12-14 00:00:03.454729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.764 [2024-12-14 00:00:03.454777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.764 [2024-12-14 00:00:03.454782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:32.764 [2024-12-14 00:00:03.454789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:26:32.764 [2024-12-14 00:00:03.454797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.764 [2024-12-14 00:00:03.454853] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:32.764 [2024-12-14 00:00:03.454865] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:32.764 [2024-12-14 
00:00:03.454873] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:32.764 [2024-12-14 00:00:03.454879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:32.764 [2024-12-14 00:00:03.454887] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:32.764 [2024-12-14 00:00:03.454892] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:32.764 [2024-12-14 00:00:03.454899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:32.764 [2024-12-14 00:00:03.454904] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:32.764 [2024-12-14 00:00:03.454911] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:32.764 [2024-12-14 00:00:03.454916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:32.764 [2024-12-14 00:00:03.454923] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:32.764 [2024-12-14 00:00:03.454928] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:32.764 [2024-12-14 00:00:03.454936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:32.764 [2024-12-14 00:00:03.454941] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:32.764 [2024-12-14 00:00:03.454948] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:32.764 [2024-12-14 00:00:03.454953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:32.764 [2024-12-14 00:00:03.454961] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:32.764 [2024-12-14 00:00:03.454966] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:32.764 [2024-12-14 00:00:03.454972] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:32.764 [2024-12-14 00:00:03.454977] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:32.764 [2024-12-14 00:00:03.454983] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:32.764 [2024-12-14 00:00:03.454989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:32.764 [2024-12-14 00:00:03.454996] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:32.764 [2024-12-14 00:00:03.455001] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:32.764 [2024-12-14 00:00:03.455007] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:32.764 [2024-12-14 00:00:03.455012] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:32.764 [2024-12-14 00:00:03.455018] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:32.764 [2024-12-14 00:00:03.455023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:32.764 [2024-12-14 00:00:03.455028] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:32.764 [2024-12-14 00:00:03.455033] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:32.764 [2024-12-14 00:00:03.455039] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:32.764 [2024-12-14 00:00:03.455044] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:32.764 [2024-12-14 00:00:03.455051] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:32.764 [2024-12-14 00:00:03.455055] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:32.764 [2024-12-14 
00:00:03.455062] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:32.764 [2024-12-14 00:00:03.455067] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:32.764 [2024-12-14 00:00:03.455074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:32.764 [2024-12-14 00:00:03.455078] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:32.764 [2024-12-14 00:00:03.455085] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:32.764 [2024-12-14 00:00:03.455090] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:32.764 [2024-12-14 00:00:03.455096] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:32.764 [2024-12-14 00:00:03.455101] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:32.764 [2024-12-14 00:00:03.455108] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:32.764 [2024-12-14 00:00:03.455114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:32.764 [2024-12-14 00:00:03.455126] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:32.764 [2024-12-14 00:00:03.455132] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:32.764 [2024-12-14 00:00:03.455138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:32.764 [2024-12-14 00:00:03.455143] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:32.764 [2024-12-14 00:00:03.455150] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:32.764 [2024-12-14 00:00:03.455155] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:32.764 [2024-12-14 00:00:03.455162] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:32.764 [2024-12-14 00:00:03.455170] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:32.764 [2024-12-14 00:00:03.455178] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:32.764 [2024-12-14 00:00:03.455183] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:32.764 [2024-12-14 00:00:03.455190] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:32.764 [2024-12-14 00:00:03.455195] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:32.765 [2024-12-14 00:00:03.455202] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:32.765 [2024-12-14 00:00:03.455207] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:32.765 [2024-12-14 00:00:03.455214] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:32.765 [2024-12-14 00:00:03.455219] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:32.765 [2024-12-14 00:00:03.455225] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:32.765 [2024-12-14 00:00:03.455230] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:32.765 [2024-12-14 00:00:03.455238] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:32.765 [2024-12-14 00:00:03.455244] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:32.765 [2024-12-14 00:00:03.455253] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:32.765 [2024-12-14 00:00:03.455258] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:32.765 [2024-12-14 00:00:03.455266] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:32.765 [2024-12-14 00:00:03.455271] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:32.765 [2024-12-14 00:00:03.455278] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:32.765 [2024-12-14 00:00:03.455283] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:32.765 [2024-12-14 00:00:03.455290] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:32.765 [2024-12-14 00:00:03.455295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.765 [2024-12-14 00:00:03.455302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:32.765 [2024-12-14 00:00:03.455308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.479 ms 00:26:32.765 [2024-12-14 00:00:03.455314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.765 [2024-12-14 00:00:03.467066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.765 [2024-12-14 00:00:03.467099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:32.765 [2024-12-14 00:00:03.467108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.720 ms 00:26:32.765 [2024-12-14 00:00:03.467115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.765 [2024-12-14 00:00:03.467143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.765 [2024-12-14 00:00:03.467153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:32.765 [2024-12-14 00:00:03.467160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:32.765 [2024-12-14 00:00:03.467167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.765 [2024-12-14 00:00:03.491036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.765 [2024-12-14 00:00:03.491064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:32.765 [2024-12-14 00:00:03.491073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.837 ms 00:26:32.765 [2024-12-14 
00:00:03.491081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.765 [2024-12-14 00:00:03.491104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.765 [2024-12-14 00:00:03.491112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:32.765 [2024-12-14 00:00:03.491120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:32.765 [2024-12-14 00:00:03.491128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.765 [2024-12-14 00:00:03.491429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.765 [2024-12-14 00:00:03.491452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:32.765 [2024-12-14 00:00:03.491460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.266 ms 00:26:32.765 [2024-12-14 00:00:03.491468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.765 [2024-12-14 00:00:03.491513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.765 [2024-12-14 00:00:03.491524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:32.765 [2024-12-14 00:00:03.491530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:26:32.765 [2024-12-14 00:00:03.491537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.025 [2024-12-14 00:00:03.503558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.025 [2024-12-14 00:00:03.503585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:33.025 [2024-12-14 00:00:03.503592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.008 ms 00:26:33.026 [2024-12-14 00:00:03.503599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.026 [2024-12-14 00:00:03.512548] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:33.026 [2024-12-14 00:00:03.513254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.026 [2024-12-14 00:00:03.513277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:33.026 [2024-12-14 00:00:03.513287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.596 ms 00:26:33.026 [2024-12-14 00:00:03.513292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.026 [2024-12-14 00:00:03.535247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.026 [2024-12-14 00:00:03.535277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:33.026 [2024-12-14 00:00:03.535288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 21.934 ms 00:26:33.026 [2024-12-14 00:00:03.535294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.026 [2024-12-14 00:00:03.535316] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 
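In the layout and superblock dumps above, blk_offs and blk_sz count 4096-byte FTL blocks, so the hex sizes line up with the MiB figures: the 0x100000-block region is the 4096.00 MiB data_nvc area, and the 0xe80-block l2p region (14.50 MiB) comfortably holds the reported 3774873 L2P entries at 4 bytes apiece, about 14.4 MiB. A throwaway converter to double-check the table (blk_sz_to_mib is illustrative, not an SPDK helper):

  blk_sz_to_mib() {   # hex or decimal block count -> MiB at 4096 B per block
    awk -v n="$(( $1 ))" 'BEGIN { printf "%.2f MiB\n", n * 4096 / 1048576 }'
  }
  blk_sz_to_mib 0x100000   # 4096.00 MiB, the data_nvc region above
  blk_sz_to_mib 0xe80      # 14.50 MiB, the l2p region above

The scrub notice this leads into is also easy to put a number on: per the duration logged below, wiping the 4 GiB NV cache data region takes 6983.541 ms here, roughly 590 MiB/s.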
00:26:33.026 [2024-12-14 00:00:03.535324] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:26:41.163 [2024-12-14 00:00:10.518888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.163 [2024-12-14 00:00:10.518947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:41.163 [2024-12-14 00:00:10.518965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6983.541 ms 00:26:41.163 [2024-12-14 00:00:10.518973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.163 [2024-12-14 00:00:10.519062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.163 [2024-12-14 00:00:10.519074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:41.163 [2024-12-14 00:00:10.519087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:26:41.163 [2024-12-14 00:00:10.519095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.163 [2024-12-14 00:00:10.543360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.163 [2024-12-14 00:00:10.543394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:41.163 [2024-12-14 00:00:10.543408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.221 ms 00:26:41.163 [2024-12-14 00:00:10.543416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.163 [2024-12-14 00:00:10.567463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.163 [2024-12-14 00:00:10.567501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:41.163 [2024-12-14 00:00:10.567515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.008 ms 00:26:41.163 [2024-12-14 00:00:10.567522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.163 [2024-12-14 00:00:10.567829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.163 [2024-12-14 00:00:10.567847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:41.163 [2024-12-14 00:00:10.567857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.274 ms 00:26:41.163 [2024-12-14 00:00:10.567865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.163 [2024-12-14 00:00:10.649883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.163 [2024-12-14 00:00:10.649915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:41.163 [2024-12-14 00:00:10.649928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 81.982 ms 00:26:41.163 [2024-12-14 00:00:10.649936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.163 [2024-12-14 00:00:10.675000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.163 [2024-12-14 00:00:10.675041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:41.163 [2024-12-14 00:00:10.675054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.026 ms 00:26:41.163 [2024-12-14 00:00:10.675062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.163 [2024-12-14 00:00:10.676248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.163 [2024-12-14 00:00:10.676278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:41.163 [2024-12-14 00:00:10.676290] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.143 ms 00:26:41.163 [2024-12-14 00:00:10.676298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.163 [2024-12-14 00:00:10.700006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.163 [2024-12-14 00:00:10.700036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:41.163 [2024-12-14 00:00:10.700055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.673 ms 00:26:41.163 [2024-12-14 00:00:10.700063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.163 [2024-12-14 00:00:10.700103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.163 [2024-12-14 00:00:10.700112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:41.163 [2024-12-14 00:00:10.700123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:41.163 [2024-12-14 00:00:10.700130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.163 [2024-12-14 00:00:10.700206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.163 [2024-12-14 00:00:10.700217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:41.163 [2024-12-14 00:00:10.700226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:26:41.163 [2024-12-14 00:00:10.700234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.163 [2024-12-14 00:00:10.701054] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 7255.645 ms, result 0 00:26:41.163 { 00:26:41.163 "name": "ftl", 00:26:41.163 "uuid": "052ee666-b6a5-4829-8298-3f5b39348d72" 00:26:41.163 } 00:26:41.163 00:00:10 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:41.163 [2024-12-14 00:00:10.888516] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.163 00:00:10 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:41.163 00:00:11 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:41.163 [2024-12-14 00:00:11.268881] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:41.163 00:00:11 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:41.163 [2024-12-14 00:00:11.449230] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:41.163 00:00:11 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:41.163 Fill FTL, iteration 1 00:26:41.163 00:00:11 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:41.163 00:00:11 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:41.163 00:00:11 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:41.163 00:00:11 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:41.163 00:00:11 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:41.163 00:00:11 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:41.163 00:00:11 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:41.163 00:00:11 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:41.163 00:00:11 -- ftl/upgrade_shutdown.sh@38 -- # (( 
i = 0 )) 00:26:41.163 00:00:11 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:41.163 00:00:11 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:41.163 00:00:11 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:41.163 00:00:11 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:41.163 00:00:11 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:41.163 00:00:11 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:41.163 00:00:11 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:41.163 00:00:11 -- ftl/common.sh@163 -- # spdk_ini_pid=78965 00:26:41.163 00:00:11 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:41.163 00:00:11 -- ftl/common.sh@165 -- # waitforlisten 78965 /var/tmp/spdk.tgt.sock 00:26:41.163 00:00:11 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:41.163 00:00:11 -- common/autotest_common.sh@829 -- # '[' -z 78965 ']' 00:26:41.163 00:00:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:41.163 00:00:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:41.163 00:00:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:41.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:41.163 00:00:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:41.163 00:00:11 -- common/autotest_common.sh@10 -- # set +x 00:26:41.163 [2024-12-14 00:00:11.858276] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
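The xtrace above is ftl/common.sh's tcp_initiator_setup at work: each tcp_dd call launches its own short-lived spdk_tgt to act as the NVMe/TCP initiator, pinned to core 1 and answering on a private RPC socket so it cannot collide with the target under test. Reduced to the calls visible in this trace (backgrounding assumed), the bring-up is roughly:

  # per-invocation initiator-side target, isolated on core 1
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock &
  spdk_ini_pid=$!
  # autotest_common.sh's waitforlisten blocks until the RPC socket answers
  waitforlisten "$spdk_ini_pid" /var/tmp/spdk.tgt.sock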
00:26:41.163 [2024-12-14 00:00:11.858674] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78965 ] 00:26:41.423 [2024-12-14 00:00:12.008556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.681 [2024-12-14 00:00:12.163151] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:41.681 [2024-12-14 00:00:12.163419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.248 00:00:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:42.248 00:00:12 -- common/autotest_common.sh@862 -- # return 0 00:26:42.248 00:00:12 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:42.248 ftln1 00:26:42.248 00:00:12 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:42.248 00:00:12 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:42.507 00:00:13 -- ftl/common.sh@173 -- # echo ']}' 00:26:42.507 00:00:13 -- ftl/common.sh@176 -- # killprocess 78965 00:26:42.507 00:00:13 -- common/autotest_common.sh@936 -- # '[' -z 78965 ']' 00:26:42.507 00:00:13 -- common/autotest_common.sh@940 -- # kill -0 78965 00:26:42.507 00:00:13 -- common/autotest_common.sh@941 -- # uname 00:26:42.507 00:00:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:42.507 00:00:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78965 00:26:42.507 killing process with pid 78965 00:26:42.507 00:00:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:42.507 00:00:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:42.507 00:00:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78965' 00:26:42.507 00:00:13 -- common/autotest_common.sh@955 -- # kill 78965 00:26:42.507 00:00:13 -- common/autotest_common.sh@960 -- # wait 78965 00:26:43.893 00:00:14 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:43.893 00:00:14 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:43.893 [2024-12-14 00:00:14.347297] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
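With the helper target up, the trace attaches the exported subsystem as a local NVMe bdev (bdev_nvme_attach_controller prints the resulting namespace, ftln1), snapshots the bdev subsystem configuration, and kills the helper again. That snapshot is what lets spdk_dd run standalone: judging by the --json=.../ini.json argument in the invocation above, the echo/save_subsystem_config/echo trio is being captured into that file, roughly:

  # wrap the initiator's bdev config so spdk_dd can replay it by itself
  # (the redirection into ini.json is inferred from the --json flag, not traced)
  {
      echo '{"subsystems": ['
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
          save_subsystem_config -n bdev
      echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json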
00:26:43.893 [2024-12-14 00:00:14.347406] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79005 ] 00:26:43.893 [2024-12-14 00:00:14.494939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.151 [2024-12-14 00:00:14.636780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.526  [2024-12-14T00:00:17.194Z] Copying: 248/1024 [MB] (248 MBps) [2024-12-14T00:00:18.127Z] Copying: 486/1024 [MB] (238 MBps) [2024-12-14T00:00:19.062Z] Copying: 731/1024 [MB] (245 MBps) [2024-12-14T00:00:19.325Z] Copying: 978/1024 [MB] (247 MBps) [2024-12-14T00:00:19.895Z] Copying: 1024/1024 [MB] (average 242 MBps) 00:26:49.163 00:26:49.163 00:00:19 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:26:49.163 Calculate MD5 checksum, iteration 1 00:26:49.163 00:00:19 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:26:49.163 00:00:19 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:49.163 00:00:19 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:49.163 00:00:19 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:49.163 00:00:19 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:49.163 00:00:19 -- ftl/common.sh@154 -- # return 0 00:26:49.163 00:00:19 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:49.163 [2024-12-14 00:00:19.857563] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
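The five Copying lines above are spdk_dd's progress meter for the first 1 GiB fill, which averaged 242 MBps over NVMe/TCP at queue depth 2. The checksum pass now starting is the fill's mirror image: same block size and count, but --ib/--of in place of --if/--ob and --skip in place of --seek. Put together with the md5sum and cut steps traced a little further down, the iteration amounts to:

  # read the just-written GiB back out of the FTL namespace into a file...
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
      --bs=1048576 --count=1024 --qd=2 --skip=0
  # ...and keep its fingerprint, as the sums[i]= assignment below does
  sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')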
00:26:49.163 [2024-12-14 00:00:19.857674] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79059 ] 00:26:49.422 [2024-12-14 00:00:20.006067] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.680 [2024-12-14 00:00:20.154143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.054  [2024-12-14T00:00:22.045Z] Copying: 665/1024 [MB] (665 MBps) [2024-12-14T00:00:22.612Z] Copying: 1024/1024 [MB] (average 652 MBps) 00:26:51.880 00:26:52.140 00:00:22 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:26:52.141 00:00:22 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:54.055 00:00:24 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:54.055 Fill FTL, iteration 2 00:26:54.055 00:00:24 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ec0d642a8d02ab505b54a4c6198d27bc 00:26:54.055 00:00:24 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:54.055 00:00:24 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:54.055 00:00:24 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:26:54.055 00:00:24 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:54.055 00:00:24 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:54.055 00:00:24 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:54.055 00:00:24 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:54.055 00:00:24 -- ftl/common.sh@154 -- # return 0 00:26:54.055 00:00:24 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:54.055 [2024-12-14 00:00:24.726620] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
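Iteration 2 re-runs the fill with --seek=1024. Offsets here are counted in blocks, so with bs=1048576 that seek lands the second GiB exactly after the first and the two iterations tile 2 GiB of the namespace without overlap; the matching read-back advances --skip the same way:

  # offsets are counted in 1 MiB blocks (bs=1048576)
  # iteration 1: seek=0    -> bytes [0, 1 GiB)
  # iteration 2: seek=1024 -> bytes [1 GiB, 2 GiB), since 1024 * 1 MiB = 1 GiB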
00:26:54.055 [2024-12-14 00:00:24.726730] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79115 ] 00:26:54.315 [2024-12-14 00:00:24.874997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.315 [2024-12-14 00:00:25.016047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.691  [2024-12-14T00:00:27.398Z] Copying: 252/1024 [MB] (252 MBps) [2024-12-14T00:00:28.332Z] Copying: 504/1024 [MB] (252 MBps) [2024-12-14T00:00:29.707Z] Copying: 754/1024 [MB] (250 MBps) [2024-12-14T00:00:29.707Z] Copying: 990/1024 [MB] (236 MBps) [2024-12-14T00:00:30.273Z] Copying: 1024/1024 [MB] (average 247 MBps) 00:26:59.541 00:26:59.541 Calculate MD5 checksum, iteration 2 00:26:59.541 00:00:30 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:26:59.541 00:00:30 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:26:59.541 00:00:30 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:59.541 00:00:30 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:59.541 00:00:30 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:59.541 00:00:30 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:59.541 00:00:30 -- ftl/common.sh@154 -- # return 0 00:26:59.541 00:00:30 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:59.541 [2024-12-14 00:00:30.139947] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
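After this pass the sums array holds one digest per written GiB (ec0d... above, and a second value recorded below). This section of the log never shows the digests being compared, but in a shutdown/upgrade test they are only useful as ground truth to re-check once the target has been restarted; a purely hypothetical sketch of that pattern, with the loop and file variables assumed rather than taken from this trace:

  # hypothetical post-restart verification: re-read each GiB, compare digests
  for (( i = 0; i < iterations; i++ )); do
      tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 \
          --qd=2 --skip=$(( i * 1024 ))
      [[ $(md5sum "$testfile" | cut -f1 -d' ') == "${sums[i]}" ]] || exit 1
  done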
00:26:59.541 [2024-12-14 00:00:30.140032] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79179 ] 00:26:59.799 [2024-12-14 00:00:30.275720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.799 [2024-12-14 00:00:30.412659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.174  [2024-12-14T00:00:32.472Z] Copying: 641/1024 [MB] (641 MBps) [2024-12-14T00:00:33.853Z] Copying: 1024/1024 [MB] (average 637 MBps) 00:27:03.121 00:27:03.121 00:00:33 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:03.121 00:00:33 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:05.665 00:00:35 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:05.665 00:00:35 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=c52cf72a2815a5a6e04f0cb9cc8e59d9 00:27:05.665 00:00:35 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:05.665 00:00:35 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:05.665 00:00:35 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:05.665 [2024-12-14 00:00:35.991592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.665 [2024-12-14 00:00:35.991642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:05.665 [2024-12-14 00:00:35.991655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:05.665 [2024-12-14 00:00:35.991666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.665 [2024-12-14 00:00:35.991691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.665 [2024-12-14 00:00:35.991699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:05.665 [2024-12-14 00:00:35.991707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:05.665 [2024-12-14 00:00:35.991715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.665 [2024-12-14 00:00:35.991735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.665 [2024-12-14 00:00:35.991743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:05.665 [2024-12-14 00:00:35.991757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:05.665 [2024-12-14 00:00:35.991765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.665 [2024-12-14 00:00:35.991829] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.225 ms, result 0 00:27:05.665 true 00:27:05.665 00:00:36 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:05.665 { 00:27:05.665 "name": "ftl", 00:27:05.665 "properties": [ 00:27:05.665 { 00:27:05.665 "name": "superblock_version", 00:27:05.665 "value": 5, 00:27:05.665 "read-only": true 00:27:05.665 }, 00:27:05.665 { 00:27:05.665 "name": "base_device", 00:27:05.665 "bands": [ 00:27:05.665 { 00:27:05.665 "id": 0, 00:27:05.665 "state": "FREE", 00:27:05.665 "validity": 0.0 00:27:05.665 }, 00:27:05.665 { 00:27:05.665 "id": 1, 00:27:05.665 "state": "FREE", 00:27:05.665 "validity": 0.0 00:27:05.665 }, 00:27:05.665 { 00:27:05.665 "id": 2, 00:27:05.665 "state": "FREE", 00:27:05.665 "validity": 0.0 
00:27:05.665 }, 00:27:05.665 { 00:27:05.665 "id": 3, 00:27:05.665 "state": "FREE", 00:27:05.665 "validity": 0.0 00:27:05.665 }, 00:27:05.665 { 00:27:05.665 "id": 4, 00:27:05.665 "state": "FREE", 00:27:05.665 "validity": 0.0 00:27:05.665 }, 00:27:05.665 { 00:27:05.665 "id": 5, 00:27:05.665 "state": "FREE", 00:27:05.665 "validity": 0.0 00:27:05.665 }, 00:27:05.665 { 00:27:05.665 "id": 6, 00:27:05.665 "state": "FREE", 00:27:05.665 "validity": 0.0 00:27:05.665 }, 00:27:05.665 { 00:27:05.665 "id": 7, 00:27:05.665 "state": "FREE", 00:27:05.665 "validity": 0.0 00:27:05.665 }, 00:27:05.665 { 00:27:05.665 "id": 8, 00:27:05.665 "state": "FREE", 00:27:05.665 "validity": 0.0 00:27:05.665 }, 00:27:05.665 { 00:27:05.665 "id": 9, 00:27:05.665 "state": "FREE", 00:27:05.665 "validity": 0.0 00:27:05.665 }, 00:27:05.665 { 00:27:05.665 "id": 10, 00:27:05.665 "state": "FREE", 00:27:05.665 "validity": 0.0 00:27:05.665 }, 00:27:05.665 { 00:27:05.665 "id": 11, 00:27:05.665 "state": "FREE", 00:27:05.665 "validity": 0.0 00:27:05.665 }, 00:27:05.665 { 00:27:05.665 "id": 12, 00:27:05.665 "state": "FREE", 00:27:05.665 "validity": 0.0 00:27:05.665 }, 00:27:05.665 { 00:27:05.665 "id": 13, 00:27:05.665 "state": "FREE", 00:27:05.665 "validity": 0.0 00:27:05.665 }, 00:27:05.665 { 00:27:05.665 "id": 14, 00:27:05.665 "state": "FREE", 00:27:05.665 "validity": 0.0 00:27:05.665 }, 00:27:05.665 { 00:27:05.665 "id": 15, 00:27:05.665 "state": "FREE", 00:27:05.665 "validity": 0.0 00:27:05.665 }, 00:27:05.665 { 00:27:05.665 "id": 16, 00:27:05.665 "state": "FREE", 00:27:05.665 "validity": 0.0 00:27:05.665 }, 00:27:05.665 { 00:27:05.665 "id": 17, 00:27:05.665 "state": "FREE", 00:27:05.665 "validity": 0.0 00:27:05.665 } 00:27:05.665 ], 00:27:05.665 "read-only": true 00:27:05.665 }, 00:27:05.665 { 00:27:05.665 "name": "cache_device", 00:27:05.665 "type": "bdev", 00:27:05.665 "chunks": [ 00:27:05.665 { 00:27:05.665 "id": 0, 00:27:05.665 "state": "CLOSED", 00:27:05.665 "utilization": 1.0 00:27:05.665 }, 00:27:05.665 { 00:27:05.665 "id": 1, 00:27:05.665 "state": "CLOSED", 00:27:05.665 "utilization": 1.0 00:27:05.665 }, 00:27:05.665 { 00:27:05.665 "id": 2, 00:27:05.665 "state": "OPEN", 00:27:05.665 "utilization": 0.001953125 00:27:05.665 }, 00:27:05.665 { 00:27:05.665 "id": 3, 00:27:05.665 "state": "OPEN", 00:27:05.665 "utilization": 0.0 00:27:05.665 } 00:27:05.665 ], 00:27:05.665 "read-only": true 00:27:05.665 }, 00:27:05.665 { 00:27:05.665 "name": "verbose_mode", 00:27:05.665 "value": true, 00:27:05.666 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:05.666 }, 00:27:05.666 { 00:27:05.666 "name": "prep_upgrade_on_shutdown", 00:27:05.666 "value": false, 00:27:05.666 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:05.666 } 00:27:05.666 ] 00:27:05.666 } 00:27:05.666 00:00:36 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:05.666 [2024-12-14 00:00:36.364046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.666 [2024-12-14 00:00:36.364094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:05.666 [2024-12-14 00:00:36.364105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:05.666 [2024-12-14 00:00:36.364113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.666 [2024-12-14 00:00:36.364135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:27:05.666 [2024-12-14 00:00:36.364142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:05.666 [2024-12-14 00:00:36.364151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:05.666 [2024-12-14 00:00:36.364159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.666 [2024-12-14 00:00:36.364178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.666 [2024-12-14 00:00:36.364185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:05.666 [2024-12-14 00:00:36.364192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:05.666 [2024-12-14 00:00:36.364199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.666 [2024-12-14 00:00:36.364252] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.195 ms, result 0 00:27:05.666 true 00:27:05.666 00:00:36 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:05.666 00:00:36 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:05.666 00:00:36 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:05.927 00:00:36 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:05.927 00:00:36 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:05.927 00:00:36 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:06.189 [2024-12-14 00:00:36.784569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.189 [2024-12-14 00:00:36.784626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:06.189 [2024-12-14 00:00:36.784638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:06.189 [2024-12-14 00:00:36.784645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.189 [2024-12-14 00:00:36.784668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.189 [2024-12-14 00:00:36.784676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:06.189 [2024-12-14 00:00:36.784684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:06.189 [2024-12-14 00:00:36.784692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.189 [2024-12-14 00:00:36.784713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.189 [2024-12-14 00:00:36.784721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:06.189 [2024-12-14 00:00:36.784729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:06.189 [2024-12-14 00:00:36.784737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.189 [2024-12-14 00:00:36.784798] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.216 ms, result 0 00:27:06.189 true 00:27:06.189 00:00:36 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:06.451 { 00:27:06.451 "name": "ftl", 00:27:06.451 "properties": [ 00:27:06.451 { 00:27:06.451 "name": "superblock_version", 00:27:06.451 "value": 5, 00:27:06.451 "read-only": true 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 
"name": "base_device", 00:27:06.451 "bands": [ 00:27:06.451 { 00:27:06.451 "id": 0, 00:27:06.451 "state": "FREE", 00:27:06.451 "validity": 0.0 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 "id": 1, 00:27:06.451 "state": "FREE", 00:27:06.451 "validity": 0.0 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 "id": 2, 00:27:06.451 "state": "FREE", 00:27:06.451 "validity": 0.0 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 "id": 3, 00:27:06.451 "state": "FREE", 00:27:06.451 "validity": 0.0 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 "id": 4, 00:27:06.451 "state": "FREE", 00:27:06.451 "validity": 0.0 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 "id": 5, 00:27:06.451 "state": "FREE", 00:27:06.451 "validity": 0.0 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 "id": 6, 00:27:06.451 "state": "FREE", 00:27:06.451 "validity": 0.0 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 "id": 7, 00:27:06.451 "state": "FREE", 00:27:06.451 "validity": 0.0 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 "id": 8, 00:27:06.451 "state": "FREE", 00:27:06.451 "validity": 0.0 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 "id": 9, 00:27:06.451 "state": "FREE", 00:27:06.451 "validity": 0.0 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 "id": 10, 00:27:06.451 "state": "FREE", 00:27:06.451 "validity": 0.0 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 "id": 11, 00:27:06.451 "state": "FREE", 00:27:06.451 "validity": 0.0 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 "id": 12, 00:27:06.451 "state": "FREE", 00:27:06.451 "validity": 0.0 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 "id": 13, 00:27:06.451 "state": "FREE", 00:27:06.451 "validity": 0.0 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 "id": 14, 00:27:06.451 "state": "FREE", 00:27:06.451 "validity": 0.0 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 "id": 15, 00:27:06.451 "state": "FREE", 00:27:06.451 "validity": 0.0 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 "id": 16, 00:27:06.451 "state": "FREE", 00:27:06.451 "validity": 0.0 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 "id": 17, 00:27:06.451 "state": "FREE", 00:27:06.451 "validity": 0.0 00:27:06.451 } 00:27:06.451 ], 00:27:06.451 "read-only": true 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 "name": "cache_device", 00:27:06.451 "type": "bdev", 00:27:06.451 "chunks": [ 00:27:06.451 { 00:27:06.451 "id": 0, 00:27:06.451 "state": "CLOSED", 00:27:06.451 "utilization": 1.0 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 "id": 1, 00:27:06.451 "state": "CLOSED", 00:27:06.451 "utilization": 1.0 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 "id": 2, 00:27:06.451 "state": "OPEN", 00:27:06.451 "utilization": 0.001953125 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 "id": 3, 00:27:06.451 "state": "OPEN", 00:27:06.451 "utilization": 0.0 00:27:06.451 } 00:27:06.451 ], 00:27:06.451 "read-only": true 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 "name": "verbose_mode", 00:27:06.451 "value": true, 00:27:06.451 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:06.451 }, 00:27:06.451 { 00:27:06.451 "name": "prep_upgrade_on_shutdown", 00:27:06.451 "value": true, 00:27:06.451 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:06.451 } 00:27:06.451 ] 00:27:06.451 } 00:27:06.451 00:00:37 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:06.451 00:00:37 -- ftl/common.sh@130 -- # [[ -n 78799 ]] 00:27:06.451 00:00:37 -- ftl/common.sh@131 -- # killprocess 78799 00:27:06.451 00:00:37 -- common/autotest_common.sh@936 -- # '[' -z 78799 ']' 00:27:06.451 00:00:37 -- 
common/autotest_common.sh@940 -- # kill -0 78799 00:27:06.451 00:00:37 -- common/autotest_common.sh@941 -- # uname 00:27:06.451 00:00:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:06.451 00:00:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78799 00:27:06.451 killing process with pid 78799 00:27:06.451 00:00:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:06.451 00:00:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:06.451 00:00:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78799' 00:27:06.451 00:00:37 -- common/autotest_common.sh@955 -- # kill 78799 00:27:06.451 00:00:37 -- common/autotest_common.sh@960 -- # wait 78799 00:27:07.394 [2024-12-14 00:00:37.787634] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:27:07.394 [2024-12-14 00:00:37.801834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.394 [2024-12-14 00:00:37.801877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:07.394 [2024-12-14 00:00:37.801890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:07.394 [2024-12-14 00:00:37.801898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.394 [2024-12-14 00:00:37.801919] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:07.394 [2024-12-14 00:00:37.804594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.394 [2024-12-14 00:00:37.804624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:07.394 [2024-12-14 00:00:37.804635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.661 ms 00:27:07.394 [2024-12-14 00:00:37.804644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.407 [2024-12-14 00:00:46.984391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.407 [2024-12-14 00:00:46.984517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:17.407 [2024-12-14 00:00:46.984539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9179.685 ms 00:27:17.407 [2024-12-14 00:00:46.984550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.407 [2024-12-14 00:00:46.986746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.407 [2024-12-14 00:00:46.986801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:17.407 [2024-12-14 00:00:46.986814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.171 ms 00:27:17.407 [2024-12-14 00:00:46.986822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.407 [2024-12-14 00:00:46.987954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.407 [2024-12-14 00:00:46.987980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:27:17.407 [2024-12-14 00:00:46.987992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.097 ms 00:27:17.407 [2024-12-14 00:00:46.988000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.407 [2024-12-14 00:00:46.999712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.407 [2024-12-14 00:00:46.999759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:17.407 [2024-12-14 00:00:46.999770] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.655 ms 00:27:17.407 [2024-12-14 00:00:46.999778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.407 [2024-12-14 00:00:47.007636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.407 [2024-12-14 00:00:47.007690] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:17.407 [2024-12-14 00:00:47.007703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.809 ms 00:27:17.407 [2024-12-14 00:00:47.007711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.407 [2024-12-14 00:00:47.007828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.407 [2024-12-14 00:00:47.007841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:17.407 [2024-12-14 00:00:47.007851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:27:17.407 [2024-12-14 00:00:47.007867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.407 [2024-12-14 00:00:47.018737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.407 [2024-12-14 00:00:47.018785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:17.407 [2024-12-14 00:00:47.018796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.852 ms 00:27:17.407 [2024-12-14 00:00:47.018803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.407 [2024-12-14 00:00:47.029713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.407 [2024-12-14 00:00:47.029761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:17.407 [2024-12-14 00:00:47.029771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.863 ms 00:27:17.407 [2024-12-14 00:00:47.029778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.408 [2024-12-14 00:00:47.040286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.408 [2024-12-14 00:00:47.040332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:17.408 [2024-12-14 00:00:47.040343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.463 ms 00:27:17.408 [2024-12-14 00:00:47.040350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.408 [2024-12-14 00:00:47.050849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.408 [2024-12-14 00:00:47.050898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:17.408 [2024-12-14 00:00:47.050908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.401 ms 00:27:17.408 [2024-12-14 00:00:47.050915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.408 [2024-12-14 00:00:47.050960] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:17.408 [2024-12-14 00:00:47.050976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:17.408 [2024-12-14 00:00:47.050987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:17.408 [2024-12-14 00:00:47.050995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:17.408 [2024-12-14 00:00:47.051004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 
wr_cnt: 0 state: free 00:27:17.408 [2024-12-14 00:00:47.051012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:17.408 [2024-12-14 00:00:47.051020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:17.408 [2024-12-14 00:00:47.051027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:17.408 [2024-12-14 00:00:47.051034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:17.408 [2024-12-14 00:00:47.051042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:17.408 [2024-12-14 00:00:47.051049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:17.408 [2024-12-14 00:00:47.051057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:17.408 [2024-12-14 00:00:47.051066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:17.408 [2024-12-14 00:00:47.051074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:17.408 [2024-12-14 00:00:47.051082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:17.408 [2024-12-14 00:00:47.051090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:17.408 [2024-12-14 00:00:47.051110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:17.408 [2024-12-14 00:00:47.051118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:17.408 [2024-12-14 00:00:47.051125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:17.408 [2024-12-14 00:00:47.051135] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:17.408 [2024-12-14 00:00:47.051144] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 052ee666-b6a5-4829-8298-3f5b39348d72 00:27:17.408 [2024-12-14 00:00:47.051152] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:17.408 [2024-12-14 00:00:47.051159] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:27:17.408 [2024-12-14 00:00:47.051167] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:17.408 [2024-12-14 00:00:47.051175] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:17.408 [2024-12-14 00:00:47.051183] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:17.408 [2024-12-14 00:00:47.051194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:17.408 [2024-12-14 00:00:47.051206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:17.408 [2024-12-14 00:00:47.051212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:17.408 [2024-12-14 00:00:47.051218] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:17.408 [2024-12-14 00:00:47.051227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.408 [2024-12-14 00:00:47.051235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:17.408 [2024-12-14 00:00:47.051244] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.267 ms 00:27:17.408 [2024-12-14 00:00:47.051252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.408 [2024-12-14 00:00:47.065410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.408 [2024-12-14 00:00:47.065457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:17.408 [2024-12-14 00:00:47.065470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.125 ms 00:27:17.408 [2024-12-14 00:00:47.065495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.408 [2024-12-14 00:00:47.065725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.408 [2024-12-14 00:00:47.065735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:17.408 [2024-12-14 00:00:47.065744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.198 ms 00:27:17.408 [2024-12-14 00:00:47.065751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.408 [2024-12-14 00:00:47.115207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.408 [2024-12-14 00:00:47.115267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:17.408 [2024-12-14 00:00:47.115280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.408 [2024-12-14 00:00:47.115295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.408 [2024-12-14 00:00:47.115339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.408 [2024-12-14 00:00:47.115348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:17.408 [2024-12-14 00:00:47.115357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.408 [2024-12-14 00:00:47.115364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.408 [2024-12-14 00:00:47.115441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.408 [2024-12-14 00:00:47.115453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:17.408 [2024-12-14 00:00:47.115462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.408 [2024-12-14 00:00:47.115470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.408 [2024-12-14 00:00:47.115508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.408 [2024-12-14 00:00:47.115518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:17.408 [2024-12-14 00:00:47.115527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.408 [2024-12-14 00:00:47.115534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.408 [2024-12-14 00:00:47.198632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.408 [2024-12-14 00:00:47.198696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:17.408 [2024-12-14 00:00:47.198708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.408 [2024-12-14 00:00:47.198718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.408 [2024-12-14 00:00:47.231381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.408 [2024-12-14 00:00:47.231437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:17.408 
[2024-12-14 00:00:47.231449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.408 [2024-12-14 00:00:47.231457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.408 [2024-12-14 00:00:47.231549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.408 [2024-12-14 00:00:47.231560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:17.408 [2024-12-14 00:00:47.231569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.408 [2024-12-14 00:00:47.231578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.408 [2024-12-14 00:00:47.231621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.408 [2024-12-14 00:00:47.231639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:17.408 [2024-12-14 00:00:47.231648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.408 [2024-12-14 00:00:47.231656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.408 [2024-12-14 00:00:47.231761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.408 [2024-12-14 00:00:47.231772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:17.408 [2024-12-14 00:00:47.231780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.408 [2024-12-14 00:00:47.231789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.408 [2024-12-14 00:00:47.231819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.408 [2024-12-14 00:00:47.231829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:17.408 [2024-12-14 00:00:47.231841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.408 [2024-12-14 00:00:47.231850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.408 [2024-12-14 00:00:47.231893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.408 [2024-12-14 00:00:47.231903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:17.408 [2024-12-14 00:00:47.231912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.408 [2024-12-14 00:00:47.231921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.408 [2024-12-14 00:00:47.231974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.408 [2024-12-14 00:00:47.231988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:17.408 [2024-12-14 00:00:47.231997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.408 [2024-12-14 00:00:47.232004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.408 [2024-12-14 00:00:47.232161] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9430.255 ms, result 0 00:27:22.700 00:00:52 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:22.700 00:00:52 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:22.700 00:00:52 -- ftl/common.sh@81 -- # local base_bdev= 00:27:22.700 00:00:52 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:22.700 00:00:52 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:22.700 00:00:52 -- ftl/common.sh@89 -- # spdk_tgt_pid=79404 00:27:22.700 00:00:52 -- 
ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:22.700 00:00:52 -- ftl/common.sh@91 -- # waitforlisten 79404 00:27:22.700 00:00:52 -- common/autotest_common.sh@829 -- # '[' -z 79404 ']' 00:27:22.700 00:00:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.700 00:00:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:22.700 00:00:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.700 00:00:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:22.700 00:00:52 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:22.700 00:00:52 -- common/autotest_common.sh@10 -- # set +x 00:27:22.700 [2024-12-14 00:00:52.719455] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:22.700 [2024-12-14 00:00:52.719620] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79404 ] 00:27:22.700 [2024-12-14 00:00:52.875941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.700 [2024-12-14 00:00:53.113029] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:22.700 [2024-12-14 00:00:53.113259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.271 [2024-12-14 00:00:53.844126] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:23.271 [2024-12-14 00:00:53.844210] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:23.271 [2024-12-14 00:00:53.991272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.271 [2024-12-14 00:00:53.991338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:23.271 [2024-12-14 00:00:53.991353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:23.271 [2024-12-14 00:00:53.991361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.271 [2024-12-14 00:00:53.991427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.271 [2024-12-14 00:00:53.991441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:23.271 [2024-12-14 00:00:53.991450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:27:23.271 [2024-12-14 00:00:53.991458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.271 [2024-12-14 00:00:53.991499] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:23.271 [2024-12-14 00:00:53.992274] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:23.271 [2024-12-14 00:00:53.992302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.271 [2024-12-14 00:00:53.992311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:23.271 [2024-12-14 00:00:53.992320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.825 ms 00:27:23.271 [2024-12-14 00:00:53.992329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:23.271 [2024-12-14 00:00:53.994113] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:23.532 [2024-12-14 00:00:54.008872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.532 [2024-12-14 00:00:54.008929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:23.532 [2024-12-14 00:00:54.008942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.761 ms 00:27:23.532 [2024-12-14 00:00:54.008950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.532 [2024-12-14 00:00:54.009042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.532 [2024-12-14 00:00:54.009052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:23.532 [2024-12-14 00:00:54.009061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:27:23.532 [2024-12-14 00:00:54.009070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.532 [2024-12-14 00:00:54.017655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.532 [2024-12-14 00:00:54.017700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:23.532 [2024-12-14 00:00:54.017710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.497 ms 00:27:23.532 [2024-12-14 00:00:54.017725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.532 [2024-12-14 00:00:54.017772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.017782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:23.533 [2024-12-14 00:00:54.017790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:23.533 [2024-12-14 00:00:54.017798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.017846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.017855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:23.533 [2024-12-14 00:00:54.017865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:23.533 [2024-12-14 00:00:54.017874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.017905] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:23.533 [2024-12-14 00:00:54.022311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.022355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:23.533 [2024-12-14 00:00:54.022369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.417 ms 00:27:23.533 [2024-12-14 00:00:54.022377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.022419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.022429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:23.533 [2024-12-14 00:00:54.022438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:23.533 [2024-12-14 00:00:54.022445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.022516] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 
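This is the restart half of the test. tcp_target_setup relaunched spdk_tgt (pid 79404) straight from tgt.json, evidently the output of the save_config call traced earlier, which is presumably why no transport or subsystem RPCs precede the FTL load here, and the driver finds cachen1p0 again as its write-buffer cache. Because the previous instance went down cleanly with prep_upgrade_on_shutdown set, the trace shows 'Load super block' and 'Validate super block' instead of the first-startup scrub seen earlier. The relaunch, reduced to what is traced above:

  # bring the target back from the configuration saved before shutdown
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"    # default RPC socket, /var/tmp/spdk.sock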
00:27:23.533 [2024-12-14 00:00:54.022542] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:23.533 [2024-12-14 00:00:54.022579] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:23.533 [2024-12-14 00:00:54.022599] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:23.533 [2024-12-14 00:00:54.022676] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:23.533 [2024-12-14 00:00:54.022687] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:23.533 [2024-12-14 00:00:54.022698] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:27:23.533 [2024-12-14 00:00:54.022708] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:23.533 [2024-12-14 00:00:54.022718] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:23.533 [2024-12-14 00:00:54.022727] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:23.533 [2024-12-14 00:00:54.022739] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:23.533 [2024-12-14 00:00:54.022746] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:23.533 [2024-12-14 00:00:54.022756] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:23.533 [2024-12-14 00:00:54.022765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.022774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:23.533 [2024-12-14 00:00:54.022783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.252 ms 00:27:23.533 [2024-12-14 00:00:54.022791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.022856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.022866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:23.533 [2024-12-14 00:00:54.022874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:27:23.533 [2024-12-14 00:00:54.022882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.022959] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:23.533 [2024-12-14 00:00:54.022974] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:23.533 [2024-12-14 00:00:54.022982] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:23.533 [2024-12-14 00:00:54.022992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:23.533 [2024-12-14 00:00:54.023000] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:23.533 [2024-12-14 00:00:54.023008] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:23.533 [2024-12-14 00:00:54.023015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:23.533 [2024-12-14 00:00:54.023022] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:23.533 [2024-12-14 00:00:54.023032] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 
MiB 00:27:23.533 [2024-12-14 00:00:54.023040] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:23.533 [2024-12-14 00:00:54.023048] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:23.533 [2024-12-14 00:00:54.023055] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:23.533 [2024-12-14 00:00:54.023062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:23.533 [2024-12-14 00:00:54.023071] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:23.533 [2024-12-14 00:00:54.023080] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:23.533 [2024-12-14 00:00:54.023087] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:23.533 [2024-12-14 00:00:54.023095] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:23.533 [2024-12-14 00:00:54.023102] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:23.533 [2024-12-14 00:00:54.023109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:23.533 [2024-12-14 00:00:54.023116] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:23.533 [2024-12-14 00:00:54.023123] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:23.533 [2024-12-14 00:00:54.023130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:23.533 [2024-12-14 00:00:54.023138] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:23.533 [2024-12-14 00:00:54.023144] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:23.533 [2024-12-14 00:00:54.023151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:23.533 [2024-12-14 00:00:54.023159] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:23.533 [2024-12-14 00:00:54.023166] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:23.533 [2024-12-14 00:00:54.023172] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:23.533 [2024-12-14 00:00:54.023181] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:23.533 [2024-12-14 00:00:54.023187] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:23.533 [2024-12-14 00:00:54.023193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:23.533 [2024-12-14 00:00:54.023201] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:23.533 [2024-12-14 00:00:54.023208] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:23.533 [2024-12-14 00:00:54.023215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:23.533 [2024-12-14 00:00:54.023223] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:23.533 [2024-12-14 00:00:54.023232] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:23.533 [2024-12-14 00:00:54.023239] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:23.533 [2024-12-14 00:00:54.023246] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:23.533 [2024-12-14 00:00:54.023253] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:23.533 [2024-12-14 00:00:54.023259] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:23.533 [2024-12-14 00:00:54.023265] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:27:23.533 [2024-12-14 00:00:54.023273] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:23.533 [2024-12-14 00:00:54.023281] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:23.533 [2024-12-14 00:00:54.023289] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:23.533 [2024-12-14 00:00:54.023297] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:23.533 [2024-12-14 00:00:54.023309] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:23.533 [2024-12-14 00:00:54.023316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:23.533 [2024-12-14 00:00:54.023325] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:23.533 [2024-12-14 00:00:54.023332] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:23.533 [2024-12-14 00:00:54.023339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:23.533 [2024-12-14 00:00:54.023347] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:23.533 [2024-12-14 00:00:54.023357] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:23.533 [2024-12-14 00:00:54.023370] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:23.533 [2024-12-14 00:00:54.023379] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:23.533 [2024-12-14 00:00:54.023386] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:23.533 [2024-12-14 00:00:54.023395] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:23.533 [2024-12-14 00:00:54.023404] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:23.533 [2024-12-14 00:00:54.023422] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:23.533 [2024-12-14 00:00:54.023430] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:23.533 [2024-12-14 00:00:54.023441] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:23.533 [2024-12-14 00:00:54.023447] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:23.533 [2024-12-14 00:00:54.023456] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:23.533 [2024-12-14 00:00:54.023463] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:23.533 [2024-12-14 00:00:54.023472] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:23.533 [2024-12-14 00:00:54.023498] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 
blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:23.533 [2024-12-14 00:00:54.023506] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:23.533 [2024-12-14 00:00:54.023516] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:23.533 [2024-12-14 00:00:54.023525] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:23.533 [2024-12-14 00:00:54.023532] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:23.533 [2024-12-14 00:00:54.023540] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:23.533 [2024-12-14 00:00:54.023551] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:23.533 [2024-12-14 00:00:54.023559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.023568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:23.533 [2024-12-14 00:00:54.023576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.643 ms 00:27:23.533 [2024-12-14 00:00:54.023584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.042179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.042234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:23.533 [2024-12-14 00:00:54.042246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.545 ms 00:27:23.533 [2024-12-14 00:00:54.042254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.042302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.042310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:23.533 [2024-12-14 00:00:54.042318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:23.533 [2024-12-14 00:00:54.042333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.078111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.078163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:23.533 [2024-12-14 00:00:54.078174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 35.721 ms 00:27:23.533 [2024-12-14 00:00:54.078183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.078226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.078235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:23.533 [2024-12-14 00:00:54.078244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:23.533 [2024-12-14 00:00:54.078251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.078887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.078937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:23.533 [2024-12-14 
00:00:54.078949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.582 ms 00:27:23.533 [2024-12-14 00:00:54.078957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.079006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.079014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:23.533 [2024-12-14 00:00:54.079022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:23.533 [2024-12-14 00:00:54.079030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.098342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.098393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:23.533 [2024-12-14 00:00:54.098405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 19.285 ms 00:27:23.533 [2024-12-14 00:00:54.098414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.113281] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:23.533 [2024-12-14 00:00:54.113338] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:23.533 [2024-12-14 00:00:54.113350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.113358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:23.533 [2024-12-14 00:00:54.113369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.796 ms 00:27:23.533 [2024-12-14 00:00:54.113386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.128927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.128982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:23.533 [2024-12-14 00:00:54.128993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.478 ms 00:27:23.533 [2024-12-14 00:00:54.129002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.142048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.142101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:23.533 [2024-12-14 00:00:54.142113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.977 ms 00:27:23.533 [2024-12-14 00:00:54.142120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.155178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.155232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:23.533 [2024-12-14 00:00:54.155244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.002 ms 00:27:23.533 [2024-12-14 00:00:54.155252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.155689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.155712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:23.533 [2024-12-14 00:00:54.155722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.312 ms 
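Each management step in the startup sequence above is emitted as a trace_step quartet (Action, name, duration, status), which makes per-step timings easy to tabulate offline. A minimal sketch, assuming the console output has been saved with one entry per line to a hypothetical file ftl_startup.log:

  # Pair every trace_step "name:" entry with the "duration:" entry that follows it.
  awk '/trace_step:.*name: /     { sub(/.*name: /, "");     step = $0 }
       /trace_step:.*duration: / { sub(/.*duration: /, ""); printf "%-35s %s\n", step, $0 }' ftl_startup.log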
00:27:23.533 [2024-12-14 00:00:54.155730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.225682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.225748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:23.533 [2024-12-14 00:00:54.225762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 69.930 ms 00:27:23.533 [2024-12-14 00:00:54.225772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.237497] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:23.533 [2024-12-14 00:00:54.238623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.238667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:23.533 [2024-12-14 00:00:54.238687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.776 ms 00:27:23.533 [2024-12-14 00:00:54.238696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.238778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.238790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:23.533 [2024-12-14 00:00:54.238800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:23.533 [2024-12-14 00:00:54.238809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.238868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.238879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:23.533 [2024-12-14 00:00:54.238888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:23.533 [2024-12-14 00:00:54.238900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.240460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.240531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:23.533 [2024-12-14 00:00:54.240543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.535 ms 00:27:23.533 [2024-12-14 00:00:54.240551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.240594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.240603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:23.533 [2024-12-14 00:00:54.240613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:23.533 [2024-12-14 00:00:54.240621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.533 [2024-12-14 00:00:54.240663] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:23.533 [2024-12-14 00:00:54.240676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.533 [2024-12-14 00:00:54.240686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:23.533 [2024-12-14 00:00:54.240694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:23.533 [2024-12-14 00:00:54.240702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.794 [2024-12-14 00:00:54.267984] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:23.794 [2024-12-14 00:00:54.268038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state
00:27:23.794 [2024-12-14 00:00:54.268052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 27.261 ms
00:27:23.794 [2024-12-14 00:00:54.268069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:23.794 [2024-12-14 00:00:54.268173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:23.794 [2024-12-14 00:00:54.268185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:27:23.794 [2024-12-14 00:00:54.268195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms
00:27:23.794 [2024-12-14 00:00:54.268204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:23.794 [2024-12-14 00:00:54.269563] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 277.796 ms, result 0
00:27:23.794 [2024-12-14 00:00:54.284414] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:23.794 [2024-12-14 00:00:54.300425] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0
00:27:23.794 [2024-12-14 00:00:54.308751] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:27:24.366 00:00:54 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:24.366 00:00:54 -- common/autotest_common.sh@862 -- # return 0
00:27:24.366 00:00:54 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:27:24.366 00:00:54 -- ftl/common.sh@95 -- # return 0
00:27:24.366 00:00:54 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:27:24.626 [2024-12-14 00:00:55.126381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:24.626 [2024-12-14 00:00:55.126448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:27:24.626 [2024-12-14 00:00:55.126463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms
00:27:24.626 [2024-12-14 00:00:55.126473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:24.626 [2024-12-14 00:00:55.126515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:24.626 [2024-12-14 00:00:55.126525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:27:24.626 [2024-12-14 00:00:55.126538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:27:24.626 [2024-12-14 00:00:55.126546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:24.626 [2024-12-14 00:00:55.126568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:24.626 [2024-12-14 00:00:55.126578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:27:24.626 [2024-12-14 00:00:55.126586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:27:24.626 [2024-12-14 00:00:55.126595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:24.626 [2024-12-14 00:00:55.126660] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.271 ms, result 0
00:27:24.626 true
00:27:24.626 00:00:55 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
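With the target up and listening, the script drives the FTL bdev over JSON-RPC: bdev_ftl_set_property flips verbose_mode (each property change runs as its own 'Set FTL property' management process, traced above), and bdev_ftl_get_properties returns the property table printed below. Issued by hand, the same two calls look like this (a sketch; rpc.py talks to the default /var/tmp/spdk.sock unless -s is given):

  # Enable verbose mode on the FTL bdev, then dump its property table as JSON.
  scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
  scripts/rpc.py bdev_ftl_get_properties -b ftl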
00:27:24.626 {
00:27:24.626   "name": "ftl",
00:27:24.626   "properties": [
00:27:24.626     {
00:27:24.626       "name": "superblock_version",
00:27:24.626       "value": 5,
00:27:24.626       "read-only": true
00:27:24.626     },
00:27:24.626     {
00:27:24.626       "name": "base_device",
00:27:24.626       "bands": [
00:27:24.626         {
00:27:24.626           "id": 0,
00:27:24.626           "state": "CLOSED",
00:27:24.626           "validity": 1.0
00:27:24.626         },
00:27:24.626         {
00:27:24.626           "id": 1,
00:27:24.626           "state": "CLOSED",
00:27:24.626           "validity": 1.0
00:27:24.626         },
00:27:24.626         {
00:27:24.626           "id": 2,
00:27:24.626           "state": "CLOSED",
00:27:24.626           "validity": 0.007843137254901933
00:27:24.626         },
00:27:24.626         {
00:27:24.626           "id": 3,
00:27:24.626           "state": "FREE",
00:27:24.626           "validity": 0.0
00:27:24.626         },
00:27:24.626         {
00:27:24.626           "id": 4,
00:27:24.626           "state": "FREE",
00:27:24.626           "validity": 0.0
00:27:24.626         },
00:27:24.626         {
00:27:24.626           "id": 5,
00:27:24.626           "state": "FREE",
00:27:24.626           "validity": 0.0
00:27:24.626         },
00:27:24.626         {
00:27:24.626           "id": 6,
00:27:24.626           "state": "FREE",
00:27:24.626           "validity": 0.0
00:27:24.626         },
00:27:24.626         {
00:27:24.626           "id": 7,
00:27:24.626           "state": "FREE",
00:27:24.626           "validity": 0.0
00:27:24.626         },
00:27:24.626         {
00:27:24.626           "id": 8,
00:27:24.626           "state": "FREE",
00:27:24.626           "validity": 0.0
00:27:24.626         },
00:27:24.626         {
00:27:24.626           "id": 9,
00:27:24.626           "state": "FREE",
00:27:24.626           "validity": 0.0
00:27:24.626         },
00:27:24.626         {
00:27:24.626           "id": 10,
00:27:24.626           "state": "FREE",
00:27:24.626           "validity": 0.0
00:27:24.626         },
00:27:24.626         {
00:27:24.626           "id": 11,
00:27:24.626           "state": "FREE",
00:27:24.626           "validity": 0.0
00:27:24.626         },
00:27:24.626         {
00:27:24.626           "id": 12,
00:27:24.626           "state": "FREE",
00:27:24.626           "validity": 0.0
00:27:24.626         },
00:27:24.626         {
00:27:24.626           "id": 13,
00:27:24.626           "state": "FREE",
00:27:24.626           "validity": 0.0
00:27:24.626         },
00:27:24.626         {
00:27:24.626           "id": 14,
00:27:24.626           "state": "FREE",
00:27:24.626           "validity": 0.0
00:27:24.626         },
00:27:24.626         {
00:27:24.626           "id": 15,
00:27:24.626           "state": "FREE",
00:27:24.626           "validity": 0.0
00:27:24.626         },
00:27:24.626         {
00:27:24.626           "id": 16,
00:27:24.626           "state": "FREE",
00:27:24.626           "validity": 0.0
00:27:24.626         },
00:27:24.626         {
00:27:24.626           "id": 17,
00:27:24.626           "state": "FREE",
00:27:24.626           "validity": 0.0
00:27:24.626         }
00:27:24.626       ],
00:27:24.626       "read-only": true
00:27:24.626     },
00:27:24.626     {
00:27:24.626       "name": "cache_device",
00:27:24.626       "type": "bdev",
00:27:24.626       "chunks": [
00:27:24.626         {
00:27:24.626           "id": 0,
00:27:24.626           "state": "OPEN",
00:27:24.626           "utilization": 0.0
00:27:24.626         },
00:27:24.626         {
00:27:24.626           "id": 1,
00:27:24.626           "state": "OPEN",
00:27:24.626           "utilization": 0.0
00:27:24.626         },
00:27:24.626         {
00:27:24.626           "id": 2,
00:27:24.626           "state": "FREE",
00:27:24.626           "utilization": 0.0
00:27:24.626         },
00:27:24.626         {
00:27:24.626           "id": 3,
00:27:24.626           "state": "FREE",
00:27:24.626           "utilization": 0.0
00:27:24.626         }
00:27:24.626       ],
00:27:24.626       "read-only": true
00:27:24.626     },
00:27:24.626     {
00:27:24.626       "name": "verbose_mode",
00:27:24.626       "value": true,
00:27:24.626       "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:27:24.627     },
00:27:24.627     {
00:27:24.627       "name": "prep_upgrade_on_shutdown",
00:27:24.627       "value": false,
00:27:24.627       "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:27:24.627     }
00:27:24.627   ]
00:27:24.627 }
00:27:24.627 00:00:55 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:27:24.627 00:00:55 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:27:24.627 00:00:55 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:27:24.888 00:00:55 -- ftl/upgrade_shutdown.sh@82 -- # used=0
00:27:24.888 00:00:55 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]]
00:27:24.888 00:00:55 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties
00:27:24.888 00:00:55 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:27:24.888 00:00:55 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length'
00:27:25.148 00:00:55 -- ftl/upgrade_shutdown.sh@89 -- # opened=0
00:00:55 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]]
00:00:55 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum
00:00:55 -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:00:55 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:00:55 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:27:25.148 Validate MD5 checksum, iteration 1
00:00:55 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:00:55 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:00:55 -- ftl/common.sh@198 -- # tcp_initiator_setup
00:00:55 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:00:55 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:00:55 -- ftl/common.sh@154 -- # return 0
00:00:55 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
[2024-12-14 00:00:55.829722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
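The two jq filters above reduce that property dump to single counts: NV cache chunks with non-zero utilization and open bands. Both come back 0, so nothing is mid-write before the checksum pass starts; the spdk_dd initialization banner that follows belongs to the first tcp_dd read. Note that in the dump above the band list sits under the property named "base_device", so a query written against that structure would look like this (a sketch against a hypothetical saved dump props.json, not a test step):

  # Count bands the dump reports as anything other than FREE.
  jq '[.properties[] | select(.name == "base_device") | .bands[] | select(.state != "FREE")] | length' props.json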
00:27:25.148 [2024-12-14 00:00:55.829869] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79456 ] 00:27:25.406 [2024-12-14 00:00:55.987804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.665 [2024-12-14 00:00:56.157341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.046  [2024-12-14T00:00:58.349Z] Copying: 652/1024 [MB] (652 MBps) [2024-12-14T00:00:59.284Z] Copying: 1024/1024 [MB] (average 624 MBps) 00:27:28.552 00:27:28.810 00:00:59 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:28.810 00:00:59 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:30.710 00:01:01 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:30.710 Validate MD5 checksum, iteration 2 00:27:30.710 00:01:01 -- ftl/upgrade_shutdown.sh@103 -- # sum=ec0d642a8d02ab505b54a4c6198d27bc 00:27:30.710 00:01:01 -- ftl/upgrade_shutdown.sh@105 -- # [[ ec0d642a8d02ab505b54a4c6198d27bc != \e\c\0\d\6\4\2\a\8\d\0\2\a\b\5\0\5\b\5\4\a\4\c\6\1\9\8\d\2\7\b\c ]] 00:27:30.710 00:01:01 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:30.710 00:01:01 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:30.710 00:01:01 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:30.710 00:01:01 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:30.710 00:01:01 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:30.710 00:01:01 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:30.710 00:01:01 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:30.710 00:01:01 -- ftl/common.sh@154 -- # return 0 00:27:30.711 00:01:01 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:30.711 [2024-12-14 00:01:01.375290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
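Each checksum iteration reads 1024 MiB from the ftln1 NVMe/TCP namespace through spdk_dd, fingerprints the output file, and compares against the digest recorded for that region; the backslash-heavy right-hand side in the [[ ... ]] lines above is just bash xtrace quoting of the expected sum. Condensed, the traced loop body amounts to the following (a sketch with a hypothetical $expected; the real script derives the digest per iteration):

  # Read 1 GiB at the current offset, hash it, fail the test on mismatch.
  tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
  sum=$(md5sum "$testfile" | cut -f1 -d' ')
  [[ $sum != "$expected" ]] && return 1
  skip=$((skip + 1024))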
00:27:30.711 [2024-12-14 00:01:01.375392] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79518 ] 00:27:30.969 [2024-12-14 00:01:01.534016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.969 [2024-12-14 00:01:01.673051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.876  [2024-12-14T00:01:03.608Z] Copying: 728/1024 [MB] (728 MBps) [2024-12-14T00:01:06.926Z] Copying: 1024/1024 [MB] (average 714 MBps) 00:27:36.194 00:27:36.194 00:01:06 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:36.194 00:01:06 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:38.092 00:01:08 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:38.092 00:01:08 -- ftl/upgrade_shutdown.sh@103 -- # sum=c52cf72a2815a5a6e04f0cb9cc8e59d9 00:27:38.092 00:01:08 -- ftl/upgrade_shutdown.sh@105 -- # [[ c52cf72a2815a5a6e04f0cb9cc8e59d9 != \c\5\2\c\f\7\2\a\2\8\1\5\a\5\a\6\e\0\4\f\0\c\b\9\c\c\8\e\5\9\d\9 ]] 00:27:38.092 00:01:08 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:38.092 00:01:08 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:38.092 00:01:08 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:38.092 00:01:08 -- ftl/common.sh@137 -- # [[ -n 79404 ]] 00:27:38.092 00:01:08 -- ftl/common.sh@138 -- # kill -9 79404 00:27:38.092 00:01:08 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:38.092 00:01:08 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:38.092 00:01:08 -- ftl/common.sh@81 -- # local base_bdev= 00:27:38.092 00:01:08 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:38.092 00:01:08 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:38.092 00:01:08 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:38.092 00:01:08 -- ftl/common.sh@89 -- # spdk_tgt_pid=79596 00:27:38.092 00:01:08 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:38.092 00:01:08 -- ftl/common.sh@91 -- # waitforlisten 79596 00:27:38.092 00:01:08 -- common/autotest_common.sh@829 -- # '[' -z 79596 ']' 00:27:38.092 00:01:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.092 00:01:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:38.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.092 00:01:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.092 00:01:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:38.092 00:01:08 -- common/autotest_common.sh@10 -- # set +x 00:27:38.092 [2024-12-14 00:01:08.386705] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
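This is the dirty-shutdown step the test exists for: the target is killed with SIGKILL mid-flight so no FTL shutdown path runs, then a fresh spdk_tgt is started from the saved tgt.json and the bdevs are recreated, forcing the recovery startup traced below. The tcp_target_shutdown_dirty/tcp_target_setup pair shown in the xtrace boils down to this (a condensed sketch of the traced functions, not extra steps):

  # Kill the target without letting FTL persist its state, then restart it
  # from the JSON config captured earlier; recovery must rebuild the rest.
  kill -9 "$spdk_tgt_pid"
  build/bin/spdk_tgt '--cpumask=[0]' --config=test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"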
00:27:38.092 [2024-12-14 00:01:08.386811] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79596 ] 00:27:38.092 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 79404 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:38.092 [2024-12-14 00:01:08.532634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.092 [2024-12-14 00:01:08.671985] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:38.092 [2024-12-14 00:01:08.672140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.661 [2024-12-14 00:01:09.198885] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:38.661 [2024-12-14 00:01:09.198933] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:38.661 [2024-12-14 00:01:09.335087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.661 [2024-12-14 00:01:09.335260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:38.661 [2024-12-14 00:01:09.335279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:38.661 [2024-12-14 00:01:09.335287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.661 [2024-12-14 00:01:09.335348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.661 [2024-12-14 00:01:09.335360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:38.661 [2024-12-14 00:01:09.335368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:27:38.661 [2024-12-14 00:01:09.335375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.661 [2024-12-14 00:01:09.335396] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:38.661 [2024-12-14 00:01:09.336141] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:38.661 [2024-12-14 00:01:09.336159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.661 [2024-12-14 00:01:09.336167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:38.661 [2024-12-14 00:01:09.336175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.767 ms 00:27:38.661 [2024-12-14 00:01:09.336182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.661 [2024-12-14 00:01:09.336429] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:38.661 [2024-12-14 00:01:09.352954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.661 [2024-12-14 00:01:09.352989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:38.661 [2024-12-14 00:01:09.353000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.525 ms 00:27:38.661 [2024-12-14 00:01:09.353008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.661 [2024-12-14 00:01:09.361746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.661 [2024-12-14 00:01:09.361778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:38.661 [2024-12-14 00:01:09.361787] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:27:38.661 [2024-12-14 00:01:09.361794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.661 [2024-12-14 00:01:09.362093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.661 [2024-12-14 00:01:09.362104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:38.661 [2024-12-14 00:01:09.362112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.228 ms 00:27:38.661 [2024-12-14 00:01:09.362119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.661 [2024-12-14 00:01:09.362150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.661 [2024-12-14 00:01:09.362158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:38.661 [2024-12-14 00:01:09.362165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:38.662 [2024-12-14 00:01:09.362174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.662 [2024-12-14 00:01:09.362198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.662 [2024-12-14 00:01:09.362206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:38.662 [2024-12-14 00:01:09.362213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:38.662 [2024-12-14 00:01:09.362221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.662 [2024-12-14 00:01:09.362244] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:38.662 [2024-12-14 00:01:09.365299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.662 [2024-12-14 00:01:09.365424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:38.662 [2024-12-14 00:01:09.365440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.063 ms 00:27:38.662 [2024-12-14 00:01:09.365447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.662 [2024-12-14 00:01:09.365491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.662 [2024-12-14 00:01:09.365500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:38.662 [2024-12-14 00:01:09.365511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:38.662 [2024-12-14 00:01:09.365517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.662 [2024-12-14 00:01:09.365537] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:38.662 [2024-12-14 00:01:09.365554] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:38.662 [2024-12-14 00:01:09.365585] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:38.662 [2024-12-14 00:01:09.365599] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:38.662 [2024-12-14 00:01:09.365671] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:38.662 [2024-12-14 00:01:09.365682] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:38.662 [2024-12-14 00:01:09.365694] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes 00:27:38.662 [2024-12-14 00:01:09.365704] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:38.662 [2024-12-14 00:01:09.365712] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:38.662 [2024-12-14 00:01:09.365720] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:38.662 [2024-12-14 00:01:09.365727] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:38.662 [2024-12-14 00:01:09.365733] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:38.662 [2024-12-14 00:01:09.365740] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:38.662 [2024-12-14 00:01:09.365747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.662 [2024-12-14 00:01:09.365754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:38.662 [2024-12-14 00:01:09.365761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.212 ms 00:27:38.662 [2024-12-14 00:01:09.365770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.662 [2024-12-14 00:01:09.365830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.662 [2024-12-14 00:01:09.365837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:38.662 [2024-12-14 00:01:09.365844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:27:38.662 [2024-12-14 00:01:09.365851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.662 [2024-12-14 00:01:09.365933] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:38.662 [2024-12-14 00:01:09.365942] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:38.662 [2024-12-14 00:01:09.365950] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:38.662 [2024-12-14 00:01:09.365957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:38.662 [2024-12-14 00:01:09.365967] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:38.662 [2024-12-14 00:01:09.365974] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:38.662 [2024-12-14 00:01:09.365981] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:38.662 [2024-12-14 00:01:09.365987] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:38.662 [2024-12-14 00:01:09.365994] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:38.662 [2024-12-14 00:01:09.366001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:38.662 [2024-12-14 00:01:09.366007] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:38.662 [2024-12-14 00:01:09.366014] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:38.662 [2024-12-14 00:01:09.366020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:38.662 [2024-12-14 00:01:09.366027] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:38.662 [2024-12-14 00:01:09.366033] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:38.662 [2024-12-14 00:01:09.366039] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:38.662 [2024-12-14 00:01:09.366046] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
nvc_md_mirror 00:27:38.662 [2024-12-14 00:01:09.366052] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:38.662 [2024-12-14 00:01:09.366058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:38.662 [2024-12-14 00:01:09.366064] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:38.662 [2024-12-14 00:01:09.366070] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:38.662 [2024-12-14 00:01:09.366078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:38.662 [2024-12-14 00:01:09.366084] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:38.662 [2024-12-14 00:01:09.366090] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:38.662 [2024-12-14 00:01:09.366096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:38.662 [2024-12-14 00:01:09.366103] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:38.662 [2024-12-14 00:01:09.366109] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:38.662 [2024-12-14 00:01:09.366115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:38.662 [2024-12-14 00:01:09.366121] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:38.662 [2024-12-14 00:01:09.366127] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:38.662 [2024-12-14 00:01:09.366133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:38.662 [2024-12-14 00:01:09.366139] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:38.662 [2024-12-14 00:01:09.366146] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:38.662 [2024-12-14 00:01:09.366152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:38.662 [2024-12-14 00:01:09.366158] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:38.662 [2024-12-14 00:01:09.366164] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:38.662 [2024-12-14 00:01:09.366172] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:38.662 [2024-12-14 00:01:09.366178] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:38.662 [2024-12-14 00:01:09.366184] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:38.662 [2024-12-14 00:01:09.366191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:38.662 [2024-12-14 00:01:09.366197] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:38.662 [2024-12-14 00:01:09.366204] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:38.662 [2024-12-14 00:01:09.366210] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:38.662 [2024-12-14 00:01:09.366217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:38.662 [2024-12-14 00:01:09.366224] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:38.662 [2024-12-14 00:01:09.366231] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:38.662 [2024-12-14 00:01:09.366237] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:38.662 [2024-12-14 00:01:09.366243] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:38.662 [2024-12-14 00:01:09.366249] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 
0.25 MiB 00:27:38.662 [2024-12-14 00:01:09.366256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:38.662 [2024-12-14 00:01:09.366263] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:38.662 [2024-12-14 00:01:09.366272] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:38.662 [2024-12-14 00:01:09.366280] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:38.662 [2024-12-14 00:01:09.366288] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:38.662 [2024-12-14 00:01:09.366295] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:38.662 [2024-12-14 00:01:09.366308] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:38.662 [2024-12-14 00:01:09.366315] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:38.662 [2024-12-14 00:01:09.366322] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:38.662 [2024-12-14 00:01:09.366329] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:38.662 [2024-12-14 00:01:09.366336] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:38.662 [2024-12-14 00:01:09.366343] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:38.662 [2024-12-14 00:01:09.366351] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:38.662 [2024-12-14 00:01:09.366358] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:38.662 [2024-12-14 00:01:09.366365] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:38.662 [2024-12-14 00:01:09.366376] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:38.662 [2024-12-14 00:01:09.366382] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:38.662 [2024-12-14 00:01:09.366390] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:38.663 [2024-12-14 00:01:09.366398] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:38.663 [2024-12-14 00:01:09.366409] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:38.663 [2024-12-14 00:01:09.366416] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:38.663 
[2024-12-14 00:01:09.366422] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:38.663 [2024-12-14 00:01:09.366430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.663 [2024-12-14 00:01:09.366437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:38.663 [2024-12-14 00:01:09.366444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.540 ms 00:27:38.663 [2024-12-14 00:01:09.366453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.663 [2024-12-14 00:01:09.380014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.663 [2024-12-14 00:01:09.380154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:38.663 [2024-12-14 00:01:09.380174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.504 ms 00:27:38.663 [2024-12-14 00:01:09.380182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.663 [2024-12-14 00:01:09.380219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.663 [2024-12-14 00:01:09.380228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:38.663 [2024-12-14 00:01:09.380235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:38.663 [2024-12-14 00:01:09.380242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.925 [2024-12-14 00:01:09.411220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.925 [2024-12-14 00:01:09.411351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:38.925 [2024-12-14 00:01:09.411367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 30.932 ms 00:27:38.925 [2024-12-14 00:01:09.411375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.925 [2024-12-14 00:01:09.411405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.925 [2024-12-14 00:01:09.411413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:38.925 [2024-12-14 00:01:09.411421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:38.925 [2024-12-14 00:01:09.411428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.925 [2024-12-14 00:01:09.411530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.925 [2024-12-14 00:01:09.411542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:38.925 [2024-12-14 00:01:09.411550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:27:38.925 [2024-12-14 00:01:09.411558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.925 [2024-12-14 00:01:09.411595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.925 [2024-12-14 00:01:09.411606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:38.925 [2024-12-14 00:01:09.411615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:38.925 [2024-12-14 00:01:09.411623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.925 [2024-12-14 00:01:09.427058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.925 [2024-12-14 00:01:09.427092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:38.925 [2024-12-14 
00:01:09.427102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.412 ms 00:27:38.925 [2024-12-14 00:01:09.427109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.925 [2024-12-14 00:01:09.427201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.925 [2024-12-14 00:01:09.427210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:38.925 [2024-12-14 00:01:09.427219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:38.925 [2024-12-14 00:01:09.427226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.925 [2024-12-14 00:01:09.444260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.925 [2024-12-14 00:01:09.444297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:38.925 [2024-12-14 00:01:09.444307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.016 ms 00:27:38.925 [2024-12-14 00:01:09.444315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.925 [2024-12-14 00:01:09.453710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.925 [2024-12-14 00:01:09.453838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:38.925 [2024-12-14 00:01:09.453853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.287 ms 00:27:38.925 [2024-12-14 00:01:09.453861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.925 [2024-12-14 00:01:09.514906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.925 [2024-12-14 00:01:09.514951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:38.925 [2024-12-14 00:01:09.514963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 60.993 ms 00:27:38.925 [2024-12-14 00:01:09.514971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.925 [2024-12-14 00:01:09.515059] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:38.925 [2024-12-14 00:01:09.515100] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:38.925 [2024-12-14 00:01:09.515138] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:38.925 [2024-12-14 00:01:09.515177] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:38.925 [2024-12-14 00:01:09.515185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.925 [2024-12-14 00:01:09.515198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:38.925 [2024-12-14 00:01:09.515209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.168 ms 00:27:38.925 [2024-12-14 00:01:09.515219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.925 [2024-12-14 00:01:09.515270] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:38.925 [2024-12-14 00:01:09.515281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.925 [2024-12-14 00:01:09.515289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:38.925 [2024-12-14 00:01:09.515297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:38.925 [2024-12-14 
00:01:09.515304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.925 [2024-12-14 00:01:09.531166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.925 [2024-12-14 00:01:09.531203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:38.925 [2024-12-14 00:01:09.531214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.839 ms 00:27:38.925 [2024-12-14 00:01:09.531221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.925 [2024-12-14 00:01:09.540044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.925 [2024-12-14 00:01:09.540079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:38.925 [2024-12-14 00:01:09.540089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:38.925 [2024-12-14 00:01:09.540116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.925 [2024-12-14 00:01:09.540170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.925 [2024-12-14 00:01:09.540179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:27:38.925 [2024-12-14 00:01:09.540187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:38.925 [2024-12-14 00:01:09.540194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.925 [2024-12-14 00:01:09.540343] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:27:39.862 [2024-12-14 00:01:10.259136] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:27:39.862 [2024-12-14 00:01:10.259286] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:27:40.432 [2024-12-14 00:01:10.991561] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:27:40.432 [2024-12-14 00:01:10.991634] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:40.432 [2024-12-14 00:01:10.991644] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:40.432 [2024-12-14 00:01:10.991653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.432 [2024-12-14 00:01:10.991661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:40.432 [2024-12-14 00:01:10.991671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1451.434 ms 00:27:40.432 [2024-12-14 00:01:10.991677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.432 [2024-12-14 00:01:10.991710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.432 [2024-12-14 00:01:10.991716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:40.432 [2024-12-14 00:01:10.991723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:40.432 [2024-12-14 00:01:10.991729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.432 [2024-12-14 00:01:11.000392] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:40.432 [2024-12-14 00:01:11.000492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.432 [2024-12-14 00:01:11.000501] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:40.432 [2024-12-14 00:01:11.000509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.738 ms 00:27:40.432 [2024-12-14 00:01:11.000515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.432 [2024-12-14 00:01:11.001027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.432 [2024-12-14 00:01:11.001042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:27:40.432 [2024-12-14 00:01:11.001049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.459 ms 00:27:40.432 [2024-12-14 00:01:11.001055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.432 [2024-12-14 00:01:11.002772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.432 [2024-12-14 00:01:11.002789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:40.432 [2024-12-14 00:01:11.002797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.705 ms 00:27:40.432 [2024-12-14 00:01:11.002803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.432 [2024-12-14 00:01:11.022232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.432 [2024-12-14 00:01:11.022358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:27:40.432 [2024-12-14 00:01:11.022373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 19.411 ms 00:27:40.432 [2024-12-14 00:01:11.022379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.432 [2024-12-14 00:01:11.022454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.432 [2024-12-14 00:01:11.022463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:40.432 [2024-12-14 00:01:11.022470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:40.432 [2024-12-14 00:01:11.022476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.432 [2024-12-14 00:01:11.023447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.432 [2024-12-14 00:01:11.023477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:40.432 [2024-12-14 00:01:11.023493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.943 ms 00:27:40.432 [2024-12-14 00:01:11.023499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.432 [2024-12-14 00:01:11.023519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.432 [2024-12-14 00:01:11.023525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:40.432 [2024-12-14 00:01:11.023531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:40.432 [2024-12-14 00:01:11.023537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.432 [2024-12-14 00:01:11.023564] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:40.432 [2024-12-14 00:01:11.023571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.432 [2024-12-14 00:01:11.023577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:40.433 [2024-12-14 00:01:11.023586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:40.433 [2024-12-14 00:01:11.023592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:40.433 [2024-12-14 00:01:11.023634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.433 [2024-12-14 00:01:11.023641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:40.433 [2024-12-14 00:01:11.023646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:27:40.433 [2024-12-14 00:01:11.023652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.433 [2024-12-14 00:01:11.024346] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1688.932 ms, result 0 00:27:40.433 [2024-12-14 00:01:11.038596] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.433 [2024-12-14 00:01:11.054598] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:40.433 [2024-12-14 00:01:11.062688] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:40.691 Validate MD5 checksum, iteration 1 00:27:40.691 00:01:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:40.691 00:01:11 -- common/autotest_common.sh@862 -- # return 0 00:27:40.691 00:01:11 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:40.691 00:01:11 -- ftl/common.sh@95 -- # return 0 00:27:40.691 00:01:11 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:40.691 00:01:11 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:40.691 00:01:11 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:40.691 00:01:11 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:40.691 00:01:11 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:40.691 00:01:11 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:40.691 00:01:11 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:40.691 00:01:11 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:40.691 00:01:11 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:40.691 00:01:11 -- ftl/common.sh@154 -- # return 0 00:27:40.691 00:01:11 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:40.691 [2024-12-14 00:01:11.274112] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:40.691 [2024-12-14 00:01:11.274553] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79640 ] 00:27:40.950 [2024-12-14 00:01:11.422190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.950 [2024-12-14 00:01:11.559575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.331  [2024-12-14T00:01:14.002Z] Copying: 511/1024 [MB] (511 MBps) [2024-12-14T00:01:16.547Z] Copying: 1024/1024 [MB] (average 554 MBps) 00:27:45.815 00:27:45.815 00:01:16 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:45.815 00:01:16 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:47.720 00:01:18 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:47.720 00:01:18 -- ftl/upgrade_shutdown.sh@103 -- # sum=ec0d642a8d02ab505b54a4c6198d27bc 00:27:47.720 00:01:18 -- ftl/upgrade_shutdown.sh@105 -- # [[ ec0d642a8d02ab505b54a4c6198d27bc != \e\c\0\d\6\4\2\a\8\d\0\2\a\b\5\0\5\b\5\4\a\4\c\6\1\9\8\d\2\7\b\c ]] 00:27:47.720 00:01:18 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:47.720 Validate MD5 checksum, iteration 2 00:27:47.720 00:01:18 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:47.720 00:01:18 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:47.720 00:01:18 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:47.720 00:01:18 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:47.720 00:01:18 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:47.720 00:01:18 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:47.720 00:01:18 -- ftl/common.sh@154 -- # return 0 00:27:47.720 00:01:18 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:47.720 [2024-12-14 00:01:18.296281] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:47.720 [2024-12-14 00:01:18.297092] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79718 ] 00:27:47.720 [2024-12-14 00:01:18.445408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.978 [2024-12-14 00:01:18.612988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.881  [2024-12-14T00:01:20.872Z] Copying: 616/1024 [MB] (616 MBps) [2024-12-14T00:01:22.772Z] Copying: 1024/1024 [MB] (average 624 MBps) 00:27:52.040 00:27:52.040 00:01:22 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:52.040 00:01:22 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:54.576 00:01:24 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:54.576 00:01:24 -- ftl/upgrade_shutdown.sh@103 -- # sum=c52cf72a2815a5a6e04f0cb9cc8e59d9 00:27:54.576 00:01:24 -- ftl/upgrade_shutdown.sh@105 -- # [[ c52cf72a2815a5a6e04f0cb9cc8e59d9 != \c\5\2\c\f\7\2\a\2\8\1\5\a\5\a\6\e\0\4\f\0\c\b\9\c\c\8\e\5\9\d\9 ]] 00:27:54.576 00:01:24 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:54.576 00:01:24 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:54.576 00:01:24 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:54.576 00:01:24 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:27:54.576 00:01:24 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:27:54.576 00:01:24 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:54.576 00:01:24 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:27:54.576 00:01:24 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:27:54.576 00:01:24 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:27:54.576 00:01:24 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:27:54.576 00:01:24 -- ftl/common.sh@130 -- # [[ -n 79596 ]] 00:27:54.576 00:01:24 -- ftl/common.sh@131 -- # killprocess 79596 00:27:54.576 00:01:24 -- common/autotest_common.sh@936 -- # '[' -z 79596 ']' 00:27:54.576 00:01:24 -- common/autotest_common.sh@940 -- # kill -0 79596 00:27:54.576 00:01:24 -- common/autotest_common.sh@941 -- # uname 00:27:54.576 00:01:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:54.576 00:01:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79596 00:27:54.576 killing process with pid 79596 00:27:54.576 00:01:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:54.576 00:01:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:54.576 00:01:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79596' 00:27:54.576 00:01:24 -- common/autotest_common.sh@955 -- # kill 79596 00:27:54.576 00:01:24 -- common/autotest_common.sh@960 -- # wait 79596 00:27:55.143 [2024-12-14 00:01:25.572291] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:27:55.143 [2024-12-14 00:01:25.582857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.143 [2024-12-14 00:01:25.582893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:55.143 [2024-12-14 00:01:25.582904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:55.143 [2024-12-14 00:01:25.582912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.143 
[2024-12-14 00:01:25.582931] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:55.143 [2024-12-14 00:01:25.585155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.143 [2024-12-14 00:01:25.585188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:55.143 [2024-12-14 00:01:25.585196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.213 ms 00:27:55.143 [2024-12-14 00:01:25.585203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.143 [2024-12-14 00:01:25.585397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.143 [2024-12-14 00:01:25.585408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:55.143 [2024-12-14 00:01:25.585415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.174 ms 00:27:55.143 [2024-12-14 00:01:25.585422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.143 [2024-12-14 00:01:25.587387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.143 [2024-12-14 00:01:25.587416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:55.143 [2024-12-14 00:01:25.587424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.951 ms 00:27:55.143 [2024-12-14 00:01:25.587430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.143 [2024-12-14 00:01:25.588335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.143 [2024-12-14 00:01:25.588358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:27:55.143 [2024-12-14 00:01:25.588366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.872 ms 00:27:55.143 [2024-12-14 00:01:25.588373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.143 [2024-12-14 00:01:25.596731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.143 [2024-12-14 00:01:25.596876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:55.143 [2024-12-14 00:01:25.596889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.329 ms 00:27:55.143 [2024-12-14 00:01:25.596896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.144 [2024-12-14 00:01:25.601381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.144 [2024-12-14 00:01:25.601413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:55.144 [2024-12-14 00:01:25.601422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.459 ms 00:27:55.144 [2024-12-14 00:01:25.601429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.144 [2024-12-14 00:01:25.601510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.144 [2024-12-14 00:01:25.601518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:55.144 [2024-12-14 00:01:25.601525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:27:55.144 [2024-12-14 00:01:25.601532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.144 [2024-12-14 00:01:25.609474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.144 [2024-12-14 00:01:25.609505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:55.144 [2024-12-14 00:01:25.609513] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.928 ms 00:27:55.144 [2024-12-14 00:01:25.609518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.144 [2024-12-14 00:01:25.617251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.144 [2024-12-14 00:01:25.617276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:55.144 [2024-12-14 00:01:25.617283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.706 ms 00:27:55.144 [2024-12-14 00:01:25.617288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.144 [2024-12-14 00:01:25.624461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.144 [2024-12-14 00:01:25.624620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:55.144 [2024-12-14 00:01:25.624632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.146 ms 00:27:55.144 [2024-12-14 00:01:25.624638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.144 [2024-12-14 00:01:25.631855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.144 [2024-12-14 00:01:25.631953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:55.144 [2024-12-14 00:01:25.631965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.171 ms 00:27:55.144 [2024-12-14 00:01:25.631971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.144 [2024-12-14 00:01:25.631994] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:55.144 [2024-12-14 00:01:25.632007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:55.144 [2024-12-14 00:01:25.632018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:55.144 [2024-12-14 00:01:25.632025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:55.144 [2024-12-14 00:01:25.632031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:55.144 [2024-12-14 00:01:25.632037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:55.144 [2024-12-14 00:01:25.632043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:55.144 [2024-12-14 00:01:25.632049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:55.144 [2024-12-14 00:01:25.632055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:55.144 [2024-12-14 00:01:25.632061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:55.144 [2024-12-14 00:01:25.632066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:55.144 [2024-12-14 00:01:25.632072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:55.144 [2024-12-14 00:01:25.632078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:55.144 [2024-12-14 00:01:25.632084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:55.144 [2024-12-14 00:01:25.632090] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:55.144 [2024-12-14 00:01:25.632102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:55.144 [2024-12-14 00:01:25.632116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:55.144 [2024-12-14 00:01:25.632122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:55.144 [2024-12-14 00:01:25.632128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:55.144 [2024-12-14 00:01:25.632136] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:55.144 [2024-12-14 00:01:25.632142] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 052ee666-b6a5-4829-8298-3f5b39348d72 00:27:55.144 [2024-12-14 00:01:25.632149] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:55.144 [2024-12-14 00:01:25.632156] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:27:55.144 [2024-12-14 00:01:25.632162] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:27:55.144 [2024-12-14 00:01:25.632168] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:27:55.144 [2024-12-14 00:01:25.632174] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:55.144 [2024-12-14 00:01:25.632180] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:55.144 [2024-12-14 00:01:25.632186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:55.144 [2024-12-14 00:01:25.632190] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:55.144 [2024-12-14 00:01:25.632195] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:55.144 [2024-12-14 00:01:25.632203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.144 [2024-12-14 00:01:25.632210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:55.144 [2024-12-14 00:01:25.632217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.210 ms 00:27:55.144 [2024-12-14 00:01:25.632226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.144 [2024-12-14 00:01:25.642606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.144 [2024-12-14 00:01:25.642698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:55.144 [2024-12-14 00:01:25.642710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.359 ms 00:27:55.144 [2024-12-14 00:01:25.642716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.144 [2024-12-14 00:01:25.642878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.144 [2024-12-14 00:01:25.642886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:55.144 [2024-12-14 00:01:25.642896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.146 ms 00:27:55.144 [2024-12-14 00:01:25.642903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.144 [2024-12-14 00:01:25.680305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.144 [2024-12-14 00:01:25.680332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:55.144 [2024-12-14 00:01:25.680340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.000 ms 00:27:55.144 [2024-12-14 00:01:25.680347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.144 [2024-12-14 00:01:25.680375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.144 [2024-12-14 00:01:25.680381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:55.144 [2024-12-14 00:01:25.680392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:55.144 [2024-12-14 00:01:25.680398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.144 [2024-12-14 00:01:25.680449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.144 [2024-12-14 00:01:25.680457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:55.144 [2024-12-14 00:01:25.680464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:55.144 [2024-12-14 00:01:25.680470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.144 [2024-12-14 00:01:25.680496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.144 [2024-12-14 00:01:25.680503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:55.144 [2024-12-14 00:01:25.680511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:55.144 [2024-12-14 00:01:25.680520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.144 [2024-12-14 00:01:25.742003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.144 [2024-12-14 00:01:25.742147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:55.144 [2024-12-14 00:01:25.742160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:55.144 [2024-12-14 00:01:25.742168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.144 [2024-12-14 00:01:25.765522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.144 [2024-12-14 00:01:25.765547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:55.144 [2024-12-14 00:01:25.765561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:55.144 [2024-12-14 00:01:25.765567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.144 [2024-12-14 00:01:25.765618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.144 [2024-12-14 00:01:25.765625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:55.144 [2024-12-14 00:01:25.765632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:55.144 [2024-12-14 00:01:25.765638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.144 [2024-12-14 00:01:25.765670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.144 [2024-12-14 00:01:25.765677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:55.144 [2024-12-14 00:01:25.765684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:55.144 [2024-12-14 00:01:25.765690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.144 [2024-12-14 00:01:25.765764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.144 [2024-12-14 00:01:25.765773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:55.144 [2024-12-14 00:01:25.765779] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:55.144 [2024-12-14 00:01:25.765785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.144 [2024-12-14 00:01:25.765811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.144 [2024-12-14 00:01:25.765818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:55.144 [2024-12-14 00:01:25.765824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:55.144 [2024-12-14 00:01:25.765830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.144 [2024-12-14 00:01:25.765865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.144 [2024-12-14 00:01:25.765872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:55.144 [2024-12-14 00:01:25.765879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:55.144 [2024-12-14 00:01:25.765886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.144 [2024-12-14 00:01:25.765925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.145 [2024-12-14 00:01:25.765932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:55.145 [2024-12-14 00:01:25.765938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:55.145 [2024-12-14 00:01:25.765944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.145 [2024-12-14 00:01:25.766051] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 183.168 ms, result 0 00:27:56.083 00:01:26 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:56.084 00:01:26 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:56.084 00:01:26 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:27:56.084 00:01:26 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:27:56.084 00:01:26 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:27:56.084 00:01:26 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:56.084 Remove shared memory files 00:27:56.084 00:01:26 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:27:56.084 00:01:26 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:56.084 00:01:26 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:56.084 00:01:26 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:56.084 00:01:26 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid79404 00:27:56.084 00:01:26 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:56.084 00:01:26 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:56.084 ************************************ 00:27:56.084 END TEST ftl_upgrade_shutdown 00:27:56.084 ************************************ 00:27:56.084 00:27:56.084 real 1m27.250s 00:27:56.084 user 1m57.663s 00:27:56.084 sys 0m19.156s 00:27:56.084 00:01:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:56.084 00:01:26 -- common/autotest_common.sh@10 -- # set +x 00:27:56.084 00:01:26 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:27:56.084 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:27:56.084 00:01:26 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:27:56.084 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:27:56.084 00:01:26 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:27:56.084 00:01:26 -- ftl/ftl.sh@14 -- # killprocess 70616 00:27:56.084 00:01:26 -- 
common/autotest_common.sh@936 -- # '[' -z 70616 ']' 00:27:56.084 Process with pid 70616 is not found 00:27:56.084 00:01:26 -- common/autotest_common.sh@940 -- # kill -0 70616 00:27:56.084 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70616) - No such process 00:27:56.084 00:01:26 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70616 is not found' 00:27:56.084 00:01:26 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]] 00:27:56.084 00:01:26 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=79844 00:27:56.084 00:01:26 -- ftl/ftl.sh@20 -- # waitforlisten 79844 00:27:56.084 00:01:26 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:56.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.084 00:01:26 -- common/autotest_common.sh@829 -- # '[' -z 79844 ']' 00:27:56.084 00:01:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.084 00:01:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:56.084 00:01:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.084 00:01:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:56.084 00:01:26 -- common/autotest_common.sh@10 -- # set +x 00:27:56.344 [2024-12-14 00:01:26.839142] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:56.344 [2024-12-14 00:01:26.839229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79844 ] 00:27:56.344 [2024-12-14 00:01:26.982140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.605 [2024-12-14 00:01:27.208276] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:56.605 [2024-12-14 00:01:27.208531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.036 00:01:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:58.036 00:01:28 -- common/autotest_common.sh@862 -- # return 0 00:27:58.036 00:01:28 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:27:58.036 nvme0n1 00:27:58.036 00:01:28 -- ftl/ftl.sh@22 -- # clear_lvols 00:27:58.036 00:01:28 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:58.036 00:01:28 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:58.294 00:01:28 -- ftl/common.sh@28 -- # stores=788e2019-85c6-4316-8e13-8193a9d00068 00:27:58.294 00:01:28 -- ftl/common.sh@29 -- # for lvs in $stores 00:27:58.294 00:01:28 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 788e2019-85c6-4316-8e13-8193a9d00068 00:27:58.294 00:01:29 -- ftl/ftl.sh@23 -- # killprocess 79844 00:27:58.294 00:01:29 -- common/autotest_common.sh@936 -- # '[' -z 79844 ']' 00:27:58.294 00:01:29 -- common/autotest_common.sh@940 -- # kill -0 79844 00:27:58.294 00:01:29 -- common/autotest_common.sh@941 -- # uname 00:27:58.294 00:01:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:58.294 00:01:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79844 00:27:58.553 killing process with pid 79844 00:27:58.553 00:01:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:58.553 00:01:29 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:58.553 00:01:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79844' 00:27:58.553 00:01:29 -- common/autotest_common.sh@955 -- # kill 79844 00:27:58.553 00:01:29 -- common/autotest_common.sh@960 -- # wait 79844 00:27:59.488 00:01:30 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:59.749 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:59.749 Waiting for block devices as requested 00:27:59.749 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:28:00.011 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:28:00.011 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:28:00.011 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:28:05.291 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:28:05.291 Remove shared memory files 00:28:05.291 00:01:35 -- ftl/ftl.sh@28 -- # remove_shm 00:28:05.291 00:01:35 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:05.291 00:01:35 -- ftl/common.sh@205 -- # rm -f rm -f 00:28:05.291 00:01:35 -- ftl/common.sh@206 -- # rm -f rm -f 00:28:05.291 00:01:35 -- ftl/common.sh@207 -- # rm -f rm -f 00:28:05.291 00:01:35 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:05.291 00:01:35 -- ftl/common.sh@209 -- # rm -f rm -f 00:28:05.291 ************************************ 00:28:05.291 END TEST ftl 00:28:05.291 ************************************ 00:28:05.291 00:28:05.291 real 13m21.028s 00:28:05.291 user 15m18.881s 00:28:05.291 sys 1m6.740s 00:28:05.291 00:01:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:05.291 00:01:35 -- common/autotest_common.sh@10 -- # set +x 00:28:05.291 00:01:35 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:28:05.291 00:01:35 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:28:05.291 00:01:35 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:28:05.291 00:01:35 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:28:05.291 00:01:35 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:28:05.291 00:01:35 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:28:05.291 00:01:35 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:28:05.291 00:01:35 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:28:05.291 00:01:35 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:28:05.291 00:01:35 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:28:05.291 00:01:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:05.291 00:01:35 -- common/autotest_common.sh@10 -- # set +x 00:28:05.291 00:01:35 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:28:05.291 00:01:35 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:28:05.291 00:01:35 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:28:05.291 00:01:35 -- common/autotest_common.sh@10 -- # set +x 00:28:06.674 INFO: APP EXITING 00:28:06.674 INFO: killing all VMs 00:28:06.674 INFO: killing vhost app 00:28:06.674 INFO: EXIT DONE 00:28:07.245 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:07.505 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:28:07.505 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:28:07.505 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:28:07.505 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:28:08.075 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
00:28:08.335 Cleaning 00:28:08.335 Removing: /var/run/dpdk/spdk0/config 00:28:08.335 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:08.335 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:08.335 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:08.335 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:08.335 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:08.335 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:08.335 Removing: /var/run/dpdk/spdk0 00:28:08.335 Removing: /var/run/dpdk/spdk_pid55998 00:28:08.335 Removing: /var/run/dpdk/spdk_pid56188 00:28:08.335 Removing: /var/run/dpdk/spdk_pid56506 00:28:08.335 Removing: /var/run/dpdk/spdk_pid56586 00:28:08.335 Removing: /var/run/dpdk/spdk_pid56694 00:28:08.335 Removing: /var/run/dpdk/spdk_pid56806 00:28:08.335 Removing: /var/run/dpdk/spdk_pid56891 00:28:08.335 Removing: /var/run/dpdk/spdk_pid56936 00:28:08.335 Removing: /var/run/dpdk/spdk_pid56967 00:28:08.335 Removing: /var/run/dpdk/spdk_pid57042 00:28:08.335 Removing: /var/run/dpdk/spdk_pid57148 00:28:08.335 Removing: /var/run/dpdk/spdk_pid57572 00:28:08.335 Removing: /var/run/dpdk/spdk_pid57625 00:28:08.335 Removing: /var/run/dpdk/spdk_pid57683 00:28:08.335 Removing: /var/run/dpdk/spdk_pid57699 00:28:08.335 Removing: /var/run/dpdk/spdk_pid57792 00:28:08.335 Removing: /var/run/dpdk/spdk_pid57808 00:28:08.335 Removing: /var/run/dpdk/spdk_pid57912 00:28:08.335 Removing: /var/run/dpdk/spdk_pid57928 00:28:08.335 Removing: /var/run/dpdk/spdk_pid57981 00:28:08.335 Removing: /var/run/dpdk/spdk_pid57999 00:28:08.335 Removing: /var/run/dpdk/spdk_pid58052 00:28:08.335 Removing: /var/run/dpdk/spdk_pid58070 00:28:08.335 Removing: /var/run/dpdk/spdk_pid58233 00:28:08.335 Removing: /var/run/dpdk/spdk_pid58270 00:28:08.335 Removing: /var/run/dpdk/spdk_pid58352 00:28:08.335 Removing: /var/run/dpdk/spdk_pid58417 00:28:08.335 Removing: /var/run/dpdk/spdk_pid58442 00:28:08.336 Removing: /var/run/dpdk/spdk_pid58515 00:28:08.336 Removing: /var/run/dpdk/spdk_pid58541 00:28:08.336 Removing: /var/run/dpdk/spdk_pid58576 00:28:08.336 Removing: /var/run/dpdk/spdk_pid58602 00:28:08.336 Removing: /var/run/dpdk/spdk_pid58643 00:28:08.336 Removing: /var/run/dpdk/spdk_pid58669 00:28:08.336 Removing: /var/run/dpdk/spdk_pid58716 00:28:08.336 Removing: /var/run/dpdk/spdk_pid58742 00:28:08.336 Removing: /var/run/dpdk/spdk_pid58783 00:28:08.336 Removing: /var/run/dpdk/spdk_pid58809 00:28:08.336 Removing: /var/run/dpdk/spdk_pid58844 00:28:08.336 Removing: /var/run/dpdk/spdk_pid58870 00:28:08.336 Removing: /var/run/dpdk/spdk_pid58919 00:28:08.336 Removing: /var/run/dpdk/spdk_pid58945 00:28:08.336 Removing: /var/run/dpdk/spdk_pid58976 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59010 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59045 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59071 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59112 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59138 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59179 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59205 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59246 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59271 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59308 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59334 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59375 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59401 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59442 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59468 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59509 00:28:08.336 Removing: 
/var/run/dpdk/spdk_pid59538 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59579 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59608 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59652 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59681 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59725 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59746 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59787 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59807 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59849 00:28:08.336 Removing: /var/run/dpdk/spdk_pid59933 00:28:08.336 Removing: /var/run/dpdk/spdk_pid60045 00:28:08.336 Removing: /var/run/dpdk/spdk_pid60210 00:28:08.336 Removing: /var/run/dpdk/spdk_pid60302 00:28:08.336 Removing: /var/run/dpdk/spdk_pid60338 00:28:08.336 Removing: /var/run/dpdk/spdk_pid60781 00:28:08.336 Removing: /var/run/dpdk/spdk_pid60997 00:28:08.336 Removing: /var/run/dpdk/spdk_pid61117 00:28:08.336 Removing: /var/run/dpdk/spdk_pid61165 00:28:08.336 Removing: /var/run/dpdk/spdk_pid61196 00:28:08.336 Removing: /var/run/dpdk/spdk_pid61279 00:28:08.336 Removing: /var/run/dpdk/spdk_pid61933 00:28:08.336 Removing: /var/run/dpdk/spdk_pid61964 00:28:08.336 Removing: /var/run/dpdk/spdk_pid62448 00:28:08.596 Removing: /var/run/dpdk/spdk_pid62552 00:28:08.596 Removing: /var/run/dpdk/spdk_pid62667 00:28:08.596 Removing: /var/run/dpdk/spdk_pid62714 00:28:08.596 Removing: /var/run/dpdk/spdk_pid62745 00:28:08.596 Removing: /var/run/dpdk/spdk_pid62771 00:28:08.596 Removing: /var/run/dpdk/spdk_pid64699 00:28:08.596 Removing: /var/run/dpdk/spdk_pid64838 00:28:08.596 Removing: /var/run/dpdk/spdk_pid64842 00:28:08.596 Removing: /var/run/dpdk/spdk_pid64859 00:28:08.596 Removing: /var/run/dpdk/spdk_pid64935 00:28:08.596 Removing: /var/run/dpdk/spdk_pid64939 00:28:08.596 Removing: /var/run/dpdk/spdk_pid64951 00:28:08.596 Removing: /var/run/dpdk/spdk_pid65023 00:28:08.596 Removing: /var/run/dpdk/spdk_pid65027 00:28:08.596 Removing: /var/run/dpdk/spdk_pid65045 00:28:08.596 Removing: /var/run/dpdk/spdk_pid65111 00:28:08.596 Removing: /var/run/dpdk/spdk_pid65115 00:28:08.596 Removing: /var/run/dpdk/spdk_pid65133 00:28:08.596 Removing: /var/run/dpdk/spdk_pid66607 00:28:08.596 Removing: /var/run/dpdk/spdk_pid66720 00:28:08.596 Removing: /var/run/dpdk/spdk_pid66858 00:28:08.596 Removing: /var/run/dpdk/spdk_pid66946 00:28:08.596 Removing: /var/run/dpdk/spdk_pid67022 00:28:08.596 Removing: /var/run/dpdk/spdk_pid67098 00:28:08.596 Removing: /var/run/dpdk/spdk_pid67203 00:28:08.596 Removing: /var/run/dpdk/spdk_pid67277 00:28:08.596 Removing: /var/run/dpdk/spdk_pid67418 00:28:08.596 Removing: /var/run/dpdk/spdk_pid67806 00:28:08.596 Removing: /var/run/dpdk/spdk_pid67837 00:28:08.596 Removing: /var/run/dpdk/spdk_pid68277 00:28:08.596 Removing: /var/run/dpdk/spdk_pid68464 00:28:08.596 Removing: /var/run/dpdk/spdk_pid68569 00:28:08.596 Removing: /var/run/dpdk/spdk_pid68673 00:28:08.596 Removing: /var/run/dpdk/spdk_pid68732 00:28:08.596 Removing: /var/run/dpdk/spdk_pid68762 00:28:08.596 Removing: /var/run/dpdk/spdk_pid69122 00:28:08.596 Removing: /var/run/dpdk/spdk_pid69185 00:28:08.596 Removing: /var/run/dpdk/spdk_pid69260 00:28:08.596 Removing: /var/run/dpdk/spdk_pid69644 00:28:08.596 Removing: /var/run/dpdk/spdk_pid69803 00:28:08.596 Removing: /var/run/dpdk/spdk_pid70616 00:28:08.596 Removing: /var/run/dpdk/spdk_pid70747 00:28:08.596 Removing: /var/run/dpdk/spdk_pid70968 00:28:08.596 Removing: /var/run/dpdk/spdk_pid71053 00:28:08.596 Removing: /var/run/dpdk/spdk_pid71340 00:28:08.596 Removing: /var/run/dpdk/spdk_pid71600 
00:28:08.596 Removing: /var/run/dpdk/spdk_pid72025 00:28:08.596 Removing: /var/run/dpdk/spdk_pid72257 00:28:08.596 Removing: /var/run/dpdk/spdk_pid72444 00:28:08.596 Removing: /var/run/dpdk/spdk_pid72491 00:28:08.596 Removing: /var/run/dpdk/spdk_pid72750 00:28:08.596 Removing: /var/run/dpdk/spdk_pid72775 00:28:08.596 Removing: /var/run/dpdk/spdk_pid72835 00:28:08.596 Removing: /var/run/dpdk/spdk_pid73172 00:28:08.596 Removing: /var/run/dpdk/spdk_pid73448 00:28:08.596 Removing: /var/run/dpdk/spdk_pid74020 00:28:08.596 Removing: /var/run/dpdk/spdk_pid74790 00:28:08.596 Removing: /var/run/dpdk/spdk_pid75409 00:28:08.596 Removing: /var/run/dpdk/spdk_pid76126 00:28:08.596 Removing: /var/run/dpdk/spdk_pid76283 00:28:08.596 Removing: /var/run/dpdk/spdk_pid76371 00:28:08.596 Removing: /var/run/dpdk/spdk_pid76720 00:28:08.596 Removing: /var/run/dpdk/spdk_pid76775 00:28:08.596 Removing: /var/run/dpdk/spdk_pid77465 00:28:08.596 Removing: /var/run/dpdk/spdk_pid77974 00:28:08.596 Removing: /var/run/dpdk/spdk_pid78799 00:28:08.596 Removing: /var/run/dpdk/spdk_pid78965 00:28:08.596 Removing: /var/run/dpdk/spdk_pid79005 00:28:08.596 Removing: /var/run/dpdk/spdk_pid79059 00:28:08.596 Removing: /var/run/dpdk/spdk_pid79115 00:28:08.596 Removing: /var/run/dpdk/spdk_pid79179 00:28:08.596 Removing: /var/run/dpdk/spdk_pid79404 00:28:08.596 Removing: /var/run/dpdk/spdk_pid79456 00:28:08.596 Removing: /var/run/dpdk/spdk_pid79518 00:28:08.596 Removing: /var/run/dpdk/spdk_pid79596 00:28:08.596 Removing: /var/run/dpdk/spdk_pid79640 00:28:08.596 Removing: /var/run/dpdk/spdk_pid79718 00:28:08.596 Removing: /var/run/dpdk/spdk_pid79844 00:28:08.596 Clean 00:28:08.857 killing process with pid 48160 00:28:08.857 killing process with pid 48161 00:28:08.857 00:01:39 -- common/autotest_common.sh@1446 -- # return 0 00:28:08.857 00:01:39 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:28:08.857 00:01:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:08.857 00:01:39 -- common/autotest_common.sh@10 -- # set +x 00:28:08.857 00:01:39 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:28:08.857 00:01:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:08.857 00:01:39 -- common/autotest_common.sh@10 -- # set +x 00:28:08.857 00:01:39 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:08.857 00:01:39 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:08.857 00:01:39 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:08.857 00:01:39 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:28:08.857 00:01:39 -- spdk/autotest.sh@383 -- # hostname 00:28:08.857 00:01:39 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:09.118 geninfo: WARNING: invalid characters removed from testname! 
00:28:35.705 00:02:02 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:35.705 00:02:06 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:38.251 00:02:08 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:40.797 00:02:11 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:43.344 00:02:13 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:45.259 00:02:15 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:47.236 00:02:17 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:47.236 00:02:17 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:28:47.236 00:02:17 -- common/autotest_common.sh@1690 -- $ lcov --version 00:28:47.236 00:02:17 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:28:47.498 00:02:17 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:28:47.498 00:02:17 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:28:47.498 00:02:17 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:28:47.498 00:02:17 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:28:47.498 00:02:17 -- scripts/common.sh@335 -- $ IFS=.-: 00:28:47.498 00:02:17 -- scripts/common.sh@335 -- $ read -ra ver1 00:28:47.498 00:02:17 -- scripts/common.sh@336 -- $ IFS=.-: 00:28:47.498 00:02:17 -- scripts/common.sh@336 -- $ read -ra ver2 00:28:47.498 00:02:17 -- scripts/common.sh@337 -- $ local 'op=<' 00:28:47.498 00:02:17 -- scripts/common.sh@339 -- $ ver1_l=2 00:28:47.498 00:02:17 -- scripts/common.sh@340 -- $ ver2_l=1 00:28:47.498 00:02:17 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 
v 00:28:47.498 00:02:17 -- scripts/common.sh@343 -- $ case "$op" in 00:28:47.498 00:02:17 -- scripts/common.sh@344 -- $ : 1 00:28:47.498 00:02:17 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:28:47.498 00:02:17 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:47.498 00:02:17 -- scripts/common.sh@364 -- $ decimal 1 00:28:47.498 00:02:18 -- scripts/common.sh@352 -- $ local d=1 00:28:47.498 00:02:18 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:28:47.498 00:02:18 -- scripts/common.sh@354 -- $ echo 1 00:28:47.498 00:02:18 -- scripts/common.sh@364 -- $ ver1[v]=1 00:28:47.498 00:02:18 -- scripts/common.sh@365 -- $ decimal 2 00:28:47.498 00:02:18 -- scripts/common.sh@352 -- $ local d=2 00:28:47.498 00:02:18 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:28:47.498 00:02:18 -- scripts/common.sh@354 -- $ echo 2 00:28:47.498 00:02:18 -- scripts/common.sh@365 -- $ ver2[v]=2 00:28:47.498 00:02:18 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:28:47.498 00:02:18 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:28:47.498 00:02:18 -- scripts/common.sh@367 -- $ return 0 00:28:47.498 00:02:18 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:47.498 00:02:18 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:28:47.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.498 --rc genhtml_branch_coverage=1 00:28:47.498 --rc genhtml_function_coverage=1 00:28:47.498 --rc genhtml_legend=1 00:28:47.498 --rc geninfo_all_blocks=1 00:28:47.498 --rc geninfo_unexecuted_blocks=1 00:28:47.498 00:28:47.498 ' 00:28:47.498 00:02:18 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:28:47.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.498 --rc genhtml_branch_coverage=1 00:28:47.498 --rc genhtml_function_coverage=1 00:28:47.498 --rc genhtml_legend=1 00:28:47.498 --rc geninfo_all_blocks=1 00:28:47.498 --rc geninfo_unexecuted_blocks=1 00:28:47.498 00:28:47.498 ' 00:28:47.498 00:02:18 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:28:47.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.498 --rc genhtml_branch_coverage=1 00:28:47.498 --rc genhtml_function_coverage=1 00:28:47.498 --rc genhtml_legend=1 00:28:47.498 --rc geninfo_all_blocks=1 00:28:47.498 --rc geninfo_unexecuted_blocks=1 00:28:47.498 00:28:47.498 ' 00:28:47.498 00:02:18 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:28:47.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.498 --rc genhtml_branch_coverage=1 00:28:47.498 --rc genhtml_function_coverage=1 00:28:47.498 --rc genhtml_legend=1 00:28:47.498 --rc geninfo_all_blocks=1 00:28:47.498 --rc geninfo_unexecuted_blocks=1 00:28:47.498 00:28:47.498 ' 00:28:47.498 00:02:18 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:47.498 00:02:18 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:47.498 00:02:18 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:47.498 00:02:18 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:47.498 00:02:18 -- paths/export.sh@2 -- $ 
00:28:47.498 00:02:18 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:28:47.498 00:02:18 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:28:47.498 00:02:18 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:47.498 00:02:18 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:47.498 00:02:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:47.498 00:02:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:47.498 00:02:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:47.498 00:02:18 -- paths/export.sh@5 -- $ export PATH
00:28:47.498 00:02:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:47.498 00:02:18 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:28:47.498 00:02:18 -- common/autobuild_common.sh@440 -- $ date +%s
00:28:47.498 00:02:18 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734134538.XXXXXX
00:28:47.498 00:02:18 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734134538.K6mdSV
00:28:47.498 00:02:18 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:28:47.498 00:02:18 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:28:47.498 00:02:18 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:28:47.498 00:02:18 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:28:47.498 00:02:18 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:28:47.498 00:02:18 -- common/autobuild_common.sh@456 -- $ get_config_params
00:28:47.498 00:02:18 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:28:47.498 00:02:18 -- common/autotest_common.sh@10 -- $ set +x
00:28:47.499 00:02:18 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:28:47.499 00:02:18 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:28:47.499 00:02:18 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:28:47.499 00:02:18 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:28:47.499 00:02:18 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:28:47.499 00:02:18 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:28:47.499 00:02:18 -- spdk/autopackage.sh@19 -- $ timing_finish
00:28:47.499 00:02:18 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:28:47.499 00:02:18 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:28:47.499 00:02:18 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:28:47.499 00:02:18 -- spdk/autopackage.sh@20 -- $ exit 0
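timing_finish hands the per-step timings collected during the run to Brendan Gregg's flamegraph.pl, as traced above, producing an SVG in which each step's width is its duration. flamegraph.pl expects folded-stack input (semicolon-separated frame names plus a count per line); in the sketch below the timing.txt contents are invented for illustration and the redirect to timing.svg is assumed, only the flamegraph.pl flags come from this log:

    # Folded-stack input: "step;substep <seconds>" per line (sample data).
    printf '%s\n' 'autotest;setup 42' 'autotest;nvme;fdp 315' 'autotest;coverage 136' > timing.txt

    # Same flags the harness uses; --countname relabels counts as seconds
    # and --nametype relabels frames as steps in the SVG tooltips.
    /usr/local/FlameGraph/flamegraph.pl \
        --title 'Build Timing' \
        --nametype Step: \
        --countname seconds \
        timing.txt > timing.svg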
00:28:47.499 + [[ -n 4980 ]]
00:28:47.499 + sudo kill 4980
00:28:47.509 [Pipeline] }
00:28:47.525 [Pipeline] // timeout
00:28:47.530 [Pipeline] }
00:28:47.545 [Pipeline] // stage
00:28:47.549 [Pipeline] }
00:28:47.562 [Pipeline] // catchError
00:28:47.570 [Pipeline] stage
00:28:47.573 [Pipeline] { (Stop VM)
00:28:47.583 [Pipeline] sh
00:28:47.864 + vagrant halt
00:28:51.169 ==> default: Halting domain...
00:28:57.774 [Pipeline] sh
00:28:58.059 + vagrant destroy -f
00:29:00.606 ==> default: Removing domain...
00:29:01.193 [Pipeline] sh
00:29:01.478 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:29:01.488 [Pipeline] }
00:29:01.503 [Pipeline] // stage
00:29:01.509 [Pipeline] }
00:29:01.523 [Pipeline] // dir
00:29:01.528 [Pipeline] }
00:29:01.542 [Pipeline] // wrap
00:29:01.548 [Pipeline] }
00:29:01.561 [Pipeline] // catchError
00:29:01.571 [Pipeline] stage
00:29:01.573 [Pipeline] { (Epilogue)
00:29:01.586 [Pipeline] sh
00:29:01.872 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:29:06.084 [Pipeline] catchError
00:29:06.086 [Pipeline] {
00:29:06.098 [Pipeline] sh
00:29:06.383 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:29:06.383 Artifacts sizes are good
00:29:06.394 [Pipeline] }
00:29:06.408 [Pipeline] // catchError
00:29:06.419 [Pipeline] archiveArtifacts
00:29:06.427 Archiving artifacts
00:29:06.533 [Pipeline] cleanWs
00:29:06.576 [WS-CLEANUP] Deleting project workspace...
00:29:06.576 [WS-CLEANUP] Deferred wipeout is used...
00:29:06.585 [WS-CLEANUP] done
00:29:06.587 [Pipeline] }
00:29:06.602 [Pipeline] // stage
00:29:06.607 [Pipeline] }
00:29:06.621 [Pipeline] // node
00:29:06.627 [Pipeline] End of Pipeline
00:29:06.688 Finished: SUCCESS