00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2470 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3731 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.133 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.134 The recommended git tool is: git 00:00:00.134 using credential 00000000-0000-0000-0000-000000000002 00:00:00.135 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.176 Fetching changes from the remote Git repository 00:00:00.178 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.215 Using shallow fetch with depth 1 00:00:00.215 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.215 > git --version # timeout=10 00:00:00.243 > git --version # 'git version 2.39.2' 00:00:00.243 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.263 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.263 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.592 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.604 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.616 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.616 > git config core.sparsecheckout # timeout=10 00:00:07.626 > git read-tree -mu HEAD # timeout=10 00:00:07.641 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.660 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.661 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.746 [Pipeline] Start of Pipeline 00:00:07.760 [Pipeline] library 00:00:07.762 Loading library shm_lib@master 00:00:07.762 Library shm_lib@master is cached. Copying from home. 00:00:07.774 [Pipeline] node 00:00:07.784 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:07.786 [Pipeline] { 00:00:07.796 [Pipeline] catchError 00:00:07.797 [Pipeline] { 00:00:07.809 [Pipeline] wrap 00:00:07.818 [Pipeline] { 00:00:07.826 [Pipeline] stage 00:00:07.828 [Pipeline] { (Prologue) 00:00:07.845 [Pipeline] echo 00:00:07.846 Node: VM-host-SM38 00:00:07.852 [Pipeline] cleanWs 00:00:07.862 [WS-CLEANUP] Deleting project workspace... 00:00:07.862 [WS-CLEANUP] Deferred wipeout is used... 
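The checkout sequence above can be replayed outside Jenkins; a minimal sketch using the same Gerrit mirror URL and shallow-fetch commands the job logs (credential and proxy setup omitted — those come from the masked Jenkins credential and the Intel proxy lines above):

    # Shallow-clone the jjb-config ("jbp") repo exactly as the pipeline does
    git init jbp && cd jbp
    git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --progress --depth=1 origin refs/heads/master
    git checkout -f FETCH_HEAD   # detaches at the commit Jenkins reports, e.g. db4637e8b949...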
00:00:07.869 [WS-CLEANUP] done 00:00:08.061 [Pipeline] setCustomBuildProperty 00:00:08.139 [Pipeline] httpRequest 00:00:08.485 [Pipeline] echo 00:00:08.487 Sorcerer 10.211.164.20 is alive 00:00:08.496 [Pipeline] retry 00:00:08.498 [Pipeline] { 00:00:08.513 [Pipeline] httpRequest 00:00:08.518 HttpMethod: GET 00:00:08.518 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.519 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.534 Response Code: HTTP/1.1 200 OK 00:00:08.535 Success: Status code 200 is in the accepted range: 200,404 00:00:08.536 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.525 [Pipeline] } 00:00:11.543 [Pipeline] // retry 00:00:11.550 [Pipeline] sh 00:00:11.835 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.852 [Pipeline] httpRequest 00:00:12.441 [Pipeline] echo 00:00:12.442 Sorcerer 10.211.164.20 is alive 00:00:12.452 [Pipeline] retry 00:00:12.453 [Pipeline] { 00:00:12.468 [Pipeline] httpRequest 00:00:12.473 HttpMethod: GET 00:00:12.473 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:12.474 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:12.488 Response Code: HTTP/1.1 200 OK 00:00:12.488 Success: Status code 200 is in the accepted range: 200,404 00:00:12.489 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:52.361 [Pipeline] } 00:00:52.379 [Pipeline] // retry 00:00:52.387 [Pipeline] sh 00:00:52.674 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:55.229 [Pipeline] sh 00:00:55.514 + git -C spdk log --oneline -n5 00:00:55.514 c13c99a5e test: Various fixes for Fedora40 00:00:55.514 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:55.515 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:55.515 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:55.515 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:55.534 [Pipeline] writeFile 00:00:55.549 [Pipeline] sh 00:00:55.834 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:55.847 [Pipeline] sh 00:00:56.133 + cat autorun-spdk.conf 00:00:56.133 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:56.133 SPDK_TEST_NVME=1 00:00:56.133 SPDK_TEST_FTL=1 00:00:56.133 SPDK_TEST_ISAL=1 00:00:56.133 SPDK_RUN_ASAN=1 00:00:56.133 SPDK_RUN_UBSAN=1 00:00:56.133 SPDK_TEST_XNVME=1 00:00:56.133 SPDK_TEST_NVME_FDP=1 00:00:56.133 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:56.142 RUN_NIGHTLY=1 00:00:56.144 [Pipeline] } 00:00:56.157 [Pipeline] // stage 00:00:56.171 [Pipeline] stage 00:00:56.173 [Pipeline] { (Run VM) 00:00:56.186 [Pipeline] sh 00:00:56.471 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:56.472 + echo 'Start stage prepare_nvme.sh' 00:00:56.472 Start stage prepare_nvme.sh 00:00:56.472 + [[ -n 10 ]] 00:00:56.472 + disk_prefix=ex10 00:00:56.472 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:00:56.472 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:00:56.472 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:00:56.472 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:56.472 ++ SPDK_TEST_NVME=1 00:00:56.472 ++ SPDK_TEST_FTL=1 00:00:56.472 ++ SPDK_TEST_ISAL=1 00:00:56.472 ++ 
SPDK_RUN_ASAN=1 00:00:56.472 ++ SPDK_RUN_UBSAN=1 00:00:56.472 ++ SPDK_TEST_XNVME=1 00:00:56.472 ++ SPDK_TEST_NVME_FDP=1 00:00:56.472 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:56.472 ++ RUN_NIGHTLY=1 00:00:56.472 + cd /var/jenkins/workspace/nvme-vg-autotest 00:00:56.472 + nvme_files=() 00:00:56.472 + declare -A nvme_files 00:00:56.472 + backend_dir=/var/lib/libvirt/images/backends 00:00:56.472 + nvme_files['nvme.img']=5G 00:00:56.472 + nvme_files['nvme-cmb.img']=5G 00:00:56.472 + nvme_files['nvme-multi0.img']=4G 00:00:56.472 + nvme_files['nvme-multi1.img']=4G 00:00:56.472 + nvme_files['nvme-multi2.img']=4G 00:00:56.472 + nvme_files['nvme-openstack.img']=8G 00:00:56.472 + nvme_files['nvme-zns.img']=5G 00:00:56.472 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:56.472 + (( SPDK_TEST_FTL == 1 )) 00:00:56.472 + nvme_files["nvme-ftl.img"]=6G 00:00:56.472 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:56.472 + nvme_files["nvme-fdp.img"]=1G 00:00:56.472 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:56.472 + for nvme in "${!nvme_files[@]}" 00:00:56.472 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G 00:00:56.472 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:56.472 + for nvme in "${!nvme_files[@]}" 00:00:56.472 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-ftl.img -s 6G 00:00:56.732 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:56.732 + for nvme in "${!nvme_files[@]}" 00:00:56.732 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G 00:00:56.732 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:56.732 + for nvme in "${!nvme_files[@]}" 00:00:56.732 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G 00:00:56.993 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:56.993 + for nvme in "${!nvme_files[@]}" 00:00:56.993 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G 00:00:56.993 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:56.993 + for nvme in "${!nvme_files[@]}" 00:00:56.993 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G 00:00:56.993 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:56.993 + for nvme in "${!nvme_files[@]}" 00:00:56.993 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G 00:00:57.252 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:57.252 + for nvme in "${!nvme_files[@]}" 00:00:57.252 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-fdp.img -s 1G 00:00:57.252 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:57.252 + for nvme in "${!nvme_files[@]}" 00:00:57.252 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme.img -s 5G 00:00:57.513 
Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:57.513 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu 00:00:57.513 + echo 'End stage prepare_nvme.sh' 00:00:57.513 End stage prepare_nvme.sh 00:00:57.527 [Pipeline] sh 00:00:57.812 + DISTRO=fedora39 00:00:57.812 + CPUS=10 00:00:57.812 + RAM=12288 00:00:57.812 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:57.812 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex10-nvme.img -b /var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex10-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:00:57.812 00:00:57.812 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:00:57.812 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:00:57.812 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:00:57.812 HELP=0 00:00:57.812 DRY_RUN=0 00:00:57.813 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,/var/lib/libvirt/images/backends/ex10-nvme-fdp.img, 00:00:57.813 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:57.813 NVME_AUTO_CREATE=0 00:00:57.813 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,, 00:00:57.813 NVME_CMB=,,,, 00:00:57.813 NVME_PMR=,,,, 00:00:57.813 NVME_ZNS=,,,, 00:00:57.813 NVME_MS=true,,,, 00:00:57.813 NVME_FDP=,,,on, 00:00:57.813 SPDK_VAGRANT_DISTRO=fedora39 00:00:57.813 SPDK_VAGRANT_VMCPU=10 00:00:57.813 SPDK_VAGRANT_VMRAM=12288 00:00:57.813 SPDK_VAGRANT_PROVIDER=libvirt 00:00:57.813 SPDK_VAGRANT_HTTP_PROXY= 00:00:57.813 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:57.813 SPDK_OPENSTACK_NETWORK=0 00:00:57.813 VAGRANT_PACKAGE_BOX=0 00:00:57.813 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:57.813 FORCE_DISTRO=true 00:00:57.813 VAGRANT_BOX_VERSION= 00:00:57.813 EXTRA_VAGRANTFILES= 00:00:57.813 NIC_MODEL=e1000 00:00:57.813 00:00:57.813 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:00:57.813 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:00:59.721 Bringing machine 'default' up with 'libvirt' provider... 00:01:00.289 ==> default: Creating image (snapshot of base box volume). 00:01:00.289 ==> default: Creating domain with the following settings... 
00:01:00.289 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734354314_b6ff3986e099836b69c6 00:01:00.289 ==> default: -- Domain type: kvm 00:01:00.289 ==> default: -- Cpus: 10 00:01:00.289 ==> default: -- Feature: acpi 00:01:00.289 ==> default: -- Feature: apic 00:01:00.289 ==> default: -- Feature: pae 00:01:00.289 ==> default: -- Memory: 12288M 00:01:00.289 ==> default: -- Memory Backing: hugepages: 00:01:00.289 ==> default: -- Management MAC: 00:01:00.289 ==> default: -- Loader: 00:01:00.289 ==> default: -- Nvram: 00:01:00.289 ==> default: -- Base box: spdk/fedora39 00:01:00.289 ==> default: -- Storage pool: default 00:01:00.289 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734354314_b6ff3986e099836b69c6.img (20G) 00:01:00.289 ==> default: -- Volume Cache: default 00:01:00.289 ==> default: -- Kernel: 00:01:00.289 ==> default: -- Initrd: 00:01:00.289 ==> default: -- Graphics Type: vnc 00:01:00.289 ==> default: -- Graphics Port: -1 00:01:00.289 ==> default: -- Graphics IP: 127.0.0.1 00:01:00.289 ==> default: -- Graphics Password: Not defined 00:01:00.289 ==> default: -- Video Type: cirrus 00:01:00.289 ==> default: -- Video VRAM: 9216 00:01:00.289 ==> default: -- Sound Type: 00:01:00.289 ==> default: -- Keymap: en-us 00:01:00.289 ==> default: -- TPM Path: 00:01:00.289 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:00.289 ==> default: -- Command line args: 00:01:00.289 ==> default: -> value=-device, 00:01:00.289 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:00.289 ==> default: -> value=-drive, 00:01:00.289 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:00.289 ==> default: -> value=-device, 00:01:00.289 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:00.289 ==> default: -> value=-device, 00:01:00.289 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:00.289 ==> default: -> value=-drive, 00:01:00.289 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-1-drive0, 00:01:00.289 ==> default: -> value=-device, 00:01:00.289 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:00.289 ==> default: -> value=-device, 00:01:00.289 ==> default: -> value=nvme,id=nvme-2,serial=12342, 00:01:00.289 ==> default: -> value=-drive, 00:01:00.289 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:00.289 ==> default: -> value=-device, 00:01:00.289 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:00.289 ==> default: -> value=-drive, 00:01:00.289 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:00.289 ==> default: -> value=-device, 00:01:00.289 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:00.289 ==> default: -> value=-drive, 00:01:00.289 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:00.289 ==> default: -> value=-device, 00:01:00.563 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:00.563 ==> default: -> value=-device, 00:01:00.563 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:00.563 ==> default: -> value=-device, 00:01:00.563 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3, 00:01:00.563 ==> default: -> value=-drive, 00:01:00.563 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:00.563 ==> default: -> value=-device, 00:01:00.563 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:00.563 ==> default: Creating shared folders metadata... 00:01:00.563 ==> default: Starting domain. 00:01:02.477 ==> default: Waiting for domain to get an IP address... 00:01:20.604 ==> default: Waiting for SSH to become available... 00:01:20.604 ==> default: Configuring and enabling network interfaces... 00:01:23.932 default: SSH address: 192.168.121.177:22 00:01:23.932 default: SSH username: vagrant 00:01:23.932 default: SSH auth method: private key 00:01:25.850 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:33.998 ==> default: Mounting SSHFS shared folder... 00:01:35.919 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:35.919 ==> default: Checking Mount.. 00:01:37.324 ==> default: Folder Successfully Mounted! 00:01:37.324 00:01:37.324 SUCCESS! 00:01:37.324 00:01:37.324 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:37.324 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:37.324 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:37.324 00:01:37.334 [Pipeline] } 00:01:37.350 [Pipeline] // stage 00:01:37.360 [Pipeline] dir 00:01:37.361 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:37.363 [Pipeline] { 00:01:37.376 [Pipeline] catchError 00:01:37.378 [Pipeline] { 00:01:37.392 [Pipeline] sh 00:01:37.676 + vagrant ssh-config --host vagrant 00:01:37.676 + sed -ne '/^Host/,$p' 00:01:37.676 + tee ssh_conf 00:01:40.225 Host vagrant 00:01:40.225 HostName 192.168.121.177 00:01:40.225 User vagrant 00:01:40.225 Port 22 00:01:40.225 UserKnownHostsFile /dev/null 00:01:40.225 StrictHostKeyChecking no 00:01:40.225 PasswordAuthentication no 00:01:40.225 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:40.225 IdentitiesOnly yes 00:01:40.225 LogLevel FATAL 00:01:40.225 ForwardAgent yes 00:01:40.225 ForwardX11 yes 00:01:40.225 00:01:40.240 [Pipeline] withEnv 00:01:40.242 [Pipeline] { 00:01:40.256 [Pipeline] sh 00:01:40.541 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:40.541 source /etc/os-release 00:01:40.541 [[ -e /image.version ]] && img=$(< /image.version) 00:01:40.541 # Minimal, systemd-like check. 
00:01:40.541 if [[ -e /.dockerenv ]]; then 00:01:40.541 # Clear garbage from the node'\''s name: 00:01:40.541 # agt-er_autotest_547-896 -> autotest_547-896 00:01:40.541 # $HOSTNAME is the actual container id 00:01:40.541 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:40.541 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:40.541 # We can assume this is a mount from a host where container is running, 00:01:40.541 # so fetch its hostname to easily identify the target swarm worker. 00:01:40.541 container="$(< /etc/hostname) ($agent)" 00:01:40.541 else 00:01:40.541 # Fallback 00:01:40.541 container=$agent 00:01:40.541 fi 00:01:40.541 fi 00:01:40.541 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:40.541 ' 00:01:40.815 [Pipeline] } 00:01:40.830 [Pipeline] // withEnv 00:01:40.853 [Pipeline] setCustomBuildProperty 00:01:40.901 [Pipeline] stage 00:01:40.903 [Pipeline] { (Tests) 00:01:40.913 [Pipeline] sh 00:01:41.192 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:41.467 [Pipeline] sh 00:01:41.753 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:42.032 [Pipeline] timeout 00:01:42.032 Timeout set to expire in 50 min 00:01:42.034 [Pipeline] { 00:01:42.049 [Pipeline] sh 00:01:42.335 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:01:42.944 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:01:42.957 [Pipeline] sh 00:01:43.243 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:01:43.520 [Pipeline] sh 00:01:43.806 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:44.089 [Pipeline] sh 00:01:44.377 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:01:44.638 ++ readlink -f spdk_repo 00:01:44.638 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:44.638 + [[ -n /home/vagrant/spdk_repo ]] 00:01:44.638 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:44.638 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:44.638 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:44.638 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:44.638 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:44.638 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:44.638 + cd /home/vagrant/spdk_repo 00:01:44.638 + source /etc/os-release 00:01:44.638 ++ NAME='Fedora Linux' 00:01:44.638 ++ VERSION='39 (Cloud Edition)' 00:01:44.638 ++ ID=fedora 00:01:44.638 ++ VERSION_ID=39 00:01:44.638 ++ VERSION_CODENAME= 00:01:44.638 ++ PLATFORM_ID=platform:f39 00:01:44.638 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:44.638 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:44.638 ++ LOGO=fedora-logo-icon 00:01:44.638 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:44.638 ++ HOME_URL=https://fedoraproject.org/ 00:01:44.638 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:44.638 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:44.638 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:44.638 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:44.638 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:44.638 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:44.638 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:44.638 ++ SUPPORT_END=2024-11-12 00:01:44.638 ++ VARIANT='Cloud Edition' 00:01:44.638 ++ VARIANT_ID=cloud 00:01:44.638 + uname -a 00:01:44.638 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:44.638 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:44.638 Hugepages 00:01:44.638 node hugesize free / total 00:01:44.638 node0 1048576kB 0 / 0 00:01:44.638 node0 2048kB 0 / 0 00:01:44.638 00:01:44.638 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:44.638 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:44.638 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:44.638 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:44.900 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:01:44.900 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:44.900 + rm -f /tmp/spdk-ld-path 00:01:44.900 + source autorun-spdk.conf 00:01:44.900 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:44.900 ++ SPDK_TEST_NVME=1 00:01:44.900 ++ SPDK_TEST_FTL=1 00:01:44.900 ++ SPDK_TEST_ISAL=1 00:01:44.900 ++ SPDK_RUN_ASAN=1 00:01:44.900 ++ SPDK_RUN_UBSAN=1 00:01:44.900 ++ SPDK_TEST_XNVME=1 00:01:44.900 ++ SPDK_TEST_NVME_FDP=1 00:01:44.900 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:44.900 ++ RUN_NIGHTLY=1 00:01:44.900 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:44.900 + [[ -n '' ]] 00:01:44.900 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:44.900 + for M in /var/spdk/build-*-manifest.txt 00:01:44.900 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:44.900 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:44.900 + for M in /var/spdk/build-*-manifest.txt 00:01:44.900 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:44.900 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:44.900 + for M in /var/spdk/build-*-manifest.txt 00:01:44.900 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:44.900 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:44.900 ++ uname 00:01:44.900 + [[ Linux == \L\i\n\u\x ]] 00:01:44.900 + sudo dmesg -T 00:01:44.900 + sudo dmesg --clear 00:01:44.900 + dmesg_pid=4991 00:01:44.900 + [[ Fedora Linux == FreeBSD ]] 00:01:44.900 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:44.900 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:44.900 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:44.900 + [[ -x /usr/src/fio-static/fio ]] 00:01:44.900 + sudo dmesg -Tw 00:01:44.900 + export FIO_BIN=/usr/src/fio-static/fio 00:01:44.900 + FIO_BIN=/usr/src/fio-static/fio 00:01:44.900 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:44.900 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:44.900 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:44.900 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:44.900 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:44.900 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:44.900 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:44.900 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:44.900 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:44.900 Test configuration: 00:01:44.900 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:44.900 SPDK_TEST_NVME=1 00:01:44.900 SPDK_TEST_FTL=1 00:01:44.900 SPDK_TEST_ISAL=1 00:01:44.900 SPDK_RUN_ASAN=1 00:01:44.900 SPDK_RUN_UBSAN=1 00:01:44.900 SPDK_TEST_XNVME=1 00:01:44.900 SPDK_TEST_NVME_FDP=1 00:01:44.900 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:44.900 RUN_NIGHTLY=1 13:05:59 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:01:44.900 13:05:59 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:44.900 13:05:59 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:44.900 13:05:59 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:44.900 13:05:59 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:44.900 13:05:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.900 13:05:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.901 13:05:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.901 13:05:59 -- paths/export.sh@5 -- $ export PATH 00:01:44.901 13:05:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.901 13:05:59 -- common/autobuild_common.sh@439 -- $ 
out=/home/vagrant/spdk_repo/spdk/../output 00:01:44.901 13:05:59 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:44.901 13:05:59 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734354359.XXXXXX 00:01:44.901 13:05:59 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734354359.vD7Cah 00:01:44.901 13:05:59 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:44.901 13:05:59 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:01:44.901 13:05:59 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:44.901 13:05:59 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:44.901 13:05:59 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:44.901 13:05:59 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:44.901 13:05:59 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:01:44.901 13:05:59 -- common/autotest_common.sh@10 -- $ set +x 00:01:45.162 13:05:59 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:45.162 13:05:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:45.162 13:05:59 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:45.162 13:05:59 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:45.162 13:05:59 -- spdk/autobuild.sh@16 -- $ date -u 00:01:45.162 Mon Dec 16 01:05:59 PM UTC 2024 00:01:45.162 13:05:59 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:45.162 LTS-67-gc13c99a5e 00:01:45.162 13:05:59 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:45.162 13:05:59 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:45.162 13:05:59 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:45.162 13:05:59 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:45.162 13:05:59 -- common/autotest_common.sh@10 -- $ set +x 00:01:45.162 ************************************ 00:01:45.162 START TEST asan 00:01:45.162 ************************************ 00:01:45.162 using asan 00:01:45.162 13:05:59 -- common/autotest_common.sh@1114 -- $ echo 'using asan' 00:01:45.162 00:01:45.162 real 0m0.000s 00:01:45.162 user 0m0.000s 00:01:45.162 sys 0m0.000s 00:01:45.162 13:05:59 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:45.162 13:05:59 -- common/autotest_common.sh@10 -- $ set +x 00:01:45.162 ************************************ 00:01:45.162 END TEST asan 00:01:45.162 ************************************ 00:01:45.163 13:05:59 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:45.163 13:05:59 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:45.163 13:05:59 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:45.163 13:05:59 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:45.163 13:05:59 -- common/autotest_common.sh@10 -- $ set +x 00:01:45.163 ************************************ 00:01:45.163 START TEST ubsan 00:01:45.163 ************************************ 00:01:45.163 using ubsan 00:01:45.163 13:05:59 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:01:45.163 00:01:45.163 real 0m0.000s 00:01:45.163 user 0m0.000s 00:01:45.163 
sys 0m0.000s 00:01:45.163 ************************************ 00:01:45.163 END TEST ubsan 00:01:45.163 ************************************ 00:01:45.163 13:05:59 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:45.163 13:05:59 -- common/autotest_common.sh@10 -- $ set +x 00:01:45.163 13:05:59 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:45.163 13:05:59 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:45.163 13:05:59 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:45.163 13:05:59 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:45.163 13:05:59 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:45.163 13:05:59 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:45.163 13:05:59 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:45.163 13:05:59 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:45.163 13:05:59 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:45.163 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:45.163 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:45.736 Using 'verbs' RDMA provider 00:01:58.551 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:08.565 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:08.565 Creating mk/config.mk...done. 00:02:08.565 Creating mk/cc.flags.mk...done. 00:02:08.565 Type 'make' to build. 00:02:08.565 13:06:22 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:08.565 13:06:22 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:08.565 13:06:22 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:08.565 13:06:22 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.565 ************************************ 00:02:08.565 START TEST make 00:02:08.565 ************************************ 00:02:08.565 13:06:22 -- common/autotest_common.sh@1114 -- $ make -j10 00:02:08.565 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:08.565 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:08.565 meson setup builddir \ 00:02:08.565 -Dwith-libaio=enabled \ 00:02:08.565 -Dwith-liburing=enabled \ 00:02:08.565 -Dwith-libvfn=disabled \ 00:02:08.565 -Dwith-spdk=false && \ 00:02:08.565 meson compile -C builddir && \ 00:02:08.565 cd -) 00:02:08.827 make[1]: Nothing to be done for 'all'. 
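The make target above shells into the vendored xnvme tree and drives a Meson build; a standalone sketch of that same step, assuming the repo path shown in the log:

    # Build xnvme with the backends this run enables: libaio and io_uring on,
    # libvfn and the SPDK backend off (options mirror the command logged above)
    cd /home/vagrant/spdk_repo/spdk/xnvme
    meson setup builddir \
        -Dwith-libaio=enabled \
        -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled \
        -Dwith-spdk=false
    meson compile -C builddir

The Meson output that follows shows what those options resolved to on this Fedora 39 host.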
00:02:11.379 The Meson build system 00:02:11.379 Version: 1.5.0 00:02:11.379 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:11.379 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:11.379 Build type: native build 00:02:11.379 Project name: xnvme 00:02:11.379 Project version: 0.7.3 00:02:11.379 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:11.379 C linker for the host machine: cc ld.bfd 2.40-14 00:02:11.379 Host machine cpu family: x86_64 00:02:11.379 Host machine cpu: x86_64 00:02:11.379 Message: host_machine.system: linux 00:02:11.379 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:11.379 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:11.379 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:11.379 Run-time dependency threads found: YES 00:02:11.379 Has header "setupapi.h" : NO 00:02:11.379 Has header "linux/blkzoned.h" : YES 00:02:11.379 Has header "linux/blkzoned.h" : YES (cached) 00:02:11.379 Has header "libaio.h" : YES 00:02:11.379 Library aio found: YES 00:02:11.379 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:11.379 Run-time dependency liburing found: YES 2.2 00:02:11.380 Dependency libvfn skipped: feature with-libvfn disabled 00:02:11.380 Run-time dependency appleframeworks found: NO (tried framework) 00:02:11.380 Run-time dependency appleframeworks found: NO (tried framework) 00:02:11.380 Configuring xnvme_config.h using configuration 00:02:11.380 Configuring xnvme.spec using configuration 00:02:11.380 Run-time dependency bash-completion found: YES 2.11 00:02:11.380 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:11.380 Program cp found: YES (/usr/bin/cp) 00:02:11.380 Has header "winsock2.h" : NO 00:02:11.380 Has header "dbghelp.h" : NO 00:02:11.380 Library rpcrt4 found: NO 00:02:11.380 Library rt found: YES 00:02:11.380 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:11.380 Found CMake: /usr/bin/cmake (3.27.7) 00:02:11.380 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:02:11.380 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:02:11.380 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:02:11.380 Build targets in project: 32 00:02:11.380 00:02:11.380 xnvme 0.7.3 00:02:11.380 00:02:11.380 User defined options 00:02:11.380 with-libaio : enabled 00:02:11.380 with-liburing: enabled 00:02:11.380 with-libvfn : disabled 00:02:11.380 with-spdk : false 00:02:11.380 00:02:11.380 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:11.380 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:11.380 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:02:11.380 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:02:11.380 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:02:11.380 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:02:11.380 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:02:11.380 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:02:11.380 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:02:11.380 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:02:11.380 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:02:11.380 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:02:11.380 [11/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:02:11.380 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:02:11.380 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:02:11.642 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:02:11.642 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:02:11.642 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:02:11.642 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:02:11.642 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:02:11.642 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:02:11.642 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:02:11.642 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:02:11.642 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:02:11.642 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:02:11.642 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:02:11.642 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:02:11.642 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:02:11.642 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:02:11.642 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:02:11.642 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:02:11.642 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:02:11.642 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:02:11.642 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:02:11.642 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:02:11.642 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:02:11.642 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:02:11.642 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:02:11.642 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:02:11.642 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:02:11.642 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:02:11.642 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:02:11.642 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:02:11.642 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:02:11.642 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:02:11.642 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:02:11.642 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:02:11.642 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:02:11.642 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:02:11.642 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:02:11.642 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:02:11.642 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:02:11.642 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:02:11.903 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:02:11.903 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:02:11.903 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:02:11.903 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:02:11.903 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:02:11.903 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:02:11.903 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:02:11.903 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:02:11.903 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:02:11.903 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:02:11.903 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:02:11.903 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:02:11.903 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:02:11.903 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:02:11.903 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:02:11.903 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:02:11.903 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:02:11.903 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:02:11.903 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:02:11.903 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:02:11.903 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:02:11.903 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:02:12.162 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:02:12.162 [75/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:02:12.162 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:02:12.162 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:02:12.162 [78/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:02:12.162 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:02:12.162 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:02:12.162 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:02:12.162 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:02:12.162 [83/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:02:12.162 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:02:12.162 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:02:12.162 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:02:12.162 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:02:12.162 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:02:12.162 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:02:12.162 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:02:12.162 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:02:12.162 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:02:12.162 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:02:12.162 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:02:12.162 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:02:12.421 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:02:12.421 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:02:12.421 [98/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:02:12.421 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:02:12.421 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:02:12.421 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:02:12.421 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:02:12.421 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:02:12.421 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:02:12.421 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:02:12.421 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:02:12.421 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:02:12.421 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:02:12.421 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:02:12.421 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:02:12.421 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:02:12.421 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:02:12.421 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:02:12.421 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:02:12.421 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:02:12.421 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:02:12.421 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:02:12.421 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:02:12.421 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:02:12.421 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:02:12.421 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:02:12.421 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:02:12.421 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:02:12.421 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:02:12.421 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:02:12.421 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:02:12.421 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:02:12.421 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:02:12.421 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:02:12.421 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:02:12.421 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:02:12.680 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:02:12.680 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:02:12.680 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:02:12.680 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:02:12.680 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:02:12.680 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:02:12.680 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:02:12.680 [139/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:02:12.680 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:02:12.680 [141/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:02:12.680 [142/203] Compiling C object 
tests/xnvme_tests_async_intf.p/async_intf.c.o 00:02:12.680 [143/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:02:12.680 [144/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:02:12.680 [145/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:02:12.680 [146/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:02:12.680 [147/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:02:12.680 [148/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:02:12.680 [149/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:02:12.680 [150/203] Linking target lib/libxnvme.so 00:02:12.680 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:02:12.939 [152/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:02:12.939 [153/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:02:12.939 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:02:12.939 [155/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:02:12.939 [156/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:02:12.939 [157/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:02:12.939 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:02:12.939 [159/203] Compiling C object tools/xdd.p/xdd.c.o 00:02:12.939 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:02:12.939 [161/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:02:12.939 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:02:12.939 [163/203] Compiling C object tools/kvs.p/kvs.c.o 00:02:12.939 [164/203] Compiling C object tools/lblk.p/lblk.c.o 00:02:12.939 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:02:12.939 [166/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:02:12.939 [167/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:02:12.939 [168/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:02:12.939 [169/203] Compiling C object tools/zoned.p/zoned.c.o 00:02:13.198 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:02:13.198 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:02:13.198 [172/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:02:13.198 [173/203] Linking static target lib/libxnvme.a 00:02:13.198 [174/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:02:13.198 [175/203] Linking target tests/xnvme_tests_async_intf 00:02:13.198 [176/203] Linking target tests/xnvme_tests_cli 00:02:13.198 [177/203] Linking target tests/xnvme_tests_scc 00:02:13.198 [178/203] Linking target tests/xnvme_tests_enum 00:02:13.198 [179/203] Linking target tests/xnvme_tests_buf 00:02:13.198 [180/203] Linking target tests/xnvme_tests_lblk 00:02:13.198 [181/203] Linking target tests/xnvme_tests_znd_explicit_open 00:02:13.198 [182/203] Linking target tests/xnvme_tests_ioworker 00:02:13.198 [183/203] Linking target tests/xnvme_tests_znd_zrwa 00:02:13.198 [184/203] Linking target tests/xnvme_tests_xnvme_file 00:02:13.198 [185/203] Linking target tests/xnvme_tests_znd_state 00:02:13.198 [186/203] Linking target tests/xnvme_tests_xnvme_cli 00:02:13.198 [187/203] Linking target tests/xnvme_tests_znd_append 00:02:13.198 [188/203] Linking target tests/xnvme_tests_kvs 00:02:13.198 [189/203] Linking target tools/lblk 00:02:13.198 [190/203] Linking target tools/kvs 00:02:13.198 [191/203] Linking 
target tests/xnvme_tests_map 00:02:13.198 [192/203] Linking target tools/xdd 00:02:13.198 [193/203] Linking target tools/xnvme 00:02:13.198 [194/203] Linking target tools/xnvme_file 00:02:13.198 [195/203] Linking target examples/xnvme_enum 00:02:13.198 [196/203] Linking target tools/zoned 00:02:13.198 [197/203] Linking target examples/xnvme_dev 00:02:13.198 [198/203] Linking target examples/xnvme_hello 00:02:13.198 [199/203] Linking target examples/xnvme_io_async 00:02:13.198 [200/203] Linking target examples/zoned_io_sync 00:02:13.198 [201/203] Linking target examples/xnvme_single_async 00:02:13.198 [202/203] Linking target examples/xnvme_single_sync 00:02:13.198 [203/203] Linking target examples/zoned_io_async 00:02:13.198 INFO: autodetecting backend as ninja 00:02:13.198 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:13.198 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:18.460 The Meson build system 00:02:18.460 Version: 1.5.0 00:02:18.460 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:18.460 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:18.460 Build type: native build 00:02:18.460 Program cat found: YES (/usr/bin/cat) 00:02:18.460 Project name: DPDK 00:02:18.460 Project version: 23.11.0 00:02:18.460 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:18.460 C linker for the host machine: cc ld.bfd 2.40-14 00:02:18.460 Host machine cpu family: x86_64 00:02:18.460 Host machine cpu: x86_64 00:02:18.460 Message: ## Building in Developer Mode ## 00:02:18.460 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:18.460 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:18.460 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:18.460 Program python3 found: YES (/usr/bin/python3) 00:02:18.460 Program cat found: YES (/usr/bin/cat) 00:02:18.460 Compiler for C supports arguments -march=native: YES 00:02:18.460 Checking for size of "void *" : 8 00:02:18.460 Checking for size of "void *" : 8 (cached) 00:02:18.460 Library m found: YES 00:02:18.460 Library numa found: YES 00:02:18.460 Has header "numaif.h" : YES 00:02:18.460 Library fdt found: NO 00:02:18.460 Library execinfo found: NO 00:02:18.460 Has header "execinfo.h" : YES 00:02:18.460 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:18.460 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:18.460 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:18.460 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:18.460 Run-time dependency openssl found: YES 3.1.1 00:02:18.460 Run-time dependency libpcap found: YES 1.10.4 00:02:18.461 Has header "pcap.h" with dependency libpcap: YES 00:02:18.461 Compiler for C supports arguments -Wcast-qual: YES 00:02:18.461 Compiler for C supports arguments -Wdeprecated: YES 00:02:18.461 Compiler for C supports arguments -Wformat: YES 00:02:18.461 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:18.461 Compiler for C supports arguments -Wformat-security: NO 00:02:18.461 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:18.461 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:18.461 Compiler for C supports arguments -Wnested-externs: YES 00:02:18.461 Compiler for C supports arguments -Wold-style-definition: YES 00:02:18.461 Compiler for C supports arguments 
-Wpointer-arith: YES 00:02:18.461 Compiler for C supports arguments -Wsign-compare: YES 00:02:18.461 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:18.461 Compiler for C supports arguments -Wundef: YES 00:02:18.461 Compiler for C supports arguments -Wwrite-strings: YES 00:02:18.461 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:18.461 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:18.461 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:18.461 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:18.461 Program objdump found: YES (/usr/bin/objdump) 00:02:18.461 Compiler for C supports arguments -mavx512f: YES 00:02:18.461 Checking if "AVX512 checking" compiles: YES 00:02:18.461 Fetching value of define "__SSE4_2__" : 1 00:02:18.461 Fetching value of define "__AES__" : 1 00:02:18.461 Fetching value of define "__AVX__" : 1 00:02:18.461 Fetching value of define "__AVX2__" : 1 00:02:18.461 Fetching value of define "__AVX512BW__" : 1 00:02:18.461 Fetching value of define "__AVX512CD__" : 1 00:02:18.461 Fetching value of define "__AVX512DQ__" : 1 00:02:18.461 Fetching value of define "__AVX512F__" : 1 00:02:18.461 Fetching value of define "__AVX512VL__" : 1 00:02:18.461 Fetching value of define "__PCLMUL__" : 1 00:02:18.461 Fetching value of define "__RDRND__" : 1 00:02:18.461 Fetching value of define "__RDSEED__" : 1 00:02:18.461 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:18.461 Fetching value of define "__znver1__" : (undefined) 00:02:18.461 Fetching value of define "__znver2__" : (undefined) 00:02:18.461 Fetching value of define "__znver3__" : (undefined) 00:02:18.461 Fetching value of define "__znver4__" : (undefined) 00:02:18.461 Library asan found: YES 00:02:18.461 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:18.461 Message: lib/log: Defining dependency "log" 00:02:18.461 Message: lib/kvargs: Defining dependency "kvargs" 00:02:18.461 Message: lib/telemetry: Defining dependency "telemetry" 00:02:18.461 Library rt found: YES 00:02:18.461 Checking for function "getentropy" : NO 00:02:18.461 Message: lib/eal: Defining dependency "eal" 00:02:18.461 Message: lib/ring: Defining dependency "ring" 00:02:18.461 Message: lib/rcu: Defining dependency "rcu" 00:02:18.461 Message: lib/mempool: Defining dependency "mempool" 00:02:18.461 Message: lib/mbuf: Defining dependency "mbuf" 00:02:18.461 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:18.461 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:18.461 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:18.461 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:18.461 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:18.461 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:18.461 Compiler for C supports arguments -mpclmul: YES 00:02:18.461 Compiler for C supports arguments -maes: YES 00:02:18.461 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:18.461 Compiler for C supports arguments -mavx512bw: YES 00:02:18.461 Compiler for C supports arguments -mavx512dq: YES 00:02:18.461 Compiler for C supports arguments -mavx512vl: YES 00:02:18.461 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:18.461 Compiler for C supports arguments -mavx2: YES 00:02:18.461 Compiler for C supports arguments -mavx: YES 00:02:18.461 Message: lib/net: Defining dependency "net" 00:02:18.461 Message: lib/meter: Defining dependency "meter" 00:02:18.461 Message: 
00:02:18.461 Message: lib/pci: Defining dependency "pci"
00:02:18.461 Message: lib/cmdline: Defining dependency "cmdline"
00:02:18.461 Message: lib/hash: Defining dependency "hash"
00:02:18.461 Message: lib/timer: Defining dependency "timer"
00:02:18.461 Message: lib/compressdev: Defining dependency "compressdev"
00:02:18.461 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:18.461 Message: lib/dmadev: Defining dependency "dmadev"
00:02:18.461 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:18.461 Message: lib/power: Defining dependency "power"
00:02:18.461 Message: lib/reorder: Defining dependency "reorder"
00:02:18.461 Message: lib/security: Defining dependency "security"
00:02:18.461 Has header "linux/userfaultfd.h" : YES
00:02:18.461 Has header "linux/vduse.h" : YES
00:02:18.461 Message: lib/vhost: Defining dependency "vhost"
00:02:18.461 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:18.461 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:18.461 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:18.461 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:18.461 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:18.461 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:18.461 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:18.461 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:18.461 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:18.461 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:18.461 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:18.461 Configuring doxy-api-html.conf using configuration
00:02:18.461 Configuring doxy-api-man.conf using configuration
00:02:18.461 Program mandb found: YES (/usr/bin/mandb)
00:02:18.461 Program sphinx-build found: NO
00:02:18.461 Configuring rte_build_config.h using configuration
00:02:18.461 Message:
00:02:18.461 =================
00:02:18.461 Applications Enabled
00:02:18.461 =================
00:02:18.461
00:02:18.461 apps:
00:02:18.461
00:02:18.461
00:02:18.461 Message:
00:02:18.461 =================
00:02:18.461 Libraries Enabled
00:02:18.461 =================
00:02:18.461
00:02:18.461 libs:
00:02:18.461 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:18.461 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:18.461 cryptodev, dmadev, power, reorder, security, vhost,
00:02:18.461
00:02:18.461 Message:
00:02:18.461 ===============
00:02:18.461 Drivers Enabled
00:02:18.461 ===============
00:02:18.461
00:02:18.461 common:
00:02:18.461
00:02:18.461 bus:
00:02:18.461 pci, vdev,
00:02:18.461 mempool:
00:02:18.461 ring,
00:02:18.461 dma:
00:02:18.461
00:02:18.461 net:
00:02:18.461
00:02:18.461 crypto:
00:02:18.461
00:02:18.461 compress:
00:02:18.461
00:02:18.461 vdpa:
00:02:18.461
00:02:18.461
00:02:18.461 Message:
00:02:18.461 =================
00:02:18.461 Content Skipped
00:02:18.461 =================
00:02:18.461
00:02:18.461 apps:
00:02:18.461 dumpcap: explicitly disabled via build config
00:02:18.461 graph: explicitly disabled via build config
00:02:18.461 pdump: explicitly disabled via build config
00:02:18.461 proc-info: explicitly disabled via build config
00:02:18.461 test-acl: explicitly disabled via build config
00:02:18.461 test-bbdev: explicitly disabled via build config
00:02:18.461 test-cmdline: explicitly disabled via build config
00:02:18.461 test-compress-perf: explicitly disabled via build config
00:02:18.461 test-crypto-perf: explicitly disabled via build config
00:02:18.461 test-dma-perf: explicitly disabled via build config
00:02:18.461 test-eventdev: explicitly disabled via build config
00:02:18.461 test-fib: explicitly disabled via build config
00:02:18.461 test-flow-perf: explicitly disabled via build config
00:02:18.461 test-gpudev: explicitly disabled via build config
00:02:18.461 test-mldev: explicitly disabled via build config
00:02:18.461 test-pipeline: explicitly disabled via build config
00:02:18.461 test-pmd: explicitly disabled via build config
00:02:18.461 test-regex: explicitly disabled via build config
00:02:18.461 test-sad: explicitly disabled via build config
00:02:18.461 test-security-perf: explicitly disabled via build config
00:02:18.461
00:02:18.461 libs:
00:02:18.461 metrics: explicitly disabled via build config
00:02:18.461 acl: explicitly disabled via build config
00:02:18.461 bbdev: explicitly disabled via build config
00:02:18.461 bitratestats: explicitly disabled via build config
00:02:18.461 bpf: explicitly disabled via build config
00:02:18.461 cfgfile: explicitly disabled via build config
00:02:18.461 distributor: explicitly disabled via build config
00:02:18.461 efd: explicitly disabled via build config
00:02:18.461 eventdev: explicitly disabled via build config
00:02:18.461 dispatcher: explicitly disabled via build config
00:02:18.461 gpudev: explicitly disabled via build config
00:02:18.461 gro: explicitly disabled via build config
00:02:18.461 gso: explicitly disabled via build config
00:02:18.461 ip_frag: explicitly disabled via build config
00:02:18.461 jobstats: explicitly disabled via build config
00:02:18.461 latencystats: explicitly disabled via build config
00:02:18.461 lpm: explicitly disabled via build config
00:02:18.461 member: explicitly disabled via build config
00:02:18.461 pcapng: explicitly disabled via build config
00:02:18.461 rawdev: explicitly disabled via build config
00:02:18.461 regexdev: explicitly disabled via build config
00:02:18.461 mldev: explicitly disabled via build config
00:02:18.461 rib: explicitly disabled via build config
00:02:18.461 sched: explicitly disabled via build config
00:02:18.461 stack: explicitly disabled via build config
00:02:18.461 ipsec: explicitly disabled via build config
00:02:18.461 pdcp: explicitly disabled via build config
00:02:18.461 fib: explicitly disabled via build config
00:02:18.461 port: explicitly disabled via build config
00:02:18.461 pdump: explicitly disabled via build config
00:02:18.461 table: explicitly disabled via build config
00:02:18.461 pipeline: explicitly disabled via build config
00:02:18.461 graph: explicitly disabled via build config
00:02:18.461 node: explicitly disabled via build config
00:02:18.461
00:02:18.461 drivers:
00:02:18.461 common/cpt: not in enabled drivers build config
00:02:18.461 common/dpaax: not in enabled drivers build config
00:02:18.462 common/iavf: not in enabled drivers build config
00:02:18.462 common/idpf: not in enabled drivers build config
00:02:18.462 common/mvep: not in enabled drivers build config
00:02:18.462 common/octeontx: not in enabled drivers build config
00:02:18.462 bus/auxiliary: not in enabled drivers build config
00:02:18.462 bus/cdx: not in enabled drivers build config
00:02:18.462 bus/dpaa: not in enabled drivers build config
00:02:18.462 bus/fslmc: not in enabled drivers build config
00:02:18.462 bus/ifpga: not in enabled drivers build config
00:02:18.462 bus/platform: not in enabled drivers build config
00:02:18.462 bus/vmbus: not in enabled drivers build config
00:02:18.462 common/cnxk: not in enabled drivers build config
00:02:18.462 common/mlx5: not in enabled drivers build config
00:02:18.462 common/nfp: not in enabled drivers build config
00:02:18.462 common/qat: not in enabled drivers build config
00:02:18.462 common/sfc_efx: not in enabled drivers build config
00:02:18.462 mempool/bucket: not in enabled drivers build config
00:02:18.462 mempool/cnxk: not in enabled drivers build config
00:02:18.462 mempool/dpaa: not in enabled drivers build config
00:02:18.462 mempool/dpaa2: not in enabled drivers build config
00:02:18.462 mempool/octeontx: not in enabled drivers build config
00:02:18.462 mempool/stack: not in enabled drivers build config
00:02:18.462 dma/cnxk: not in enabled drivers build config
00:02:18.462 dma/dpaa: not in enabled drivers build config
00:02:18.462 dma/dpaa2: not in enabled drivers build config
00:02:18.462 dma/hisilicon: not in enabled drivers build config
00:02:18.462 dma/idxd: not in enabled drivers build config
00:02:18.462 dma/ioat: not in enabled drivers build config
00:02:18.462 dma/skeleton: not in enabled drivers build config
00:02:18.462 net/af_packet: not in enabled drivers build config
00:02:18.462 net/af_xdp: not in enabled drivers build config
00:02:18.462 net/ark: not in enabled drivers build config
00:02:18.462 net/atlantic: not in enabled drivers build config
00:02:18.462 net/avp: not in enabled drivers build config
00:02:18.462 net/axgbe: not in enabled drivers build config
00:02:18.462 net/bnx2x: not in enabled drivers build config
00:02:18.462 net/bnxt: not in enabled drivers build config
00:02:18.462 net/bonding: not in enabled drivers build config
00:02:18.462 net/cnxk: not in enabled drivers build config
00:02:18.462 net/cpfl: not in enabled drivers build config
00:02:18.462 net/cxgbe: not in enabled drivers build config
00:02:18.462 net/dpaa: not in enabled drivers build config
00:02:18.462 net/dpaa2: not in enabled drivers build config
00:02:18.462 net/e1000: not in enabled drivers build config
00:02:18.462 net/ena: not in enabled drivers build config
00:02:18.462 net/enetc: not in enabled drivers build config
00:02:18.462 net/enetfec: not in enabled drivers build config
00:02:18.462 net/enic: not in enabled drivers build config
00:02:18.462 net/failsafe: not in enabled drivers build config
00:02:18.462 net/fm10k: not in enabled drivers build config
00:02:18.462 net/gve: not in enabled drivers build config
00:02:18.462 net/hinic: not in enabled drivers build config
00:02:18.462 net/hns3: not in enabled drivers build config
00:02:18.462 net/i40e: not in enabled drivers build config
00:02:18.462 net/iavf: not in enabled drivers build config
00:02:18.462 net/ice: not in enabled drivers build config
00:02:18.462 net/idpf: not in enabled drivers build config
00:02:18.462 net/igc: not in enabled drivers build config
00:02:18.462 net/ionic: not in enabled drivers build config
00:02:18.462 net/ipn3ke: not in enabled drivers build config
00:02:18.462 net/ixgbe: not in enabled drivers build config
00:02:18.462 net/mana: not in enabled drivers build config
00:02:18.462 net/memif: not in enabled drivers build config
00:02:18.462 net/mlx4: not in enabled drivers build config
00:02:18.462 net/mlx5: not in enabled drivers build config
00:02:18.462 net/mvneta: not in enabled drivers build config
00:02:18.462 net/mvpp2: not in enabled drivers build config
00:02:18.462 net/netvsc: not in enabled drivers build config
00:02:18.462 net/nfb: not in enabled drivers build config
00:02:18.462 net/nfp: not in enabled drivers build config
00:02:18.462 net/ngbe: not in enabled drivers build config
00:02:18.462 net/null: not in enabled drivers build config
00:02:18.462 net/octeontx: not in enabled drivers build config
00:02:18.462 net/octeon_ep: not in enabled drivers build config
00:02:18.462 net/pcap: not in enabled drivers build config
00:02:18.462 net/pfe: not in enabled drivers build config
00:02:18.462 net/qede: not in enabled drivers build config
00:02:18.462 net/ring: not in enabled drivers build config
00:02:18.462 net/sfc: not in enabled drivers build config
00:02:18.462 net/softnic: not in enabled drivers build config
00:02:18.462 net/tap: not in enabled drivers build config
00:02:18.462 net/thunderx: not in enabled drivers build config
00:02:18.462 net/txgbe: not in enabled drivers build config
00:02:18.462 net/vdev_netvsc: not in enabled drivers build config
00:02:18.462 net/vhost: not in enabled drivers build config
00:02:18.462 net/virtio: not in enabled drivers build config
00:02:18.462 net/vmxnet3: not in enabled drivers build config
00:02:18.462 raw/*: missing internal dependency, "rawdev"
00:02:18.462 crypto/armv8: not in enabled drivers build config
00:02:18.462 crypto/bcmfs: not in enabled drivers build config
00:02:18.462 crypto/caam_jr: not in enabled drivers build config
00:02:18.462 crypto/ccp: not in enabled drivers build config
00:02:18.462 crypto/cnxk: not in enabled drivers build config
00:02:18.462 crypto/dpaa_sec: not in enabled drivers build config
00:02:18.462 crypto/dpaa2_sec: not in enabled drivers build config
00:02:18.462 crypto/ipsec_mb: not in enabled drivers build config
00:02:18.462 crypto/mlx5: not in enabled drivers build config
00:02:18.462 crypto/mvsam: not in enabled drivers build config
00:02:18.462 crypto/nitrox: not in enabled drivers build config
00:02:18.462 crypto/null: not in enabled drivers build config
00:02:18.462 crypto/octeontx: not in enabled drivers build config
00:02:18.462 crypto/openssl: not in enabled drivers build config
00:02:18.462 crypto/scheduler: not in enabled drivers build config
00:02:18.462 crypto/uadk: not in enabled drivers build config
00:02:18.462 crypto/virtio: not in enabled drivers build config
00:02:18.462 compress/isal: not in enabled drivers build config
00:02:18.462 compress/mlx5: not in enabled drivers build config
00:02:18.462 compress/octeontx: not in enabled drivers build config
00:02:18.462 compress/zlib: not in enabled drivers build config
00:02:18.462 regex/*: missing internal dependency, "regexdev"
00:02:18.462 ml/*: missing internal dependency, "mldev"
00:02:18.462 vdpa/ifc: not in enabled drivers build config
00:02:18.462 vdpa/mlx5: not in enabled drivers build config
00:02:18.462 vdpa/nfp: not in enabled drivers build config
00:02:18.462 vdpa/sfc: not in enabled drivers build config
00:02:18.462 event/*: missing internal dependency, "eventdev"
00:02:18.462 baseband/*: missing internal dependency, "bbdev"
00:02:18.462 gpu/*: missing internal dependency, "gpudev"
00:02:18.462
00:02:18.462
00:02:18.462 Build targets in project: 84
00:02:18.462
00:02:18.462 DPDK 23.11.0
00:02:18.462
00:02:18.462 User defined options
00:02:18.462 buildtype : debug
00:02:18.462 default_library : shared
00:02:18.462 libdir : lib
00:02:18.462 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:18.462 b_sanitize : address
00:02:18.462 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds
00:02:18.462 c_link_args :
00:02:18.462 cpu_instruction_set: native
00:02:18.462 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:18.462 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:18.462 enable_docs : false
00:02:18.462 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:18.462 enable_kmods : false
00:02:18.462 tests : false
00:02:18.462
00:02:18.462 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:18.720 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:18.720 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:18.720 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:18.720 [3/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:18.978 [4/264] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:18.978 [5/264] Linking static target lib/librte_kvargs.a
00:02:18.978 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:18.978 [7/264] Linking static target lib/librte_log.a
00:02:18.978 [8/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:18.978 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:18.978 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:19.235 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:19.236 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:19.236 [13/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:19.236 [14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:19.236 [15/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:19.236 [16/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:19.236 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:19.236 [18/264] Linking static target lib/librte_telemetry.a
00:02:19.236 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:19.493 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:19.493 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:19.493 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:19.493 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:19.493 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:19.753 [25/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:19.753 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:19.753 [27/264] Linking target lib/librte_log.so.24.0
00:02:19.753 [28/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:19.753 [29/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:19.753 [30/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:19.753 [31/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:02:20.018 [32/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:20.018 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:20.018 [34/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:20.018 [35/264] Linking target lib/librte_kvargs.so.24.0
00:02:20.018 [36/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.018 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:20.018 [38/264] Linking target lib/librte_telemetry.so.24.0
00:02:20.018 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:20.018 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:20.018 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:20.018 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:20.018 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:20.018 [44/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:02:20.018 [45/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:02:20.277 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:20.277 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:20.277 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:20.277 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:20.277 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:20.277 [51/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:20.535 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:20.535 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:20.535 [54/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:20.535 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:20.535 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:20.535 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:20.535 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:20.535 [59/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:20.535 [60/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:20.535 [61/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:20.793 [62/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:20.793 [63/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:20.793 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:20.793 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:20.793 [66/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:20.793 [67/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:20.793 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
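The "User defined options" summary above corresponds to arguments passed at meson setup time. A hand-run equivalent might look like the sketch below; this is illustrative only, since SPDK's build scripts generate the real invocation, and the two long disable lists are abbreviated here (their full values appear in the summary above):

  meson setup build-tmp \
      --buildtype=debug \
      --default-library=shared \
      --libdir=lib \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Db_sanitize=address \
      -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds' \
      -Dcpu_instruction_set=native \
      -Ddisable_apps=dumpcap,graph,... \
      -Ddisable_libs=acl,bbdev,... \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Denable_kmods=false \
      -Dtests=false

With b_sanitize=address and a debug buildtype, every object in the ninja run that follows is compiled with ASan instrumentation, which is why "Library asan found: YES" was probed during configuration.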
00:02:20.793 [69/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:21.050 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:21.050 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:21.050 [72/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:21.050 [73/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:21.050 [74/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:21.050 [75/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:21.050 [76/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:21.050 [77/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:21.308 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:21.308 [79/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:21.308 [80/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:21.308 [81/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:21.308 [82/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:21.566 [83/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:21.566 [84/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:21.566 [85/264] Linking static target lib/librte_ring.a 00:02:21.566 [86/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:21.566 [87/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:21.566 [88/264] Linking static target lib/librte_eal.a 00:02:21.566 [89/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:21.824 [90/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:21.824 [91/264] Linking static target lib/librte_rcu.a 00:02:21.824 [92/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:21.824 [93/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:21.824 [94/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:21.824 [95/264] Linking static target lib/librte_mempool.a 00:02:22.083 [96/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.083 [97/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:22.083 [98/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:22.083 [99/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.083 [100/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:22.341 [101/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:22.341 [102/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:22.341 [103/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:22.341 [104/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:22.341 [105/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:22.341 [106/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:22.341 [107/264] Linking static target lib/librte_mbuf.a 00:02:22.341 [108/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:22.341 [109/264] Linking static target lib/librte_net.a 00:02:22.341 [110/264] Linking static target lib/librte_meter.a 00:02:22.600 [111/264] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:22.600 [112/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:22.600 [113/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.600 [114/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.600 [115/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:22.857 [116/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.857 [117/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:22.857 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:23.116 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:23.116 [120/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:23.116 [121/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.375 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:23.375 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:23.375 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:23.375 [125/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:23.375 [126/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:23.375 [127/264] Linking static target lib/librte_pci.a 00:02:23.375 [128/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:23.375 [129/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:23.375 [130/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:23.375 [131/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:23.375 [132/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:23.633 [133/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:23.633 [134/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:23.633 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:23.633 [136/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:23.633 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:23.633 [138/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.633 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:23.633 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:23.633 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:23.634 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:23.634 [143/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:23.634 [144/264] Linking static target lib/librte_cmdline.a 00:02:23.892 [145/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:23.892 [146/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:23.892 [147/264] Linking static target lib/librte_timer.a 00:02:23.892 [148/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:23.892 [149/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:24.151 [150/264] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:24.151 [151/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:24.151 [152/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:24.151 [153/264] Linking static target lib/librte_compressdev.a 00:02:24.151 [154/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:24.409 [155/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:24.409 [156/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:24.409 [157/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.409 [158/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:24.409 [159/264] Linking static target lib/librte_hash.a 00:02:24.409 [160/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:24.409 [161/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:24.409 [162/264] Linking static target lib/librte_dmadev.a 00:02:24.409 [163/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:24.668 [164/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:24.668 [165/264] Linking static target lib/librte_ethdev.a 00:02:24.668 [166/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:24.668 [167/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:24.668 [168/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:24.927 [169/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.927 [170/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:24.927 [171/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.927 [172/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.927 [173/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:25.185 [174/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:25.185 [175/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:25.185 [176/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:25.185 [177/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:25.185 [178/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.185 [179/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:25.185 [180/264] Linking static target lib/librte_power.a 00:02:25.444 [181/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:25.444 [182/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:25.444 [183/264] Linking static target lib/librte_reorder.a 00:02:25.444 [184/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:25.444 [185/264] Linking static target lib/librte_cryptodev.a 00:02:25.444 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:25.444 [187/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:25.702 [188/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:25.702 [189/264] Linking static target lib/librte_security.a 00:02:25.702 [190/264] Generating lib/reorder.sym_chk with a custom command (wrapped 
by meson to capture output) 00:02:25.702 [191/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.961 [192/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:25.961 [193/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:25.961 [194/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.961 [195/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:25.961 [196/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:26.220 [197/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:26.220 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:26.220 [199/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:26.220 [200/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:26.478 [201/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:26.478 [202/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:26.478 [203/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:26.478 [204/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:26.478 [205/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:26.478 [206/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:26.736 [207/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:26.736 [208/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.736 [209/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:26.736 [210/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:26.736 [211/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:26.736 [212/264] Linking static target drivers/librte_bus_vdev.a 00:02:26.736 [213/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:26.736 [214/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:26.736 [215/264] Linking static target drivers/librte_bus_pci.a 00:02:26.736 [216/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:26.736 [217/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:26.736 [218/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.995 [219/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:26.995 [220/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:26.995 [221/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:26.995 [222/264] Linking static target drivers/librte_mempool_ring.a 00:02:26.995 [223/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.562 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:28.495 [225/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.495 [226/264] Linking target lib/librte_eal.so.24.0 00:02:28.495 [227/264] Generating symbol file 
lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:28.495 [228/264] Linking target lib/librte_ring.so.24.0 00:02:28.495 [229/264] Linking target lib/librte_dmadev.so.24.0 00:02:28.495 [230/264] Linking target lib/librte_pci.so.24.0 00:02:28.495 [231/264] Linking target lib/librte_timer.so.24.0 00:02:28.495 [232/264] Linking target lib/librte_meter.so.24.0 00:02:28.495 [233/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:28.495 [234/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:28.496 [235/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:28.753 [236/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:28.753 [237/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:28.753 [238/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:28.753 [239/264] Linking target lib/librte_mempool.so.24.0 00:02:28.753 [240/264] Linking target lib/librte_rcu.so.24.0 00:02:28.753 [241/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:28.753 [242/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:28.753 [243/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:28.753 [244/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:28.753 [245/264] Linking target lib/librte_mbuf.so.24.0 00:02:29.011 [246/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:29.011 [247/264] Linking target lib/librte_net.so.24.0 00:02:29.011 [248/264] Linking target lib/librte_compressdev.so.24.0 00:02:29.011 [249/264] Linking target lib/librte_reorder.so.24.0 00:02:29.011 [250/264] Linking target lib/librte_cryptodev.so.24.0 00:02:29.011 [251/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:29.011 [252/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:29.011 [253/264] Linking target lib/librte_cmdline.so.24.0 00:02:29.011 [254/264] Linking target lib/librte_hash.so.24.0 00:02:29.011 [255/264] Linking target lib/librte_security.so.24.0 00:02:29.269 [256/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:29.528 [257/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.787 [258/264] Linking target lib/librte_ethdev.so.24.0 00:02:29.787 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:29.787 [260/264] Linking target lib/librte_power.so.24.0 00:02:30.046 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:30.046 [262/264] Linking static target lib/librte_vhost.a 00:02:31.422 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.422 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:31.422 INFO: autodetecting backend as ninja 00:02:31.422 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:31.987 CC lib/log/log.o 00:02:31.987 CC lib/log/log_flags.o 00:02:31.987 CC lib/log/log_deprecated.o 00:02:31.987 CC lib/ut_mock/mock.o 00:02:31.987 CC lib/ut/ut.o 00:02:32.245 LIB libspdk_ut_mock.a 00:02:32.245 LIB libspdk_log.a 00:02:32.245 SO libspdk_ut_mock.so.5.0 00:02:32.245 LIB libspdk_ut.a 00:02:32.245 SO libspdk_ut.so.1.0 00:02:32.245 SO 
libspdk_log.so.6.1 00:02:32.245 SYMLINK libspdk_ut_mock.so 00:02:32.245 SYMLINK libspdk_ut.so 00:02:32.245 SYMLINK libspdk_log.so 00:02:32.502 CC lib/ioat/ioat.o 00:02:32.502 CC lib/util/bit_array.o 00:02:32.502 CC lib/util/base64.o 00:02:32.502 CC lib/util/cpuset.o 00:02:32.502 CC lib/util/crc16.o 00:02:32.502 CC lib/util/crc32.o 00:02:32.502 CC lib/util/crc32c.o 00:02:32.502 CC lib/dma/dma.o 00:02:32.502 CXX lib/trace_parser/trace.o 00:02:32.502 CC lib/vfio_user/host/vfio_user_pci.o 00:02:32.502 CC lib/util/crc32_ieee.o 00:02:32.502 CC lib/util/crc64.o 00:02:32.502 LIB libspdk_dma.a 00:02:32.502 SO libspdk_dma.so.3.0 00:02:32.502 CC lib/util/dif.o 00:02:32.502 CC lib/vfio_user/host/vfio_user.o 00:02:32.502 CC lib/util/fd.o 00:02:32.502 CC lib/util/file.o 00:02:32.502 CC lib/util/hexlify.o 00:02:32.502 CC lib/util/iov.o 00:02:32.502 SYMLINK libspdk_dma.so 00:02:32.502 CC lib/util/math.o 00:02:32.761 LIB libspdk_ioat.a 00:02:32.761 CC lib/util/pipe.o 00:02:32.761 SO libspdk_ioat.so.6.0 00:02:32.761 CC lib/util/strerror_tls.o 00:02:32.761 CC lib/util/string.o 00:02:32.761 CC lib/util/uuid.o 00:02:32.761 CC lib/util/fd_group.o 00:02:32.761 SYMLINK libspdk_ioat.so 00:02:32.761 CC lib/util/xor.o 00:02:32.761 CC lib/util/zipf.o 00:02:32.761 LIB libspdk_vfio_user.a 00:02:32.761 SO libspdk_vfio_user.so.4.0 00:02:32.761 SYMLINK libspdk_vfio_user.so 00:02:33.021 LIB libspdk_util.a 00:02:33.280 SO libspdk_util.so.8.0 00:02:33.280 LIB libspdk_trace_parser.a 00:02:33.280 SO libspdk_trace_parser.so.4.0 00:02:33.280 SYMLINK libspdk_util.so 00:02:33.280 SYMLINK libspdk_trace_parser.so 00:02:33.280 CC lib/rdma/common.o 00:02:33.280 CC lib/rdma/rdma_verbs.o 00:02:33.280 CC lib/vmd/vmd.o 00:02:33.280 CC lib/vmd/led.o 00:02:33.280 CC lib/conf/conf.o 00:02:33.280 CC lib/idxd/idxd_user.o 00:02:33.280 CC lib/idxd/idxd.o 00:02:33.280 CC lib/env_dpdk/memory.o 00:02:33.280 CC lib/env_dpdk/env.o 00:02:33.280 CC lib/json/json_parse.o 00:02:33.538 CC lib/env_dpdk/pci.o 00:02:33.538 CC lib/env_dpdk/init.o 00:02:33.538 LIB libspdk_conf.a 00:02:33.538 CC lib/json/json_util.o 00:02:33.538 SO libspdk_conf.so.5.0 00:02:33.538 SYMLINK libspdk_conf.so 00:02:33.538 CC lib/env_dpdk/threads.o 00:02:33.538 CC lib/env_dpdk/pci_ioat.o 00:02:33.538 LIB libspdk_rdma.a 00:02:33.796 SO libspdk_rdma.so.5.0 00:02:33.796 CC lib/env_dpdk/pci_virtio.o 00:02:33.796 CC lib/env_dpdk/pci_vmd.o 00:02:33.796 SYMLINK libspdk_rdma.so 00:02:33.796 CC lib/env_dpdk/pci_idxd.o 00:02:33.796 CC lib/env_dpdk/pci_event.o 00:02:33.796 CC lib/json/json_write.o 00:02:33.796 CC lib/idxd/idxd_kernel.o 00:02:33.796 CC lib/env_dpdk/sigbus_handler.o 00:02:33.796 CC lib/env_dpdk/pci_dpdk.o 00:02:33.796 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:33.796 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:33.796 LIB libspdk_vmd.a 00:02:33.796 LIB libspdk_idxd.a 00:02:33.796 SO libspdk_vmd.so.5.0 00:02:34.055 SO libspdk_idxd.so.11.0 00:02:34.055 SYMLINK libspdk_vmd.so 00:02:34.055 SYMLINK libspdk_idxd.so 00:02:34.055 LIB libspdk_json.a 00:02:34.055 SO libspdk_json.so.5.1 00:02:34.055 SYMLINK libspdk_json.so 00:02:34.316 CC lib/jsonrpc/jsonrpc_server.o 00:02:34.316 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:34.316 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:34.316 CC lib/jsonrpc/jsonrpc_client.o 00:02:34.578 LIB libspdk_jsonrpc.a 00:02:34.578 SO libspdk_jsonrpc.so.5.1 00:02:34.578 SYMLINK libspdk_jsonrpc.so 00:02:34.838 LIB libspdk_env_dpdk.a 00:02:34.838 CC lib/rpc/rpc.o 00:02:34.838 SO libspdk_env_dpdk.so.13.0 00:02:34.838 SYMLINK libspdk_env_dpdk.so 00:02:34.838 LIB libspdk_rpc.a 
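From this point the log is SPDK's own quiet-make output rather than DPDK's ninja output: CC lines compile objects, LIB lines archive a static library, SO lines link the versioned shared object, and SYMLINK lines create the unversioned development name. A rough sketch of what one such group amounts to (illustrative; the real rules in SPDK's mk/ add many more flags and a symbol map, and the soname value below is an assumption inferred from the .so.6.1-style suffixes above):

  cc -fPIC -c log.c -o log.o                      # CC lib/log/log.o
  ar rcs libspdk_log.a log.o                      # LIB libspdk_log.a
  cc -shared -Wl,-soname,libspdk_log.so.6 \
      -o libspdk_log.so.6.1 log.o                 # SO libspdk_log.so.6.1
  ln -sf libspdk_log.so.6.1 libspdk_log.so        # SYMLINK libspdk_log.so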
00:02:34.838 SO libspdk_rpc.so.5.0 00:02:35.099 SYMLINK libspdk_rpc.so 00:02:35.099 CC lib/sock/sock.o 00:02:35.099 CC lib/sock/sock_rpc.o 00:02:35.100 CC lib/notify/notify.o 00:02:35.100 CC lib/notify/notify_rpc.o 00:02:35.100 CC lib/trace/trace.o 00:02:35.100 CC lib/trace/trace_rpc.o 00:02:35.100 CC lib/trace/trace_flags.o 00:02:35.361 LIB libspdk_notify.a 00:02:35.361 SO libspdk_notify.so.5.0 00:02:35.361 LIB libspdk_trace.a 00:02:35.361 SO libspdk_trace.so.9.0 00:02:35.361 SYMLINK libspdk_notify.so 00:02:35.361 SYMLINK libspdk_trace.so 00:02:35.620 LIB libspdk_sock.a 00:02:35.620 SO libspdk_sock.so.8.0 00:02:35.620 CC lib/thread/iobuf.o 00:02:35.620 CC lib/thread/thread.o 00:02:35.620 SYMLINK libspdk_sock.so 00:02:35.620 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:35.620 CC lib/nvme/nvme_ns_cmd.o 00:02:35.620 CC lib/nvme/nvme_ctrlr.o 00:02:35.620 CC lib/nvme/nvme_fabric.o 00:02:35.620 CC lib/nvme/nvme_pcie_common.o 00:02:35.620 CC lib/nvme/nvme_qpair.o 00:02:35.620 CC lib/nvme/nvme_pcie.o 00:02:35.620 CC lib/nvme/nvme_ns.o 00:02:35.879 CC lib/nvme/nvme.o 00:02:36.139 CC lib/nvme/nvme_quirks.o 00:02:36.398 CC lib/nvme/nvme_transport.o 00:02:36.398 CC lib/nvme/nvme_discovery.o 00:02:36.398 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:36.398 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:36.398 CC lib/nvme/nvme_tcp.o 00:02:36.656 CC lib/nvme/nvme_opal.o 00:02:36.656 CC lib/nvme/nvme_io_msg.o 00:02:36.656 CC lib/nvme/nvme_poll_group.o 00:02:36.923 CC lib/nvme/nvme_zns.o 00:02:36.923 CC lib/nvme/nvme_cuse.o 00:02:36.923 CC lib/nvme/nvme_vfio_user.o 00:02:36.923 LIB libspdk_thread.a 00:02:36.923 CC lib/nvme/nvme_rdma.o 00:02:36.923 SO libspdk_thread.so.9.0 00:02:37.186 SYMLINK libspdk_thread.so 00:02:37.186 CC lib/accel/accel.o 00:02:37.186 CC lib/blob/blobstore.o 00:02:37.186 CC lib/accel/accel_rpc.o 00:02:37.186 CC lib/blob/request.o 00:02:37.186 CC lib/accel/accel_sw.o 00:02:37.453 CC lib/blob/zeroes.o 00:02:37.453 CC lib/blob/blob_bs_dev.o 00:02:37.453 CC lib/init/json_config.o 00:02:37.453 CC lib/init/subsystem.o 00:02:37.453 CC lib/init/subsystem_rpc.o 00:02:37.712 CC lib/virtio/virtio.o 00:02:37.712 CC lib/init/rpc.o 00:02:37.712 CC lib/virtio/virtio_vhost_user.o 00:02:37.712 CC lib/virtio/virtio_vfio_user.o 00:02:37.712 CC lib/virtio/virtio_pci.o 00:02:37.712 LIB libspdk_init.a 00:02:37.712 SO libspdk_init.so.4.0 00:02:37.712 SYMLINK libspdk_init.so 00:02:37.970 CC lib/event/app.o 00:02:37.970 CC lib/event/reactor.o 00:02:37.970 CC lib/event/app_rpc.o 00:02:37.970 LIB libspdk_accel.a 00:02:37.970 CC lib/event/log_rpc.o 00:02:37.970 CC lib/event/scheduler_static.o 00:02:37.970 LIB libspdk_virtio.a 00:02:37.970 SO libspdk_accel.so.14.0 00:02:37.970 SO libspdk_virtio.so.6.0 00:02:37.970 SYMLINK libspdk_accel.so 00:02:37.970 SYMLINK libspdk_virtio.so 00:02:38.229 CC lib/bdev/bdev.o 00:02:38.229 CC lib/bdev/bdev_zone.o 00:02:38.229 CC lib/bdev/scsi_nvme.o 00:02:38.229 CC lib/bdev/bdev_rpc.o 00:02:38.229 CC lib/bdev/part.o 00:02:38.229 LIB libspdk_event.a 00:02:38.229 SO libspdk_event.so.12.0 00:02:38.229 LIB libspdk_nvme.a 00:02:38.487 SYMLINK libspdk_event.so 00:02:38.487 SO libspdk_nvme.so.12.0 00:02:38.746 SYMLINK libspdk_nvme.so 00:02:39.681 LIB libspdk_blob.a 00:02:39.681 SO libspdk_blob.so.10.1 00:02:39.938 SYMLINK libspdk_blob.so 00:02:39.938 CC lib/lvol/lvol.o 00:02:39.938 CC lib/blobfs/blobfs.o 00:02:39.938 CC lib/blobfs/tree.o 00:02:40.196 LIB libspdk_bdev.a 00:02:40.196 SO libspdk_bdev.so.14.0 00:02:40.454 SYMLINK libspdk_bdev.so 00:02:40.454 CC lib/nbd/nbd.o 00:02:40.454 CC lib/nbd/nbd_rpc.o 
00:02:40.454 CC lib/ftl/ftl_core.o 00:02:40.454 CC lib/ftl/ftl_init.o 00:02:40.454 CC lib/ftl/ftl_layout.o 00:02:40.454 CC lib/scsi/dev.o 00:02:40.454 CC lib/ublk/ublk.o 00:02:40.454 CC lib/nvmf/ctrlr.o 00:02:40.715 LIB libspdk_blobfs.a 00:02:40.715 CC lib/ublk/ublk_rpc.o 00:02:40.715 SO libspdk_blobfs.so.9.0 00:02:40.715 CC lib/scsi/lun.o 00:02:40.715 CC lib/scsi/port.o 00:02:40.715 SYMLINK libspdk_blobfs.so 00:02:40.715 CC lib/scsi/scsi.o 00:02:40.715 CC lib/ftl/ftl_debug.o 00:02:40.715 CC lib/scsi/scsi_bdev.o 00:02:40.715 CC lib/scsi/scsi_pr.o 00:02:40.715 CC lib/scsi/scsi_rpc.o 00:02:40.715 CC lib/scsi/task.o 00:02:40.715 LIB libspdk_lvol.a 00:02:40.973 SO libspdk_lvol.so.9.1 00:02:40.973 CC lib/ftl/ftl_io.o 00:02:40.973 LIB libspdk_nbd.a 00:02:40.973 SYMLINK libspdk_lvol.so 00:02:40.973 SO libspdk_nbd.so.6.0 00:02:40.973 CC lib/ftl/ftl_sb.o 00:02:40.973 CC lib/ftl/ftl_l2p.o 00:02:40.973 CC lib/ftl/ftl_l2p_flat.o 00:02:40.973 SYMLINK libspdk_nbd.so 00:02:40.973 CC lib/ftl/ftl_nv_cache.o 00:02:40.973 CC lib/ftl/ftl_band.o 00:02:40.973 LIB libspdk_ublk.a 00:02:40.973 SO libspdk_ublk.so.2.0 00:02:40.973 CC lib/ftl/ftl_band_ops.o 00:02:40.973 CC lib/ftl/ftl_writer.o 00:02:40.973 SYMLINK libspdk_ublk.so 00:02:40.973 CC lib/ftl/ftl_rq.o 00:02:40.973 CC lib/ftl/ftl_reloc.o 00:02:41.231 CC lib/ftl/ftl_l2p_cache.o 00:02:41.231 CC lib/ftl/ftl_p2l.o 00:02:41.231 LIB libspdk_scsi.a 00:02:41.231 SO libspdk_scsi.so.8.0 00:02:41.231 CC lib/ftl/mngt/ftl_mngt.o 00:02:41.231 SYMLINK libspdk_scsi.so 00:02:41.231 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:41.231 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:41.231 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:41.231 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:41.488 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:41.488 CC lib/nvmf/ctrlr_discovery.o 00:02:41.488 CC lib/nvmf/ctrlr_bdev.o 00:02:41.488 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:41.488 CC lib/iscsi/conn.o 00:02:41.488 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:41.488 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:41.488 CC lib/vhost/vhost.o 00:02:41.488 CC lib/vhost/vhost_rpc.o 00:02:41.746 CC lib/vhost/vhost_scsi.o 00:02:41.746 CC lib/nvmf/subsystem.o 00:02:41.746 CC lib/vhost/vhost_blk.o 00:02:41.746 CC lib/vhost/rte_vhost_user.o 00:02:41.746 CC lib/nvmf/nvmf.o 00:02:42.004 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:42.004 CC lib/iscsi/init_grp.o 00:02:42.004 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:42.004 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:42.004 CC lib/nvmf/nvmf_rpc.o 00:02:42.004 CC lib/iscsi/iscsi.o 00:02:42.263 CC lib/iscsi/md5.o 00:02:42.263 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:42.263 CC lib/nvmf/transport.o 00:02:42.263 CC lib/nvmf/tcp.o 00:02:42.521 CC lib/ftl/utils/ftl_conf.o 00:02:42.521 CC lib/ftl/utils/ftl_md.o 00:02:42.521 CC lib/ftl/utils/ftl_mempool.o 00:02:42.521 CC lib/ftl/utils/ftl_bitmap.o 00:02:42.521 CC lib/ftl/utils/ftl_property.o 00:02:42.521 LIB libspdk_vhost.a 00:02:42.521 CC lib/nvmf/rdma.o 00:02:42.521 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:42.521 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:42.521 SO libspdk_vhost.so.7.1 00:02:42.779 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:42.779 SYMLINK libspdk_vhost.so 00:02:42.779 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:42.779 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:42.779 CC lib/iscsi/param.o 00:02:42.779 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:42.779 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:42.779 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:42.779 CC lib/iscsi/portal_grp.o 00:02:42.779 CC lib/iscsi/tgt_node.o 00:02:43.038 CC lib/iscsi/iscsi_subsystem.o 
00:02:43.038 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:43.038 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:43.038 CC lib/ftl/base/ftl_base_dev.o 00:02:43.038 CC lib/iscsi/iscsi_rpc.o 00:02:43.038 CC lib/ftl/base/ftl_base_bdev.o 00:02:43.038 CC lib/iscsi/task.o 00:02:43.038 CC lib/ftl/ftl_trace.o 00:02:43.296 LIB libspdk_ftl.a 00:02:43.296 LIB libspdk_iscsi.a 00:02:43.554 SO libspdk_iscsi.so.7.0 00:02:43.554 SO libspdk_ftl.so.8.0 00:02:43.554 SYMLINK libspdk_iscsi.so 00:02:43.812 SYMLINK libspdk_ftl.so 00:02:44.378 LIB libspdk_nvmf.a 00:02:44.378 SO libspdk_nvmf.so.17.0 00:02:44.378 SYMLINK libspdk_nvmf.so 00:02:44.635 CC module/env_dpdk/env_dpdk_rpc.o 00:02:44.635 CC module/sock/posix/posix.o 00:02:44.635 CC module/accel/error/accel_error.o 00:02:44.636 CC module/accel/dsa/accel_dsa.o 00:02:44.636 CC module/blob/bdev/blob_bdev.o 00:02:44.636 CC module/scheduler/gscheduler/gscheduler.o 00:02:44.636 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:44.636 CC module/accel/iaa/accel_iaa.o 00:02:44.636 CC module/accel/ioat/accel_ioat.o 00:02:44.636 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:44.894 LIB libspdk_env_dpdk_rpc.a 00:02:44.894 SO libspdk_env_dpdk_rpc.so.5.0 00:02:44.894 LIB libspdk_scheduler_gscheduler.a 00:02:44.894 LIB libspdk_scheduler_dpdk_governor.a 00:02:44.894 SO libspdk_scheduler_gscheduler.so.3.0 00:02:44.894 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:44.894 SYMLINK libspdk_env_dpdk_rpc.so 00:02:44.894 CC module/accel/ioat/accel_ioat_rpc.o 00:02:44.894 SYMLINK libspdk_scheduler_gscheduler.so 00:02:44.894 CC module/accel/error/accel_error_rpc.o 00:02:44.894 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:44.894 CC module/accel/iaa/accel_iaa_rpc.o 00:02:44.894 CC module/accel/dsa/accel_dsa_rpc.o 00:02:44.894 LIB libspdk_scheduler_dynamic.a 00:02:44.894 SO libspdk_scheduler_dynamic.so.3.0 00:02:44.894 LIB libspdk_blob_bdev.a 00:02:44.894 LIB libspdk_accel_ioat.a 00:02:44.894 SYMLINK libspdk_scheduler_dynamic.so 00:02:44.894 LIB libspdk_accel_iaa.a 00:02:44.894 LIB libspdk_accel_dsa.a 00:02:44.894 SO libspdk_blob_bdev.so.10.1 00:02:44.894 LIB libspdk_accel_error.a 00:02:44.894 SO libspdk_accel_ioat.so.5.0 00:02:44.894 SO libspdk_accel_iaa.so.2.0 00:02:44.894 SO libspdk_accel_dsa.so.4.0 00:02:44.894 SO libspdk_accel_error.so.1.0 00:02:45.152 SYMLINK libspdk_accel_ioat.so 00:02:45.152 SYMLINK libspdk_accel_iaa.so 00:02:45.152 SYMLINK libspdk_blob_bdev.so 00:02:45.152 SYMLINK libspdk_accel_dsa.so 00:02:45.152 SYMLINK libspdk_accel_error.so 00:02:45.152 CC module/bdev/delay/vbdev_delay.o 00:02:45.152 CC module/blobfs/bdev/blobfs_bdev.o 00:02:45.152 CC module/bdev/gpt/gpt.o 00:02:45.152 CC module/bdev/error/vbdev_error.o 00:02:45.152 CC module/bdev/lvol/vbdev_lvol.o 00:02:45.152 CC module/bdev/null/bdev_null.o 00:02:45.152 CC module/bdev/malloc/bdev_malloc.o 00:02:45.152 CC module/bdev/nvme/bdev_nvme.o 00:02:45.152 CC module/bdev/passthru/vbdev_passthru.o 00:02:45.443 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:45.443 CC module/bdev/gpt/vbdev_gpt.o 00:02:45.443 LIB libspdk_sock_posix.a 00:02:45.443 SO libspdk_sock_posix.so.5.0 00:02:45.443 CC module/bdev/error/vbdev_error_rpc.o 00:02:45.443 CC module/bdev/null/bdev_null_rpc.o 00:02:45.443 SYMLINK libspdk_sock_posix.so 00:02:45.443 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:45.443 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:45.443 LIB libspdk_blobfs_bdev.a 00:02:45.443 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:45.443 SO libspdk_blobfs_bdev.so.5.0 00:02:45.443 LIB libspdk_bdev_error.a 00:02:45.443 CC 
module/bdev/delay/vbdev_delay_rpc.o 00:02:45.443 SYMLINK libspdk_blobfs_bdev.so 00:02:45.443 CC module/bdev/nvme/nvme_rpc.o 00:02:45.443 SO libspdk_bdev_error.so.5.0 00:02:45.700 LIB libspdk_bdev_null.a 00:02:45.700 LIB libspdk_bdev_gpt.a 00:02:45.700 SO libspdk_bdev_null.so.5.0 00:02:45.700 LIB libspdk_bdev_passthru.a 00:02:45.700 SO libspdk_bdev_gpt.so.5.0 00:02:45.700 SO libspdk_bdev_passthru.so.5.0 00:02:45.700 LIB libspdk_bdev_malloc.a 00:02:45.700 SYMLINK libspdk_bdev_error.so 00:02:45.700 SYMLINK libspdk_bdev_gpt.so 00:02:45.700 SO libspdk_bdev_malloc.so.5.0 00:02:45.700 CC module/bdev/nvme/bdev_mdns_client.o 00:02:45.700 SYMLINK libspdk_bdev_null.so 00:02:45.700 CC module/bdev/nvme/vbdev_opal.o 00:02:45.700 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:45.700 LIB libspdk_bdev_delay.a 00:02:45.700 SYMLINK libspdk_bdev_passthru.so 00:02:45.700 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:45.700 SO libspdk_bdev_delay.so.5.0 00:02:45.700 SYMLINK libspdk_bdev_malloc.so 00:02:45.700 SYMLINK libspdk_bdev_delay.so 00:02:45.701 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:45.701 CC module/bdev/raid/bdev_raid.o 00:02:45.701 CC module/bdev/split/vbdev_split.o 00:02:45.959 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:45.959 CC module/bdev/xnvme/bdev_xnvme.o 00:02:45.959 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:02:45.959 CC module/bdev/split/vbdev_split_rpc.o 00:02:45.959 CC module/bdev/raid/bdev_raid_rpc.o 00:02:45.959 LIB libspdk_bdev_lvol.a 00:02:45.959 SO libspdk_bdev_lvol.so.5.0 00:02:45.959 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:45.959 SYMLINK libspdk_bdev_lvol.so 00:02:45.959 LIB libspdk_bdev_split.a 00:02:45.959 SO libspdk_bdev_split.so.5.0 00:02:45.959 CC module/bdev/aio/bdev_aio.o 00:02:45.959 CC module/bdev/aio/bdev_aio_rpc.o 00:02:45.959 CC module/bdev/ftl/bdev_ftl.o 00:02:45.959 LIB libspdk_bdev_xnvme.a 00:02:46.218 SO libspdk_bdev_xnvme.so.2.0 00:02:46.218 SYMLINK libspdk_bdev_split.so 00:02:46.218 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:46.218 CC module/bdev/iscsi/bdev_iscsi.o 00:02:46.218 SYMLINK libspdk_bdev_xnvme.so 00:02:46.218 LIB libspdk_bdev_zone_block.a 00:02:46.218 CC module/bdev/raid/bdev_raid_sb.o 00:02:46.218 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:46.218 SO libspdk_bdev_zone_block.so.5.0 00:02:46.218 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:46.218 SYMLINK libspdk_bdev_zone_block.so 00:02:46.218 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:46.218 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:46.218 LIB libspdk_bdev_ftl.a 00:02:46.218 CC module/bdev/raid/raid0.o 00:02:46.476 SO libspdk_bdev_ftl.so.5.0 00:02:46.476 LIB libspdk_bdev_aio.a 00:02:46.476 CC module/bdev/raid/raid1.o 00:02:46.476 SO libspdk_bdev_aio.so.5.0 00:02:46.476 SYMLINK libspdk_bdev_ftl.so 00:02:46.476 CC module/bdev/raid/concat.o 00:02:46.476 SYMLINK libspdk_bdev_aio.so 00:02:46.476 LIB libspdk_bdev_iscsi.a 00:02:46.476 SO libspdk_bdev_iscsi.so.5.0 00:02:46.476 SYMLINK libspdk_bdev_iscsi.so 00:02:46.476 LIB libspdk_bdev_raid.a 00:02:46.733 SO libspdk_bdev_raid.so.5.0 00:02:46.733 LIB libspdk_bdev_virtio.a 00:02:46.733 SYMLINK libspdk_bdev_raid.so 00:02:46.733 SO libspdk_bdev_virtio.so.5.0 00:02:46.733 SYMLINK libspdk_bdev_virtio.so 00:02:47.298 LIB libspdk_bdev_nvme.a 00:02:47.298 SO libspdk_bdev_nvme.so.6.0 00:02:47.298 SYMLINK libspdk_bdev_nvme.so 00:02:47.556 CC module/event/subsystems/scheduler/scheduler.o 00:02:47.556 CC module/event/subsystems/vmd/vmd.o 00:02:47.556 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:47.556 CC module/event/subsystems/sock/sock.o 
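All of these bdev and event-subsystem modules are linked as versioned shared objects (the SO/SYMLINK pairs above), so running a freshly built binary outside an installed prefix generally needs the loader pointed at the build output. Illustrative only; the output directory and example binary here are assumptions, not something this log shows:

  LD_LIBRARY_PATH=/home/vagrant/spdk_repo/spdk/build/lib \
      ./build/examples/hello_bdev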
00:02:47.556 CC module/event/subsystems/iobuf/iobuf.o 00:02:47.556 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:47.556 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:47.814 LIB libspdk_event_scheduler.a 00:02:47.814 LIB libspdk_event_sock.a 00:02:47.814 LIB libspdk_event_vhost_blk.a 00:02:47.814 LIB libspdk_event_vmd.a 00:02:47.814 SO libspdk_event_scheduler.so.3.0 00:02:47.815 LIB libspdk_event_iobuf.a 00:02:47.815 SO libspdk_event_vhost_blk.so.2.0 00:02:47.815 SO libspdk_event_vmd.so.5.0 00:02:47.815 SO libspdk_event_sock.so.4.0 00:02:47.815 SO libspdk_event_iobuf.so.2.0 00:02:47.815 SYMLINK libspdk_event_scheduler.so 00:02:47.815 SYMLINK libspdk_event_vhost_blk.so 00:02:47.815 SYMLINK libspdk_event_vmd.so 00:02:47.815 SYMLINK libspdk_event_sock.so 00:02:47.815 SYMLINK libspdk_event_iobuf.so 00:02:47.815 CC module/event/subsystems/accel/accel.o 00:02:48.074 LIB libspdk_event_accel.a 00:02:48.074 SO libspdk_event_accel.so.5.0 00:02:48.074 SYMLINK libspdk_event_accel.so 00:02:48.334 CC module/event/subsystems/bdev/bdev.o 00:02:48.334 LIB libspdk_event_bdev.a 00:02:48.334 SO libspdk_event_bdev.so.5.0 00:02:48.592 SYMLINK libspdk_event_bdev.so 00:02:48.592 CC module/event/subsystems/scsi/scsi.o 00:02:48.592 CC module/event/subsystems/ublk/ublk.o 00:02:48.592 CC module/event/subsystems/nbd/nbd.o 00:02:48.592 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:48.592 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:48.592 LIB libspdk_event_ublk.a 00:02:48.592 LIB libspdk_event_nbd.a 00:02:48.592 LIB libspdk_event_scsi.a 00:02:48.592 SO libspdk_event_ublk.so.2.0 00:02:48.850 SO libspdk_event_nbd.so.5.0 00:02:48.850 SO libspdk_event_scsi.so.5.0 00:02:48.850 SYMLINK libspdk_event_ublk.so 00:02:48.850 SYMLINK libspdk_event_nbd.so 00:02:48.850 SYMLINK libspdk_event_scsi.so 00:02:48.850 LIB libspdk_event_nvmf.a 00:02:48.850 SO libspdk_event_nvmf.so.5.0 00:02:48.850 SYMLINK libspdk_event_nvmf.so 00:02:48.850 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:48.850 CC module/event/subsystems/iscsi/iscsi.o 00:02:49.109 LIB libspdk_event_vhost_scsi.a 00:02:49.109 SO libspdk_event_vhost_scsi.so.2.0 00:02:49.109 LIB libspdk_event_iscsi.a 00:02:49.109 SO libspdk_event_iscsi.so.5.0 00:02:49.109 SYMLINK libspdk_event_vhost_scsi.so 00:02:49.109 SYMLINK libspdk_event_iscsi.so 00:02:49.109 SO libspdk.so.5.0 00:02:49.109 SYMLINK libspdk.so 00:02:49.366 CC app/trace_record/trace_record.o 00:02:49.366 CXX app/trace/trace.o 00:02:49.366 CC app/nvmf_tgt/nvmf_main.o 00:02:49.366 CC examples/nvme/hello_world/hello_world.o 00:02:49.367 CC examples/ioat/perf/perf.o 00:02:49.367 CC examples/accel/perf/accel_perf.o 00:02:49.367 CC examples/sock/hello_world/hello_sock.o 00:02:49.367 CC examples/bdev/hello_world/hello_bdev.o 00:02:49.367 CC test/accel/dif/dif.o 00:02:49.367 CC examples/blob/hello_world/hello_blob.o 00:02:49.367 LINK nvmf_tgt 00:02:49.624 LINK spdk_trace_record 00:02:49.624 LINK hello_world 00:02:49.624 LINK hello_bdev 00:02:49.624 LINK ioat_perf 00:02:49.624 LINK hello_sock 00:02:49.624 LINK hello_blob 00:02:49.624 LINK spdk_trace 00:02:49.624 CC app/iscsi_tgt/iscsi_tgt.o 00:02:49.624 CC examples/blob/cli/blobcli.o 00:02:49.624 CC examples/nvme/reconnect/reconnect.o 00:02:49.624 CC examples/ioat/verify/verify.o 00:02:49.624 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:49.882 CC examples/bdev/bdevperf/bdevperf.o 00:02:49.882 LINK dif 00:02:49.882 LINK accel_perf 00:02:49.882 LINK iscsi_tgt 00:02:49.882 CC examples/vmd/lsvmd/lsvmd.o 00:02:49.882 CC examples/nvmf/nvmf/nvmf.o 00:02:49.882 
LINK verify 00:02:49.882 LINK lsvmd 00:02:49.882 LINK reconnect 00:02:50.141 CC examples/util/zipf/zipf.o 00:02:50.141 CC test/app/bdev_svc/bdev_svc.o 00:02:50.141 CC app/spdk_tgt/spdk_tgt.o 00:02:50.141 LINK nvme_manage 00:02:50.141 CC examples/vmd/led/led.o 00:02:50.141 CC test/app/histogram_perf/histogram_perf.o 00:02:50.141 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:50.141 LINK zipf 00:02:50.141 LINK nvmf 00:02:50.141 LINK blobcli 00:02:50.141 LINK bdev_svc 00:02:50.141 LINK led 00:02:50.141 LINK spdk_tgt 00:02:50.141 LINK histogram_perf 00:02:50.399 CC examples/nvme/arbitration/arbitration.o 00:02:50.399 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:50.399 CC examples/nvme/hotplug/hotplug.o 00:02:50.399 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:50.399 CC examples/nvme/abort/abort.o 00:02:50.399 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:50.399 CC app/spdk_lspci/spdk_lspci.o 00:02:50.399 LINK bdevperf 00:02:50.399 LINK hotplug 00:02:50.399 CC examples/thread/thread/thread_ex.o 00:02:50.399 LINK nvme_fuzz 00:02:50.657 LINK cmb_copy 00:02:50.657 LINK pmr_persistence 00:02:50.657 LINK spdk_lspci 00:02:50.657 LINK arbitration 00:02:50.657 CC app/spdk_nvme_perf/perf.o 00:02:50.657 CC examples/idxd/perf/perf.o 00:02:50.657 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:50.657 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:50.657 LINK thread 00:02:50.657 CC app/spdk_nvme_identify/identify.o 00:02:50.657 LINK abort 00:02:50.657 CC test/bdev/bdevio/bdevio.o 00:02:50.657 CC app/spdk_nvme_discover/discovery_aer.o 00:02:50.916 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:50.916 LINK interrupt_tgt 00:02:50.916 CC app/spdk_top/spdk_top.o 00:02:50.916 LINK spdk_nvme_discover 00:02:50.916 CC app/vhost/vhost.o 00:02:50.916 CC app/spdk_dd/spdk_dd.o 00:02:51.174 LINK idxd_perf 00:02:51.174 LINK bdevio 00:02:51.174 CC test/app/jsoncat/jsoncat.o 00:02:51.174 LINK vhost 00:02:51.174 CC test/app/stub/stub.o 00:02:51.174 LINK vhost_fuzz 00:02:51.174 LINK jsoncat 00:02:51.174 CC test/blobfs/mkfs/mkfs.o 00:02:51.433 LINK spdk_dd 00:02:51.433 CC app/fio/nvme/fio_plugin.o 00:02:51.433 LINK stub 00:02:51.433 LINK spdk_nvme_perf 00:02:51.433 TEST_HEADER include/spdk/accel.h 00:02:51.433 TEST_HEADER include/spdk/accel_module.h 00:02:51.433 TEST_HEADER include/spdk/assert.h 00:02:51.433 TEST_HEADER include/spdk/barrier.h 00:02:51.433 TEST_HEADER include/spdk/base64.h 00:02:51.433 TEST_HEADER include/spdk/bdev.h 00:02:51.433 TEST_HEADER include/spdk/bdev_module.h 00:02:51.433 TEST_HEADER include/spdk/bdev_zone.h 00:02:51.433 TEST_HEADER include/spdk/bit_array.h 00:02:51.433 TEST_HEADER include/spdk/bit_pool.h 00:02:51.433 TEST_HEADER include/spdk/blob_bdev.h 00:02:51.433 CC app/fio/bdev/fio_plugin.o 00:02:51.433 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:51.433 TEST_HEADER include/spdk/blobfs.h 00:02:51.433 TEST_HEADER include/spdk/blob.h 00:02:51.433 TEST_HEADER include/spdk/conf.h 00:02:51.433 TEST_HEADER include/spdk/config.h 00:02:51.433 TEST_HEADER include/spdk/cpuset.h 00:02:51.433 TEST_HEADER include/spdk/crc16.h 00:02:51.433 TEST_HEADER include/spdk/crc32.h 00:02:51.433 TEST_HEADER include/spdk/crc64.h 00:02:51.433 TEST_HEADER include/spdk/dif.h 00:02:51.433 TEST_HEADER include/spdk/dma.h 00:02:51.433 TEST_HEADER include/spdk/endian.h 00:02:51.433 TEST_HEADER include/spdk/env_dpdk.h 00:02:51.433 LINK mkfs 00:02:51.433 TEST_HEADER include/spdk/env.h 00:02:51.433 TEST_HEADER include/spdk/event.h 00:02:51.433 TEST_HEADER include/spdk/fd_group.h 00:02:51.433 TEST_HEADER include/spdk/fd.h 
00:02:51.433 TEST_HEADER include/spdk/file.h 00:02:51.433 TEST_HEADER include/spdk/ftl.h 00:02:51.433 TEST_HEADER include/spdk/gpt_spec.h 00:02:51.433 TEST_HEADER include/spdk/hexlify.h 00:02:51.433 TEST_HEADER include/spdk/histogram_data.h 00:02:51.433 TEST_HEADER include/spdk/idxd.h 00:02:51.433 TEST_HEADER include/spdk/idxd_spec.h 00:02:51.433 TEST_HEADER include/spdk/init.h 00:02:51.433 TEST_HEADER include/spdk/ioat.h 00:02:51.433 TEST_HEADER include/spdk/ioat_spec.h 00:02:51.433 TEST_HEADER include/spdk/iscsi_spec.h 00:02:51.433 TEST_HEADER include/spdk/json.h 00:02:51.433 TEST_HEADER include/spdk/jsonrpc.h 00:02:51.433 TEST_HEADER include/spdk/likely.h 00:02:51.433 TEST_HEADER include/spdk/log.h 00:02:51.433 TEST_HEADER include/spdk/lvol.h 00:02:51.433 TEST_HEADER include/spdk/memory.h 00:02:51.433 TEST_HEADER include/spdk/mmio.h 00:02:51.433 TEST_HEADER include/spdk/nbd.h 00:02:51.433 TEST_HEADER include/spdk/notify.h 00:02:51.433 TEST_HEADER include/spdk/nvme.h 00:02:51.433 TEST_HEADER include/spdk/nvme_intel.h 00:02:51.433 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:51.433 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:51.433 TEST_HEADER include/spdk/nvme_spec.h 00:02:51.433 TEST_HEADER include/spdk/nvme_zns.h 00:02:51.433 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:51.433 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:51.433 TEST_HEADER include/spdk/nvmf.h 00:02:51.433 TEST_HEADER include/spdk/nvmf_spec.h 00:02:51.433 TEST_HEADER include/spdk/nvmf_transport.h 00:02:51.433 TEST_HEADER include/spdk/opal.h 00:02:51.433 TEST_HEADER include/spdk/opal_spec.h 00:02:51.433 TEST_HEADER include/spdk/pci_ids.h 00:02:51.433 TEST_HEADER include/spdk/pipe.h 00:02:51.433 TEST_HEADER include/spdk/queue.h 00:02:51.433 TEST_HEADER include/spdk/reduce.h 00:02:51.433 TEST_HEADER include/spdk/rpc.h 00:02:51.433 TEST_HEADER include/spdk/scheduler.h 00:02:51.433 TEST_HEADER include/spdk/scsi.h 00:02:51.433 TEST_HEADER include/spdk/scsi_spec.h 00:02:51.433 TEST_HEADER include/spdk/sock.h 00:02:51.433 TEST_HEADER include/spdk/stdinc.h 00:02:51.433 TEST_HEADER include/spdk/string.h 00:02:51.433 TEST_HEADER include/spdk/thread.h 00:02:51.433 TEST_HEADER include/spdk/trace.h 00:02:51.433 TEST_HEADER include/spdk/trace_parser.h 00:02:51.433 TEST_HEADER include/spdk/tree.h 00:02:51.433 TEST_HEADER include/spdk/ublk.h 00:02:51.433 TEST_HEADER include/spdk/util.h 00:02:51.433 TEST_HEADER include/spdk/uuid.h 00:02:51.433 TEST_HEADER include/spdk/version.h 00:02:51.433 LINK spdk_nvme_identify 00:02:51.433 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:51.433 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:51.433 TEST_HEADER include/spdk/vhost.h 00:02:51.433 TEST_HEADER include/spdk/vmd.h 00:02:51.433 TEST_HEADER include/spdk/xor.h 00:02:51.433 TEST_HEADER include/spdk/zipf.h 00:02:51.433 CXX test/cpp_headers/accel.o 00:02:51.433 CC test/dma/test_dma/test_dma.o 00:02:51.693 CC test/event/event_perf/event_perf.o 00:02:51.693 CC test/env/mem_callbacks/mem_callbacks.o 00:02:51.693 LINK iscsi_fuzz 00:02:51.693 CXX test/cpp_headers/accel_module.o 00:02:51.693 LINK event_perf 00:02:51.693 CC test/lvol/esnap/esnap.o 00:02:51.693 CC test/nvme/aer/aer.o 00:02:51.693 LINK spdk_top 00:02:51.693 CXX test/cpp_headers/assert.o 00:02:51.693 LINK spdk_bdev 00:02:51.950 CC test/nvme/reset/reset.o 00:02:51.950 CXX test/cpp_headers/barrier.o 00:02:51.950 CC test/event/reactor/reactor.o 00:02:51.950 LINK spdk_nvme 00:02:51.950 LINK test_dma 00:02:51.951 CC test/nvme/sgl/sgl.o 00:02:51.951 CC test/nvme/e2edp/nvme_dp.o 00:02:51.951 CXX 
test/cpp_headers/base64.o 00:02:51.951 LINK reactor 00:02:51.951 CC test/nvme/overhead/overhead.o 00:02:51.951 LINK aer 00:02:51.951 CXX test/cpp_headers/bdev.o 00:02:51.951 LINK reset 00:02:52.208 LINK mem_callbacks 00:02:52.208 CC test/event/reactor_perf/reactor_perf.o 00:02:52.208 LINK nvme_dp 00:02:52.208 CC test/nvme/err_injection/err_injection.o 00:02:52.208 CC test/rpc_client/rpc_client_test.o 00:02:52.208 CXX test/cpp_headers/bdev_module.o 00:02:52.208 LINK sgl 00:02:52.208 LINK overhead 00:02:52.209 LINK reactor_perf 00:02:52.209 CC test/thread/poller_perf/poller_perf.o 00:02:52.209 CC test/env/vtophys/vtophys.o 00:02:52.209 CC test/nvme/startup/startup.o 00:02:52.209 LINK err_injection 00:02:52.209 CXX test/cpp_headers/bdev_zone.o 00:02:52.209 LINK rpc_client_test 00:02:52.209 CXX test/cpp_headers/bit_array.o 00:02:52.209 LINK vtophys 00:02:52.466 CC test/nvme/reserve/reserve.o 00:02:52.466 LINK poller_perf 00:02:52.466 CXX test/cpp_headers/bit_pool.o 00:02:52.466 CC test/event/app_repeat/app_repeat.o 00:02:52.466 CXX test/cpp_headers/blob_bdev.o 00:02:52.466 LINK startup 00:02:52.466 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:52.466 CC test/event/scheduler/scheduler.o 00:02:52.466 CXX test/cpp_headers/blobfs_bdev.o 00:02:52.466 CC test/env/memory/memory_ut.o 00:02:52.466 LINK app_repeat 00:02:52.466 CXX test/cpp_headers/blobfs.o 00:02:52.466 CXX test/cpp_headers/blob.o 00:02:52.466 LINK reserve 00:02:52.466 LINK env_dpdk_post_init 00:02:52.466 CC test/env/pci/pci_ut.o 00:02:52.724 LINK scheduler 00:02:52.724 CXX test/cpp_headers/conf.o 00:02:52.724 CC test/nvme/simple_copy/simple_copy.o 00:02:52.724 CC test/nvme/connect_stress/connect_stress.o 00:02:52.724 CC test/nvme/boot_partition/boot_partition.o 00:02:52.724 CC test/nvme/compliance/nvme_compliance.o 00:02:52.724 CXX test/cpp_headers/config.o 00:02:52.724 CXX test/cpp_headers/cpuset.o 00:02:52.724 CC test/nvme/fused_ordering/fused_ordering.o 00:02:52.724 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:52.724 LINK boot_partition 00:02:52.724 CXX test/cpp_headers/crc16.o 00:02:52.724 LINK connect_stress 00:02:52.982 LINK simple_copy 00:02:52.982 LINK fused_ordering 00:02:52.982 LINK doorbell_aers 00:02:52.982 CXX test/cpp_headers/crc32.o 00:02:52.982 CC test/nvme/fdp/fdp.o 00:02:52.982 LINK pci_ut 00:02:52.982 CXX test/cpp_headers/crc64.o 00:02:52.982 CC test/nvme/cuse/cuse.o 00:02:52.982 LINK nvme_compliance 00:02:52.982 CXX test/cpp_headers/dif.o 00:02:52.982 CXX test/cpp_headers/dma.o 00:02:52.982 CXX test/cpp_headers/endian.o 00:02:52.982 LINK memory_ut 00:02:52.982 CXX test/cpp_headers/env_dpdk.o 00:02:52.982 CXX test/cpp_headers/env.o 00:02:52.982 CXX test/cpp_headers/event.o 00:02:52.982 CXX test/cpp_headers/fd_group.o 00:02:53.239 CXX test/cpp_headers/fd.o 00:02:53.239 CXX test/cpp_headers/file.o 00:02:53.239 LINK fdp 00:02:53.239 CXX test/cpp_headers/ftl.o 00:02:53.239 CXX test/cpp_headers/gpt_spec.o 00:02:53.239 CXX test/cpp_headers/hexlify.o 00:02:53.239 CXX test/cpp_headers/histogram_data.o 00:02:53.239 CXX test/cpp_headers/idxd.o 00:02:53.239 CXX test/cpp_headers/idxd_spec.o 00:02:53.239 CXX test/cpp_headers/init.o 00:02:53.239 CXX test/cpp_headers/ioat.o 00:02:53.239 CXX test/cpp_headers/ioat_spec.o 00:02:53.239 CXX test/cpp_headers/iscsi_spec.o 00:02:53.239 CXX test/cpp_headers/json.o 00:02:53.239 CXX test/cpp_headers/jsonrpc.o 00:02:53.239 CXX test/cpp_headers/likely.o 00:02:53.239 CXX test/cpp_headers/log.o 00:02:53.497 CXX test/cpp_headers/lvol.o 00:02:53.497 CXX test/cpp_headers/memory.o 
00:02:53.497 CXX test/cpp_headers/mmio.o 00:02:53.497 CXX test/cpp_headers/nbd.o 00:02:53.497 CXX test/cpp_headers/notify.o 00:02:53.497 CXX test/cpp_headers/nvme.o 00:02:53.497 CXX test/cpp_headers/nvme_intel.o 00:02:53.497 CXX test/cpp_headers/nvme_ocssd.o 00:02:53.497 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:53.497 CXX test/cpp_headers/nvme_spec.o 00:02:53.497 CXX test/cpp_headers/nvme_zns.o 00:02:53.497 CXX test/cpp_headers/nvmf_cmd.o 00:02:53.497 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:53.497 CXX test/cpp_headers/nvmf.o 00:02:53.497 CXX test/cpp_headers/nvmf_spec.o 00:02:53.497 CXX test/cpp_headers/nvmf_transport.o 00:02:53.497 CXX test/cpp_headers/opal.o 00:02:53.497 CXX test/cpp_headers/opal_spec.o 00:02:53.756 CXX test/cpp_headers/pci_ids.o 00:02:53.756 CXX test/cpp_headers/pipe.o 00:02:53.756 CXX test/cpp_headers/queue.o 00:02:53.756 CXX test/cpp_headers/reduce.o 00:02:53.756 CXX test/cpp_headers/rpc.o 00:02:53.756 CXX test/cpp_headers/scheduler.o 00:02:53.756 CXX test/cpp_headers/scsi.o 00:02:53.756 CXX test/cpp_headers/scsi_spec.o 00:02:53.756 CXX test/cpp_headers/sock.o 00:02:53.756 CXX test/cpp_headers/stdinc.o 00:02:53.756 CXX test/cpp_headers/string.o 00:02:53.756 CXX test/cpp_headers/thread.o 00:02:53.756 CXX test/cpp_headers/trace.o 00:02:53.756 CXX test/cpp_headers/trace_parser.o 00:02:53.756 CXX test/cpp_headers/tree.o 00:02:53.756 CXX test/cpp_headers/ublk.o 00:02:53.756 CXX test/cpp_headers/util.o 00:02:53.756 CXX test/cpp_headers/uuid.o 00:02:53.756 CXX test/cpp_headers/version.o 00:02:54.015 CXX test/cpp_headers/vfio_user_pci.o 00:02:54.015 LINK cuse 00:02:54.015 CXX test/cpp_headers/vfio_user_spec.o 00:02:54.015 CXX test/cpp_headers/vhost.o 00:02:54.015 CXX test/cpp_headers/vmd.o 00:02:54.015 CXX test/cpp_headers/xor.o 00:02:54.015 CXX test/cpp_headers/zipf.o 00:02:55.924 LINK esnap 00:02:55.924 00:02:55.924 real 0m47.386s 00:02:55.924 user 4m38.140s 00:02:55.924 sys 0m56.938s 00:02:55.924 13:07:10 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:55.924 13:07:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:55.924 ************************************ 00:02:55.924 END TEST make 00:02:55.924 ************************************ 00:02:55.924 13:07:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:55.924 13:07:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:55.924 13:07:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:55.924 13:07:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:55.924 13:07:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:55.924 13:07:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:55.924 13:07:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:02:55.924 13:07:10 -- scripts/common.sh@335 -- # IFS=.-: 00:02:55.924 13:07:10 -- scripts/common.sh@335 -- # read -ra ver1 00:02:55.924 13:07:10 -- scripts/common.sh@336 -- # IFS=.-: 00:02:55.924 13:07:10 -- scripts/common.sh@336 -- # read -ra ver2 00:02:55.924 13:07:10 -- scripts/common.sh@337 -- # local 'op=<' 00:02:55.924 13:07:10 -- scripts/common.sh@339 -- # ver1_l=2 00:02:55.924 13:07:10 -- scripts/common.sh@340 -- # ver2_l=1 00:02:55.924 13:07:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:55.924 13:07:10 -- scripts/common.sh@343 -- # case "$op" in 00:02:55.924 13:07:10 -- scripts/common.sh@344 -- # : 1 00:02:55.924 13:07:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:55.924 13:07:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:55.924 13:07:10 -- scripts/common.sh@364 -- # decimal 1 00:02:55.924 13:07:10 -- scripts/common.sh@352 -- # local d=1 00:02:55.924 13:07:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:55.924 13:07:10 -- scripts/common.sh@354 -- # echo 1 00:02:55.924 13:07:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:55.924 13:07:10 -- scripts/common.sh@365 -- # decimal 2 00:02:55.924 13:07:10 -- scripts/common.sh@352 -- # local d=2 00:02:55.924 13:07:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:55.924 13:07:10 -- scripts/common.sh@354 -- # echo 2 00:02:55.924 13:07:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:55.924 13:07:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:55.924 13:07:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:55.924 13:07:10 -- scripts/common.sh@367 -- # return 0 00:02:55.924 13:07:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:55.924 13:07:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:55.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:55.924 --rc genhtml_branch_coverage=1 00:02:55.924 --rc genhtml_function_coverage=1 00:02:55.924 --rc genhtml_legend=1 00:02:55.924 --rc geninfo_all_blocks=1 00:02:55.924 --rc geninfo_unexecuted_blocks=1 00:02:55.924 00:02:55.924 ' 00:02:55.924 13:07:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:55.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:55.924 --rc genhtml_branch_coverage=1 00:02:55.924 --rc genhtml_function_coverage=1 00:02:55.924 --rc genhtml_legend=1 00:02:55.924 --rc geninfo_all_blocks=1 00:02:55.924 --rc geninfo_unexecuted_blocks=1 00:02:55.924 00:02:55.924 ' 00:02:55.924 13:07:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:55.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:55.924 --rc genhtml_branch_coverage=1 00:02:55.924 --rc genhtml_function_coverage=1 00:02:55.924 --rc genhtml_legend=1 00:02:55.924 --rc geninfo_all_blocks=1 00:02:55.924 --rc geninfo_unexecuted_blocks=1 00:02:55.924 00:02:55.924 ' 00:02:55.924 13:07:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:55.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:55.924 --rc genhtml_branch_coverage=1 00:02:55.924 --rc genhtml_function_coverage=1 00:02:55.924 --rc genhtml_legend=1 00:02:55.924 --rc geninfo_all_blocks=1 00:02:55.924 --rc geninfo_unexecuted_blocks=1 00:02:55.924 00:02:55.924 ' 00:02:55.924 13:07:10 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:55.924 13:07:10 -- nvmf/common.sh@7 -- # uname -s 00:02:55.924 13:07:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:55.924 13:07:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:55.924 13:07:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:55.924 13:07:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:56.185 13:07:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:56.185 13:07:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:56.185 13:07:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:56.185 13:07:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:56.185 13:07:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:56.185 13:07:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:56.185 13:07:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee652fb3-397f-4785-b30f-e769daa7efa1 00:02:56.185 
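The cmp_versions trace above is SPDK's shell-only version check: it decides whether the installed lcov (1.15 here) is older than 2, which selects the legacy --rc lcov_branch_coverage / --rc lcov_function_coverage flag spelling used later in the run. A minimal standalone bash sketch of that comparison, with names taken from the trace (treating a missing version component as 0 is an assumption, not something the trace confirms):

    # Split versions on ".-:" and compare component-wise, as the trace does.
    cmp_versions() {
        local ver1 ver2 v op=$2
        IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}  # assumption: absent parts count as 0
            (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
            (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '<=' || $op == '>=' || $op == '==' ]]  # all components equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo "use old lcov flag spelling"   # succeeds for the versions traced here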
13:07:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=ee652fb3-397f-4785-b30f-e769daa7efa1 00:02:56.185 13:07:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:56.185 13:07:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:56.185 13:07:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:56.185 13:07:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:56.185 13:07:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:56.185 13:07:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:56.185 13:07:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:56.185 13:07:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.185 13:07:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.185 13:07:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.185 13:07:10 -- paths/export.sh@5 -- # export PATH 00:02:56.185 13:07:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.185 13:07:10 -- nvmf/common.sh@46 -- # : 0 00:02:56.185 13:07:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:56.185 13:07:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:56.185 13:07:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:56.185 13:07:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:56.185 13:07:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:56.185 13:07:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:56.185 13:07:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:56.185 13:07:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:56.185 13:07:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:56.185 13:07:10 -- spdk/autotest.sh@32 -- # uname -s 00:02:56.185 13:07:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:56.185 13:07:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:56.185 13:07:10 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:56.185 13:07:10 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:56.185 13:07:10 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:56.185 13:07:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:56.185 13:07:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:56.185 13:07:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:56.185 13:07:10 -- spdk/autotest.sh@48 
-- # udevadm_pid=48146 00:02:56.185 13:07:10 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:02:56.185 13:07:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:56.185 13:07:10 -- spdk/autotest.sh@54 -- # echo 48176 00:02:56.185 13:07:10 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:56.185 13:07:10 -- spdk/autotest.sh@56 -- # echo 48180 00:02:56.185 13:07:10 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:56.185 13:07:10 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:02:56.185 13:07:10 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:56.185 13:07:10 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:56.185 13:07:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:56.185 13:07:10 -- common/autotest_common.sh@10 -- # set +x 00:02:56.185 13:07:10 -- spdk/autotest.sh@70 -- # create_test_list 00:02:56.185 13:07:10 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:56.185 13:07:10 -- common/autotest_common.sh@10 -- # set +x 00:02:56.185 13:07:10 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:56.185 13:07:10 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:56.185 13:07:10 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:02:56.185 13:07:10 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:56.185 13:07:10 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:02:56.185 13:07:10 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:56.185 13:07:10 -- common/autotest_common.sh@1450 -- # uname 00:02:56.185 13:07:10 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:02:56.185 13:07:10 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:56.185 13:07:10 -- common/autotest_common.sh@1470 -- # uname 00:02:56.185 13:07:10 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:02:56.185 13:07:10 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:02:56.185 13:07:10 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:56.185 lcov: LCOV version 1.15 00:02:56.185 13:07:10 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:04.315 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:04.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:04.315 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:04.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:04.315 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:04.315 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:26.298 13:07:37 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:03:26.298 13:07:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:26.298 13:07:37 -- common/autotest_common.sh@10 -- # set +x 00:03:26.298 13:07:37 -- spdk/autotest.sh@89 -- # rm -f 00:03:26.298 13:07:37 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:26.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:26.298 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:03:26.298 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:03:26.298 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:03:26.298 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:03:26.298 13:07:38 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:03:26.298 13:07:38 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:26.298 13:07:38 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:26.298 13:07:38 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:26.298 13:07:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:26.298 13:07:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:26.298 13:07:38 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:26.298 13:07:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:26.298 13:07:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:26.298 13:07:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:26.298 13:07:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:26.298 13:07:38 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:26.298 13:07:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:26.298 13:07:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:26.298 13:07:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:26.298 13:07:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:03:26.298 13:07:38 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:03:26.298 13:07:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:26.298 13:07:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:26.298 13:07:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:26.298 13:07:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n2 00:03:26.298 13:07:38 -- common/autotest_common.sh@1657 -- # local device=nvme2n2 00:03:26.298 13:07:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:26.298 13:07:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:26.298 13:07:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:26.298 13:07:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n3 00:03:26.298 13:07:38 -- common/autotest_common.sh@1657 -- # local device=nvme2n3 00:03:26.298 13:07:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:26.298 13:07:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:26.298 13:07:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:26.298 13:07:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:03:26.298 13:07:38 -- 
common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:03:26.298 13:07:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:26.298 13:07:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:26.298 13:07:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:26.298 13:07:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:03:26.299 13:07:38 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:03:26.299 13:07:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:26.299 13:07:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:26.299 13:07:38 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:03:26.299 13:07:38 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme2n2 /dev/nvme2n3 /dev/nvme3n1 00:03:26.299 13:07:38 -- spdk/autotest.sh@108 -- # grep -v p 00:03:26.299 13:07:38 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:26.299 13:07:38 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:26.299 13:07:38 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:03:26.299 13:07:38 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:26.299 13:07:38 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:26.299 No valid GPT data, bailing 00:03:26.299 13:07:38 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:26.299 13:07:38 -- scripts/common.sh@393 -- # pt= 00:03:26.299 13:07:38 -- scripts/common.sh@394 -- # return 1 00:03:26.299 13:07:38 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:26.299 1+0 records in 00:03:26.299 1+0 records out 00:03:26.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0296676 s, 35.3 MB/s 00:03:26.299 13:07:38 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:26.299 13:07:38 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:26.299 13:07:38 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:03:26.299 13:07:38 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:03:26.299 13:07:38 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:26.299 No valid GPT data, bailing 00:03:26.299 13:07:38 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:26.299 13:07:38 -- scripts/common.sh@393 -- # pt= 00:03:26.299 13:07:38 -- scripts/common.sh@394 -- # return 1 00:03:26.299 13:07:38 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:26.299 1+0 records in 00:03:26.299 1+0 records out 00:03:26.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00618137 s, 170 MB/s 00:03:26.299 13:07:38 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:26.299 13:07:38 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:26.299 13:07:38 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n1 00:03:26.299 13:07:38 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:03:26.299 13:07:38 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:26.299 No valid GPT data, bailing 00:03:26.299 13:07:38 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:26.299 13:07:38 -- scripts/common.sh@393 -- # pt= 00:03:26.299 13:07:38 -- scripts/common.sh@394 -- # return 1 00:03:26.299 13:07:38 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:26.299 1+0 
records in 00:03:26.299 1+0 records out 00:03:26.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00605286 s, 173 MB/s 00:03:26.299 13:07:38 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:26.299 13:07:38 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:26.299 13:07:38 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n2 00:03:26.299 13:07:38 -- scripts/common.sh@380 -- # local block=/dev/nvme2n2 pt 00:03:26.299 13:07:38 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:26.299 No valid GPT data, bailing 00:03:26.299 13:07:38 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:26.299 13:07:38 -- scripts/common.sh@393 -- # pt= 00:03:26.299 13:07:38 -- scripts/common.sh@394 -- # return 1 00:03:26.299 13:07:38 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:26.299 1+0 records in 00:03:26.299 1+0 records out 00:03:26.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00566359 s, 185 MB/s 00:03:26.299 13:07:38 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:26.299 13:07:38 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:26.299 13:07:38 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n3 00:03:26.299 13:07:38 -- scripts/common.sh@380 -- # local block=/dev/nvme2n3 pt 00:03:26.299 13:07:38 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:26.299 No valid GPT data, bailing 00:03:26.299 13:07:38 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:26.299 13:07:38 -- scripts/common.sh@393 -- # pt= 00:03:26.299 13:07:38 -- scripts/common.sh@394 -- # return 1 00:03:26.299 13:07:38 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:26.299 1+0 records in 00:03:26.299 1+0 records out 00:03:26.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00569506 s, 184 MB/s 00:03:26.299 13:07:38 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:26.299 13:07:38 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:26.299 13:07:38 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme3n1 00:03:26.299 13:07:38 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:03:26.299 13:07:38 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:26.299 No valid GPT data, bailing 00:03:26.299 13:07:38 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:26.299 13:07:38 -- scripts/common.sh@393 -- # pt= 00:03:26.299 13:07:38 -- scripts/common.sh@394 -- # return 1 00:03:26.299 13:07:38 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:26.299 1+0 records in 00:03:26.299 1+0 records out 00:03:26.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00587379 s, 179 MB/s 00:03:26.299 13:07:38 -- spdk/autotest.sh@116 -- # sync 00:03:26.299 13:07:39 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:26.299 13:07:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:26.299 13:07:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:26.561 13:07:40 -- spdk/autotest.sh@122 -- # uname -s 00:03:26.561 13:07:40 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:03:26.561 13:07:40 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:26.561 13:07:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:26.561 13:07:40 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:03:26.561 13:07:40 -- common/autotest_common.sh@10 -- # set +x 00:03:26.561 ************************************ 00:03:26.561 START TEST setup.sh 00:03:26.561 ************************************ 00:03:26.561 13:07:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:26.561 * Looking for test storage... 00:03:26.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:26.561 13:07:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:26.561 13:07:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:26.561 13:07:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:26.561 13:07:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:26.561 13:07:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:26.561 13:07:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:26.561 13:07:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:26.561 13:07:41 -- scripts/common.sh@335 -- # IFS=.-: 00:03:26.561 13:07:41 -- scripts/common.sh@335 -- # read -ra ver1 00:03:26.561 13:07:41 -- scripts/common.sh@336 -- # IFS=.-: 00:03:26.561 13:07:41 -- scripts/common.sh@336 -- # read -ra ver2 00:03:26.561 13:07:41 -- scripts/common.sh@337 -- # local 'op=<' 00:03:26.561 13:07:41 -- scripts/common.sh@339 -- # ver1_l=2 00:03:26.561 13:07:41 -- scripts/common.sh@340 -- # ver2_l=1 00:03:26.561 13:07:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:26.561 13:07:41 -- scripts/common.sh@343 -- # case "$op" in 00:03:26.561 13:07:41 -- scripts/common.sh@344 -- # : 1 00:03:26.561 13:07:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:26.561 13:07:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:26.561 13:07:41 -- scripts/common.sh@364 -- # decimal 1 00:03:26.561 13:07:41 -- scripts/common.sh@352 -- # local d=1 00:03:26.561 13:07:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:26.561 13:07:41 -- scripts/common.sh@354 -- # echo 1 00:03:26.561 13:07:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:26.561 13:07:41 -- scripts/common.sh@365 -- # decimal 2 00:03:26.561 13:07:41 -- scripts/common.sh@352 -- # local d=2 00:03:26.561 13:07:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:26.561 13:07:41 -- scripts/common.sh@354 -- # echo 2 00:03:26.561 13:07:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:26.561 13:07:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:26.561 13:07:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:26.561 13:07:41 -- scripts/common.sh@367 -- # return 0 00:03:26.561 13:07:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:26.561 13:07:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:26.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.561 --rc genhtml_branch_coverage=1 00:03:26.561 --rc genhtml_function_coverage=1 00:03:26.561 --rc genhtml_legend=1 00:03:26.561 --rc geninfo_all_blocks=1 00:03:26.561 --rc geninfo_unexecuted_blocks=1 00:03:26.561 00:03:26.561 ' 00:03:26.561 13:07:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:26.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.561 --rc genhtml_branch_coverage=1 00:03:26.561 --rc genhtml_function_coverage=1 00:03:26.561 --rc genhtml_legend=1 00:03:26.561 --rc geninfo_all_blocks=1 00:03:26.561 --rc geninfo_unexecuted_blocks=1 00:03:26.561 00:03:26.561 ' 00:03:26.561 13:07:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:26.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.561 --rc genhtml_branch_coverage=1 00:03:26.561 --rc genhtml_function_coverage=1 00:03:26.561 --rc genhtml_legend=1 00:03:26.561 --rc geninfo_all_blocks=1 00:03:26.561 --rc geninfo_unexecuted_blocks=1 00:03:26.561 00:03:26.561 ' 00:03:26.561 13:07:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:26.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.561 --rc genhtml_branch_coverage=1 00:03:26.561 --rc genhtml_function_coverage=1 00:03:26.561 --rc genhtml_legend=1 00:03:26.561 --rc geninfo_all_blocks=1 00:03:26.561 --rc geninfo_unexecuted_blocks=1 00:03:26.561 00:03:26.561 ' 00:03:26.561 13:07:41 -- setup/test-setup.sh@10 -- # uname -s 00:03:26.561 13:07:41 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:26.561 13:07:41 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:26.561 13:07:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:26.561 13:07:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:26.561 13:07:41 -- common/autotest_common.sh@10 -- # set +x 00:03:26.561 ************************************ 00:03:26.561 START TEST acl 00:03:26.561 ************************************ 00:03:26.561 13:07:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:26.823 * Looking for test storage... 
00:03:26.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:26.823 13:07:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:26.823 13:07:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:26.823 13:07:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:26.823 13:07:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:26.823 13:07:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:26.823 13:07:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:26.823 13:07:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:26.823 13:07:41 -- scripts/common.sh@335 -- # IFS=.-: 00:03:26.823 13:07:41 -- scripts/common.sh@335 -- # read -ra ver1 00:03:26.823 13:07:41 -- scripts/common.sh@336 -- # IFS=.-: 00:03:26.823 13:07:41 -- scripts/common.sh@336 -- # read -ra ver2 00:03:26.823 13:07:41 -- scripts/common.sh@337 -- # local 'op=<' 00:03:26.823 13:07:41 -- scripts/common.sh@339 -- # ver1_l=2 00:03:26.823 13:07:41 -- scripts/common.sh@340 -- # ver2_l=1 00:03:26.823 13:07:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:26.823 13:07:41 -- scripts/common.sh@343 -- # case "$op" in 00:03:26.823 13:07:41 -- scripts/common.sh@344 -- # : 1 00:03:26.823 13:07:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:26.823 13:07:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:26.823 13:07:41 -- scripts/common.sh@364 -- # decimal 1 00:03:26.823 13:07:41 -- scripts/common.sh@352 -- # local d=1 00:03:26.823 13:07:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:26.823 13:07:41 -- scripts/common.sh@354 -- # echo 1 00:03:26.823 13:07:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:26.823 13:07:41 -- scripts/common.sh@365 -- # decimal 2 00:03:26.823 13:07:41 -- scripts/common.sh@352 -- # local d=2 00:03:26.823 13:07:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:26.823 13:07:41 -- scripts/common.sh@354 -- # echo 2 00:03:26.823 13:07:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:26.823 13:07:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:26.823 13:07:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:26.823 13:07:41 -- scripts/common.sh@367 -- # return 0 00:03:26.823 13:07:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:26.823 13:07:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:26.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.823 --rc genhtml_branch_coverage=1 00:03:26.824 --rc genhtml_function_coverage=1 00:03:26.824 --rc genhtml_legend=1 00:03:26.824 --rc geninfo_all_blocks=1 00:03:26.824 --rc geninfo_unexecuted_blocks=1 00:03:26.824 00:03:26.824 ' 00:03:26.824 13:07:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:26.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.824 --rc genhtml_branch_coverage=1 00:03:26.824 --rc genhtml_function_coverage=1 00:03:26.824 --rc genhtml_legend=1 00:03:26.824 --rc geninfo_all_blocks=1 00:03:26.824 --rc geninfo_unexecuted_blocks=1 00:03:26.824 00:03:26.824 ' 00:03:26.824 13:07:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:26.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.824 --rc genhtml_branch_coverage=1 00:03:26.824 --rc genhtml_function_coverage=1 00:03:26.824 --rc genhtml_legend=1 00:03:26.824 --rc geninfo_all_blocks=1 00:03:26.824 --rc geninfo_unexecuted_blocks=1 00:03:26.824 00:03:26.824 ' 00:03:26.824 13:07:41 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:26.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.824 --rc genhtml_branch_coverage=1 00:03:26.824 --rc genhtml_function_coverage=1 00:03:26.824 --rc genhtml_legend=1 00:03:26.824 --rc geninfo_all_blocks=1 00:03:26.824 --rc geninfo_unexecuted_blocks=1 00:03:26.824 00:03:26.824 ' 00:03:26.824 13:07:41 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:26.824 13:07:41 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:26.824 13:07:41 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:26.824 13:07:41 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:26.824 13:07:41 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:26.824 13:07:41 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:26.824 13:07:41 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:26.824 13:07:41 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:26.824 13:07:41 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:26.824 13:07:41 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:26.824 13:07:41 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:26.824 13:07:41 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:26.824 13:07:41 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:26.824 13:07:41 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:26.824 13:07:41 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:26.824 13:07:41 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:03:26.824 13:07:41 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:03:26.824 13:07:41 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:26.824 13:07:41 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:26.824 13:07:41 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:26.824 13:07:41 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n2 00:03:26.824 13:07:41 -- common/autotest_common.sh@1657 -- # local device=nvme2n2 00:03:26.824 13:07:41 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:26.824 13:07:41 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:26.824 13:07:41 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:26.824 13:07:41 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n3 00:03:26.824 13:07:41 -- common/autotest_common.sh@1657 -- # local device=nvme2n3 00:03:26.824 13:07:41 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:26.824 13:07:41 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:26.824 13:07:41 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:26.824 13:07:41 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:03:26.824 13:07:41 -- common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:03:26.824 13:07:41 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:26.824 13:07:41 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:26.824 13:07:41 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:26.824 13:07:41 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:03:26.824 13:07:41 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:03:26.824 
13:07:41 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:26.824 13:07:41 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:26.824 13:07:41 -- setup/acl.sh@12 -- # devs=() 00:03:26.824 13:07:41 -- setup/acl.sh@12 -- # declare -a devs 00:03:26.824 13:07:41 -- setup/acl.sh@13 -- # drivers=() 00:03:26.824 13:07:41 -- setup/acl.sh@13 -- # declare -A drivers 00:03:26.824 13:07:41 -- setup/acl.sh@51 -- # setup reset 00:03:26.824 13:07:41 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:26.824 13:07:41 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:28.211 13:07:42 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:28.211 13:07:42 -- setup/acl.sh@16 -- # local dev driver 00:03:28.211 13:07:42 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.211 13:07:42 -- setup/acl.sh@15 -- # setup output status 00:03:28.211 13:07:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.211 13:07:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:28.211 Hugepages 00:03:28.211 node hugesize free / total 00:03:28.211 13:07:42 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:28.211 13:07:42 -- setup/acl.sh@19 -- # continue 00:03:28.211 13:07:42 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.211 00:03:28.211 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:28.211 13:07:42 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:28.211 13:07:42 -- setup/acl.sh@19 -- # continue 00:03:28.211 13:07:42 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.211 13:07:42 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:28.211 13:07:42 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:28.211 13:07:42 -- setup/acl.sh@20 -- # continue 00:03:28.211 13:07:42 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.211 13:07:42 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:03:28.211 13:07:42 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:28.211 13:07:42 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:28.211 13:07:42 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:28.211 13:07:42 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:28.211 13:07:42 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.211 13:07:42 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:03:28.211 13:07:42 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:28.211 13:07:42 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:28.211 13:07:42 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:28.211 13:07:42 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:28.211 13:07:42 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.211 13:07:42 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:03:28.211 13:07:42 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:28.211 13:07:42 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:28.211 13:07:42 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:28.211 13:07:42 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:28.211 13:07:42 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.472 13:07:42 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:03:28.472 13:07:42 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:28.472 13:07:42 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:03:28.472 13:07:42 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:28.472 13:07:42 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 
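The acl.sh trace above collects the controllers the ACL tests may touch: it reads the columns of `setup.sh status`, keeps rows whose BDF field matches *:*:*.* and whose driver is nvme (the virtio-pci boot disk at 0000:00:03.0 is skipped), and drops anything listed in PCI_BLOCKED, ending with (( 4 > 0 )) devices collected. A condensed sketch of that loop, assuming the same column layout the trace reads (the exact `setup.sh status` output format is not shown here):

    declare -a devs=()        # BDFs selected for the ACL tests
    declare -A drivers=()     # BDF -> driver, as in the trace
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue            # skip hugepage/header rows
        [[ $driver == nvme ]] || continue            # e.g. virtio-pci is rejected
        [[ $PCI_BLOCKED == *"$dev"* ]] && continue   # honor the block list
        devs+=("$dev")
        drivers["$dev"]=$driver
    done < <(/home/vagrant/spdk_repo/spdk/scripts/setup.sh status)
    (( ${#devs[@]} > 0 ))     # the run above found 4 NVMe controllers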
00:03:28.472 13:07:42 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.472 13:07:42 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:03:28.472 13:07:42 -- setup/acl.sh@54 -- # run_test denied denied 00:03:28.472 13:07:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:28.472 13:07:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:28.472 13:07:42 -- common/autotest_common.sh@10 -- # set +x 00:03:28.472 ************************************ 00:03:28.472 START TEST denied 00:03:28.472 ************************************ 00:03:28.472 13:07:42 -- common/autotest_common.sh@1114 -- # denied 00:03:28.472 13:07:42 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:03:28.472 13:07:42 -- setup/acl.sh@38 -- # setup output config 00:03:28.472 13:07:42 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:03:28.472 13:07:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.472 13:07:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:29.860 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:03:29.860 13:07:44 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:03:29.860 13:07:44 -- setup/acl.sh@28 -- # local dev driver 00:03:29.860 13:07:44 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:29.860 13:07:44 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:03:29.860 13:07:44 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:03:29.860 13:07:44 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:29.860 13:07:44 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:29.860 13:07:44 -- setup/acl.sh@41 -- # setup reset 00:03:29.860 13:07:44 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:29.860 13:07:44 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:36.469 00:03:36.469 real 0m7.124s 00:03:36.469 user 0m0.704s 00:03:36.469 sys 0m1.254s 00:03:36.469 13:07:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:36.469 ************************************ 00:03:36.469 END TEST denied 00:03:36.469 ************************************ 00:03:36.469 13:07:49 -- common/autotest_common.sh@10 -- # set +x 00:03:36.469 13:07:50 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:36.469 13:07:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:36.469 13:07:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:36.469 13:07:50 -- common/autotest_common.sh@10 -- # set +x 00:03:36.469 ************************************ 00:03:36.469 START TEST allowed 00:03:36.469 ************************************ 00:03:36.469 13:07:50 -- common/autotest_common.sh@1114 -- # allowed 00:03:36.469 13:07:50 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:03:36.469 13:07:50 -- setup/acl.sh@45 -- # setup output config 00:03:36.469 13:07:50 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:03:36.469 13:07:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.469 13:07:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:36.730 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:36.730 13:07:51 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:03:36.730 13:07:51 -- setup/acl.sh@28 -- # local dev driver 00:03:36.731 13:07:51 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:36.731 13:07:51 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:03:36.731 13:07:51 -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:00:07.0/driver 00:03:36.731 13:07:51 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:36.731 13:07:51 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:36.731 13:07:51 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:36.731 13:07:51 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:08.0 ]] 00:03:36.731 13:07:51 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver 00:03:36.731 13:07:51 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:36.731 13:07:51 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:36.731 13:07:51 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:36.731 13:07:51 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]] 00:03:36.731 13:07:51 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver 00:03:36.731 13:07:51 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:36.731 13:07:51 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:36.731 13:07:51 -- setup/acl.sh@48 -- # setup reset 00:03:36.731 13:07:51 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:36.731 13:07:51 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:37.675 00:03:37.675 real 0m2.138s 00:03:37.675 user 0m0.815s 00:03:37.675 sys 0m1.060s 00:03:37.675 13:07:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:37.675 13:07:52 -- common/autotest_common.sh@10 -- # set +x 00:03:37.675 ************************************ 00:03:37.675 END TEST allowed 00:03:37.675 ************************************ 00:03:37.675 00:03:37.675 real 0m11.113s 00:03:37.675 user 0m2.223s 00:03:37.675 sys 0m3.293s 00:03:37.675 ************************************ 00:03:37.675 END TEST acl 00:03:37.675 ************************************ 00:03:37.675 13:07:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:37.675 13:07:52 -- common/autotest_common.sh@10 -- # set +x 00:03:37.938 13:07:52 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:37.938 13:07:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:37.938 13:07:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:37.938 13:07:52 -- common/autotest_common.sh@10 -- # set +x 00:03:37.938 ************************************ 00:03:37.938 START TEST hugepages 00:03:37.938 ************************************ 00:03:37.938 13:07:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:37.938 * Looking for test storage... 
00:03:37.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:03:37.938 13:07:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:03:37.938 13:07:52 -- common/autotest_common.sh@1690 -- # lcov --version
00:03:37.938 13:07:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:03:37.938 13:07:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:03:37.938 13:07:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:03:37.938 13:07:52 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:03:37.938 13:07:52 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:03:37.938 13:07:52 -- scripts/common.sh@335 -- # IFS=.-:
00:03:37.938 13:07:52 -- scripts/common.sh@335 -- # read -ra ver1
00:03:37.938 13:07:52 -- scripts/common.sh@336 -- # IFS=.-:
00:03:37.938 13:07:52 -- scripts/common.sh@336 -- # read -ra ver2
00:03:37.938 13:07:52 -- scripts/common.sh@337 -- # local 'op=<'
00:03:37.938 13:07:52 -- scripts/common.sh@339 -- # ver1_l=2
00:03:37.938 13:07:52 -- scripts/common.sh@340 -- # ver2_l=1
00:03:37.938 13:07:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:03:37.938 13:07:52 -- scripts/common.sh@343 -- # case "$op" in
00:03:37.938 13:07:52 -- scripts/common.sh@344 -- # : 1
00:03:37.939 13:07:52 -- scripts/common.sh@363 -- # (( v = 0 ))
00:03:37.939 13:07:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:37.939 13:07:52 -- scripts/common.sh@364 -- # decimal 1
00:03:37.939 13:07:52 -- scripts/common.sh@352 -- # local d=1
00:03:37.939 13:07:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:37.939 13:07:52 -- scripts/common.sh@354 -- # echo 1
00:03:37.939 13:07:52 -- scripts/common.sh@364 -- # ver1[v]=1
00:03:37.939 13:07:52 -- scripts/common.sh@365 -- # decimal 2
00:03:37.939 13:07:52 -- scripts/common.sh@352 -- # local d=2
00:03:37.939 13:07:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:37.939 13:07:52 -- scripts/common.sh@354 -- # echo 2
00:03:37.939 13:07:52 -- scripts/common.sh@365 -- # ver2[v]=2
00:03:37.939 13:07:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:03:37.939 13:07:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:03:37.939 13:07:52 -- scripts/common.sh@367 -- # return 0
00:03:37.939 13:07:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:37.939 13:07:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:03:37.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:37.939 --rc genhtml_branch_coverage=1
00:03:37.939 --rc genhtml_function_coverage=1
00:03:37.939 --rc genhtml_legend=1
00:03:37.939 --rc geninfo_all_blocks=1
00:03:37.939 --rc geninfo_unexecuted_blocks=1
00:03:37.939
00:03:37.939 '
00:03:37.939 13:07:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:03:37.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:37.939 --rc genhtml_branch_coverage=1
00:03:37.939 --rc genhtml_function_coverage=1
00:03:37.939 --rc genhtml_legend=1
00:03:37.939 --rc geninfo_all_blocks=1
00:03:37.939 --rc geninfo_unexecuted_blocks=1
00:03:37.939
00:03:37.939 '
00:03:37.939 13:07:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:03:37.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:37.939 --rc genhtml_branch_coverage=1
00:03:37.939 --rc genhtml_function_coverage=1
00:03:37.939 --rc genhtml_legend=1
00:03:37.939 --rc geninfo_all_blocks=1
00:03:37.939 --rc geninfo_unexecuted_blocks=1
00:03:37.939
00:03:37.939 '
00:03:37.939 13:07:52 --
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:37.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.939 --rc genhtml_branch_coverage=1 00:03:37.939 --rc genhtml_function_coverage=1 00:03:37.939 --rc genhtml_legend=1 00:03:37.939 --rc geninfo_all_blocks=1 00:03:37.939 --rc geninfo_unexecuted_blocks=1 00:03:37.939 00:03:37.939 ' 00:03:37.939 13:07:52 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:37.939 13:07:52 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:37.939 13:07:52 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:37.939 13:07:52 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:37.939 13:07:52 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:37.939 13:07:52 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:37.939 13:07:52 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:37.939 13:07:52 -- setup/common.sh@18 -- # local node= 00:03:37.939 13:07:52 -- setup/common.sh@19 -- # local var val 00:03:37.939 13:07:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.939 13:07:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.939 13:07:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.939 13:07:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.939 13:07:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.939 13:07:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.939 13:07:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 5795556 kB' 'MemAvailable: 7352700 kB' 'Buffers: 3704 kB' 'Cached: 1768988 kB' 'SwapCached: 0 kB' 'Active: 465120 kB' 'Inactive: 1422952 kB' 'Active(anon): 125912 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422952 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 117052 kB' 'Mapped: 50516 kB' 'Shmem: 10532 kB' 'KReclaimable: 63824 kB' 'Slab: 162140 kB' 'SReclaimable: 63824 kB' 'SUnreclaim: 98316 kB' 'KernelStack: 6496 kB' 'PageTables: 3928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12410000 kB' 'Committed_AS: 311308 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.939 13:07:52 -- 
setup/common.sh@32 -- # continue 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.939 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.939 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # continue 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.940 13:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.940 13:07:52 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.940 13:07:52 -- setup/common.sh@33 -- # echo 2048 00:03:37.940 13:07:52 -- setup/common.sh@33 -- # return 0 00:03:37.940 13:07:52 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:37.940 13:07:52 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:37.940 13:07:52 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:37.940 13:07:52 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:37.940 13:07:52 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:37.940 13:07:52 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:37.940 13:07:52 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:37.940 13:07:52 -- setup/hugepages.sh@207 -- # get_nodes
00:03:37.940 13:07:52 -- setup/hugepages.sh@27 -- # local node
00:03:37.940 13:07:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:37.940 13:07:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:37.940 13:07:52 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:37.940 13:07:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:37.940 13:07:52 -- setup/hugepages.sh@208 -- # clear_hp
00:03:37.940 13:07:52 -- setup/hugepages.sh@37 -- # local node hp
00:03:37.940 13:07:52 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:37.940 13:07:52 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:37.940 13:07:52 -- setup/hugepages.sh@41 -- # echo 0
00:03:37.940 13:07:52 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:37.940 13:07:52 -- setup/hugepages.sh@41 -- # echo 0
00:03:37.940 13:07:52 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:37.940 13:07:52 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:37.940 13:07:52 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:37.940 13:07:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:37.940 13:07:52 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:37.940 13:07:52 -- common/autotest_common.sh@10 -- # set +x
00:03:37.940 ************************************
00:03:37.940 START TEST default_setup
00:03:37.940 ************************************
00:03:37.940 13:07:52 -- common/autotest_common.sh@1114 -- # default_setup
00:03:37.940 13:07:52 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:37.940 13:07:52 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:37.940 13:07:52 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:37.940 13:07:52 -- setup/hugepages.sh@51 -- # shift
00:03:37.940 13:07:52 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:37.940 13:07:52 -- setup/hugepages.sh@52 -- # local node_ids
00:03:37.940 13:07:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:37.940 13:07:52 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:37.940 13:07:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:37.940 13:07:52 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:37.940 13:07:52 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:37.940 13:07:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:37.940 13:07:52 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:37.940 13:07:52 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:37.940 13:07:52 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:37.940 13:07:52 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:37.940 13:07:52 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:37.940 13:07:52 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:37.940 13:07:52 -- setup/hugepages.sh@73 -- # return 0
00:03:37.941 13:07:52 -- setup/hugepages.sh@137 -- # setup output
00:03:37.941 13:07:52 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:37.941 13:07:52 -- setup/common.sh@10
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:38.958 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:39.223 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:03:39.223 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:39.223 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:03:39.223 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:03:39.223 13:07:53 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:39.223 13:07:53 -- setup/hugepages.sh@89 -- # local node 00:03:39.223 13:07:53 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:39.223 13:07:53 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:39.223 13:07:53 -- setup/hugepages.sh@92 -- # local surp 00:03:39.223 13:07:53 -- setup/hugepages.sh@93 -- # local resv 00:03:39.223 13:07:53 -- setup/hugepages.sh@94 -- # local anon 00:03:39.223 13:07:53 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:39.223 13:07:53 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:39.223 13:07:53 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:39.223 13:07:53 -- setup/common.sh@18 -- # local node= 00:03:39.223 13:07:53 -- setup/common.sh@19 -- # local var val 00:03:39.223 13:07:53 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.223 13:07:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.223 13:07:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.223 13:07:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.223 13:07:53 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.223 13:07:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.223 13:07:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7915656 kB' 'MemAvailable: 9472632 kB' 'Buffers: 3704 kB' 'Cached: 1768972 kB' 'SwapCached: 0 kB' 'Active: 467436 kB' 'Inactive: 1422968 kB' 'Active(anon): 128228 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422968 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119140 kB' 'Mapped: 50636 kB' 'Shmem: 10492 kB' 'KReclaimable: 63452 kB' 'Slab: 161832 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98380 kB' 'KernelStack: 6448 kB' 'PageTables: 3888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 318940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.223 13:07:53 -- 
setup/common.sh@32 -- # continue 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # 
[[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.223 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.223 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 
00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.224 13:07:53 -- setup/common.sh@33 -- # echo 0 00:03:39.224 13:07:53 -- setup/common.sh@33 -- # return 0 00:03:39.224 13:07:53 -- setup/hugepages.sh@97 -- # anon=0 00:03:39.224 13:07:53 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:39.224 13:07:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.224 13:07:53 -- setup/common.sh@18 -- # local node= 00:03:39.224 13:07:53 -- setup/common.sh@19 -- # local var val 00:03:39.224 13:07:53 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.224 13:07:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.224 13:07:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.224 13:07:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.224 13:07:53 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.224 13:07:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7915956 kB' 'MemAvailable: 9472932 kB' 'Buffers: 3704 kB' 'Cached: 1768972 kB' 'SwapCached: 0 kB' 'Active: 467444 kB' 'Inactive: 1422968 kB' 'Active(anon): 128236 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422968 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119384 kB' 'Mapped: 50636 kB' 'Shmem: 10492 kB' 'KReclaimable: 63452 kB' 'Slab: 161840 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98388 kB' 'KernelStack: 6432 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 318940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.224 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.224 13:07:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.225 13:07:53 -- setup/common.sh@32 -- # 
00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue
00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': '
00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _
00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:39.225 13:07:53 -- setup/common.sh@32 -- # continue
[... the same continue/IFS/read triplet repeats for FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd ...]
00:03:39.225 13:07:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:39.225 13:07:53 -- setup/common.sh@33 -- # echo 0
00:03:39.225 13:07:53 -- setup/common.sh@33 -- # return 0
00:03:39.225 13:07:53 -- setup/hugepages.sh@99 -- # surp=0
00:03:39.225 13:07:53 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:39.225 13:07:53 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:39.225 13:07:53 -- setup/common.sh@18 -- # local node=
00:03:39.225 13:07:53 -- setup/common.sh@19 -- # local var val
00:03:39.225 13:07:53 -- setup/common.sh@20 -- # local mem_f mem
00:03:39.225 13:07:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.225 13:07:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.225 13:07:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.225 13:07:53 -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.225 13:07:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.225 13:07:53 -- setup/common.sh@31 -- # IFS=': '
00:03:39.225 13:07:53 -- setup/common.sh@31 -- # read -r var val _
00:03:39.225 13:07:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7916792 kB' 'MemAvailable: 9473776 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 467292 kB' 'Inactive: 1422976 kB' 'Active(anon): 128084 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119228 kB' 'Mapped: 50500 kB' 'Shmem: 10492 kB' 'KReclaimable: 63452 kB' 'Slab: 161820 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98368 kB' 'KernelStack: 6480 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 318940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB'
00:03:39.226 13:07:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:39.226 13:07:53 -- setup/common.sh@32 -- # continue
[... identical continue/IFS/read xtrace over every remaining /proc/meminfo field; none match until the target ...]
00:03:39.227 13:07:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:39.227 13:07:53 -- setup/common.sh@33 -- # echo 0
00:03:39.227 13:07:53 -- setup/common.sh@33 -- # return 0
00:03:39.227 13:07:53 -- setup/hugepages.sh@100 -- # resv=0
00:03:39.227 nr_hugepages=1024
00:03:39.227 13:07:53 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:39.227 resv_hugepages=0
00:03:39.227 13:07:53 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:39.227 surplus_hugepages=0
00:03:39.227 13:07:53 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:39.227 anon_hugepages=0
00:03:39.227 13:07:53 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:39.227 13:07:53 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:39.227 13:07:53 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
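Every lookup traced in this test is the same parsing idiom from setup/common.sh: pick /proc/meminfo or a node-local file, strip any leading 'Node <n> ' prefix, then split each 'Key: value' entry on IFS=': ' until the requested key matches and its value is echoed. A minimal stand-alone sketch of that idiom follows; the function name is hypothetical and a streaming read stands in for the script's mapfile-plus-loop.

  # Sketch of the get_meminfo pattern traced above (illustrative, not the SPDK helper itself).
  get_meminfo_sketch() {
      local get=$1 node=${2-} var val _
      local mem_f=/proc/meminfo
      # With a node argument, prefer the node-local counters, as the node0 lookup below does.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # Node-local files prefix each line with "Node <n> "; drop it, then split on ': '.
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"    # e.g. 0 for HugePages_Rsvd in this run
              return 0
          fi
      done < <(sed 's/^Node [0-9][0-9]* *//' "$mem_f")
      return 1
  }

Usage mirrors the trace: get_meminfo_sketch HugePages_Rsvd yields 0 here, and get_meminfo_sketch HugePages_Surp 0 reads node0's file instead.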
00:03:39.227 13:07:53 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:39.227 13:07:53 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:39.227 13:07:53 -- setup/common.sh@18 -- # local node=
00:03:39.227 13:07:53 -- setup/common.sh@19 -- # local var val
00:03:39.227 13:07:53 -- setup/common.sh@20 -- # local mem_f mem
00:03:39.227 13:07:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.227 13:07:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.227 13:07:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.227 13:07:53 -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.227 13:07:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.227 13:07:53 -- setup/common.sh@31 -- # IFS=': '
00:03:39.227 13:07:53 -- setup/common.sh@31 -- # read -r var val _
00:03:39.227 13:07:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7917156 kB' 'MemAvailable: 9474140 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 467056 kB' 'Inactive: 1422976 kB' 'Active(anon): 127848 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 118940 kB' 'Mapped: 50500 kB' 'Shmem: 10492 kB' 'KReclaimable: 63452 kB' 'Slab: 161820 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98368 kB' 'KernelStack: 6464 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 318940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB'
00:03:39.227 13:07:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:39.227 13:07:53 -- setup/common.sh@32 -- # continue
[... identical continue/IFS/read xtrace over every remaining /proc/meminfo field; none match until the target ...]
00:03:39.228 13:07:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:39.228 13:07:53 -- setup/common.sh@33 -- # echo 1024
00:03:39.228 13:07:53 -- setup/common.sh@33 -- # return 0
00:03:39.228 13:07:53 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:39.228 13:07:53 -- setup/hugepages.sh@112 -- # get_nodes
00:03:39.228 13:07:53 -- setup/hugepages.sh@27 -- # local node
00:03:39.228 13:07:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:39.228 13:07:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:39.228 13:07:53 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:39.228 13:07:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:39.228 13:07:53 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:39.228 13:07:53 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:39.228 13:07:53 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:39.228 13:07:53 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:39.228 13:07:53 -- setup/common.sh@18 -- # local node=0
00:03:39.228 13:07:53 -- setup/common.sh@19 -- # local var val
00:03:39.228 13:07:53 -- setup/common.sh@20 -- # local mem_f mem
00:03:39.228 13:07:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.228 13:07:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:39.228 13:07:53 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:39.228 13:07:53 -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.228 13:07:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.228 13:07:53 -- setup/common.sh@31 -- # IFS=': '
00:03:39.228 13:07:53 -- setup/common.sh@31 -- # read -r var val _
00:03:39.229 13:07:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7917156 kB' 'MemUsed: 4319940 kB' 'SwapCached: 0 kB' 'Active: 467252 kB' 'Inactive: 1422976 kB' 'Active(anon): 128044 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'FilePages: 1772680 kB' 'Mapped: 50500 kB' 'AnonPages: 119136 kB' 'Shmem: 10492 kB' 'KernelStack: 6448 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63452 kB' 'Slab: 161820 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:39.229 13:07:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:39.229 13:07:53 -- setup/common.sh@32 -- # continue
[... identical continue/IFS/read xtrace over the remaining node0 meminfo fields; none match until the target ...]
00:03:39.229 13:07:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:39.229 13:07:53 -- setup/common.sh@33 -- # echo 0
00:03:39.229 13:07:53 -- setup/common.sh@33 -- # return 0
00:03:39.229 13:07:53 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:39.229 13:07:53 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:39.229 13:07:53 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:39.229 13:07:53 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:39.229 node0=1024 expecting 1024
00:03:39.229 13:07:53 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:39.229 13:07:53 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:39.229 
00:03:39.229 real	0m1.262s
00:03:39.229 user	0m0.499s
00:03:39.229 sys	0m0.608s
00:03:39.229 13:07:53 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:39.229 ************************************
00:03:39.229 END TEST default_setup
00:03:39.229 ************************************
00:03:39.229 13:07:53 -- common/autotest_common.sh@10 -- # set +x
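default_setup passes because the counters it just collected satisfy the pool identity asserted at setup/hugepages.sh@107 and @130: the kernel-reported HugePages_Total must equal the requested page count plus surplus plus reserved pages, and the per-node tally must echo back the expectation. A stand-alone restatement of that check for the single-node case seen here, using awk in place of the script's own parser (variable names illustrative):

  # Hedged sketch of the accounting behind 'node0=1024 expecting 1024'.
  nr_hugepages=1024
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)    # 0 in this run
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)    # 0 in this run
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # 1024 in this run
  if (( total == nr_hugepages + surp + resv )); then
      echo "node0=$total expecting $nr_hugepages"              # the line echoed above
  fi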
00:03:39.490 13:07:53 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:39.490 13:07:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:39.490 13:07:53 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:39.490 13:07:53 -- common/autotest_common.sh@10 -- # set +x
00:03:39.490 ************************************
00:03:39.490 START TEST per_node_1G_alloc
00:03:39.490 ************************************
00:03:39.490 13:07:53 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:03:39.490 13:07:53 -- setup/hugepages.sh@143 -- # local IFS=,
00:03:39.490 13:07:53 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:03:39.490 13:07:53 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:39.490 13:07:53 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:39.490 13:07:53 -- setup/hugepages.sh@51 -- # shift
00:03:39.490 13:07:53 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:39.490 13:07:53 -- setup/hugepages.sh@52 -- # local node_ids
00:03:39.490 13:07:53 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:39.490 13:07:53 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:39.490 13:07:53 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:39.490 13:07:53 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:39.490 13:07:53 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:39.490 13:07:53 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:39.490 13:07:53 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:39.490 13:07:53 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:39.490 13:07:53 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:39.490 13:07:53 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:39.490 13:07:53 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:39.490 13:07:53 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:39.490 13:07:53 -- setup/hugepages.sh@73 -- # return 0
00:03:39.490 13:07:53 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:39.490 13:07:53 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:03:39.490 13:07:53 -- setup/hugepages.sh@146 -- # setup output
00:03:39.490 13:07:53 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:39.490 13:07:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
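The values traced above come from a fixed conversion: get_test_nr_hugepages is handed 1048576 kB (1 GiB) for node 0, divides by the default hugepage size, and lands on the nr_hugepages=512 that is exported as NRHUGE/HUGENODE before scripts/setup.sh runs. A hedged sketch of that arithmetic (names illustrative; values match this host):

  # 1 GiB expressed in kB, divided by the 2048 kB default hugepage size -> 512 pages.
  size_kb=1048576
  hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 here
  nr_hugepages=$(( size_kb / hugepage_kb ))                        # -> 512
  echo "NRHUGE=$nr_hugepages HUGENODE=0"                           # matches the export above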
00:03:40.015 13:07:54 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:03:40.015 13:07:54 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:40.015 13:07:54 -- setup/hugepages.sh@89 -- # local node
00:03:40.015 13:07:54 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:40.015 13:07:54 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:40.015 13:07:54 -- setup/hugepages.sh@92 -- # local surp
00:03:40.015 13:07:54 -- setup/hugepages.sh@93 -- # local resv
00:03:40.015 13:07:54 -- setup/hugepages.sh@94 -- # local anon
00:03:40.015 13:07:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:40.015 13:07:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:40.015 13:07:54 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:40.015 13:07:54 -- setup/common.sh@18 -- # local node=
00:03:40.015 13:07:54 -- setup/common.sh@19 -- # local var val
00:03:40.015 13:07:54 -- setup/common.sh@20 -- # local mem_f mem
00:03:40.015 13:07:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.015 13:07:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.015 13:07:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.015 13:07:54 -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.015 13:07:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.015 13:07:54 -- setup/common.sh@31 -- # IFS=': '
00:03:40.015 13:07:54 -- setup/common.sh@31 -- # read -r var val _
00:03:40.015 13:07:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8969724 kB' 'MemAvailable: 10526712 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 467956 kB' 'Inactive: 1422980 kB' 'Active(anon): 128748 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119824 kB' 'Mapped: 50628 kB' 'Shmem: 10492 kB' 'KReclaimable: 63452 kB' 'Slab: 161660 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98208 kB' 'KernelStack: 6480 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 318940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB'
00:03:40.015 13:07:54 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:40.015 13:07:54 -- setup/common.sh@32 -- # continue
[... identical continue/IFS/read xtrace over every remaining /proc/meminfo field; none match until the target ...]
00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:40.016 13:07:54 -- setup/common.sh@33 -- # echo 0
00:03:40.016 13:07:54 -- setup/common.sh@33 -- # return 0
00:03:40.016 13:07:54 -- setup/hugepages.sh@97 -- # anon=0
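The guard at setup/hugepages.sh@96 that preceded this lookup compares the transparent-hugepage mode string ('always [madvise] never' on this host, brackets marking the active mode) against *\[\n\e\v\e\r\]*: AnonHugePages is only worth sampling when THP is not pinned to never, since THP can back anonymous memory with huge pages outside the hugetlb pool. A hedged sketch of that decision against the standard sysfs knob (logic illustrative, not the script verbatim):

  # Sample AnonHugePages only when transparent hugepages are not disabled outright.
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)         # e.g. "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # 0 in this run
  else
      anon=0
  fi
  echo "anon_hugepages=$anon"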
setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8969724 kB' 'MemAvailable: 10526712 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 467756 kB' 'Inactive: 1422980 kB' 'Active(anon): 128548 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119616 kB' 'Mapped: 50628 kB' 'Shmem: 10492 kB' 'KReclaimable: 63452 kB' 'Slab: 161652 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98200 kB' 'KernelStack: 6432 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 318940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # 
continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.016 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.016 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.017 13:07:54 -- setup/common.sh@33 -- # echo 0 00:03:40.017 13:07:54 -- setup/common.sh@33 -- # return 0 00:03:40.017 13:07:54 -- setup/hugepages.sh@99 -- # surp=0 00:03:40.017 13:07:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:40.017 13:07:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:40.017 13:07:54 -- setup/common.sh@18 -- # local node= 00:03:40.017 13:07:54 -- setup/common.sh@19 -- # local var val 00:03:40.017 13:07:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.017 13:07:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.017 13:07:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.017 13:07:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.017 13:07:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.017 13:07:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8969724 kB' 'MemAvailable: 10526712 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 467336 kB' 'Inactive: 1422980 kB' 'Active(anon): 128128 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119248 kB' 'Mapped: 50516 kB' 'Shmem: 10492 kB' 'KReclaimable: 63452 kB' 'Slab: 161656 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98204 kB' 'KernelStack: 6496 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 318940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 
-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 13:07:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 13:07:54 -- setup/common.sh@33 -- # echo 0 00:03:40.017 13:07:54 -- setup/common.sh@33 -- # return 0 00:03:40.017 nr_hugepages=512 00:03:40.017 resv_hugepages=0 00:03:40.017 surplus_hugepages=0 00:03:40.017 anon_hugepages=0 00:03:40.017 13:07:54 -- setup/hugepages.sh@100 -- # resv=0 00:03:40.017 13:07:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:40.017 13:07:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:40.017 13:07:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:40.017 13:07:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:40.017 13:07:54 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:40.017 13:07:54 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:40.018 13:07:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:40.018 13:07:54 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:40.018 13:07:54 -- setup/common.sh@18 -- # local node= 00:03:40.018 13:07:54 -- setup/common.sh@19 -- # local var val 00:03:40.018 13:07:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.018 13:07:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.018 13:07:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.018 13:07:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.018 13:07:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.018 13:07:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8969724 kB' 'MemAvailable: 10526712 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 467400 kB' 'Inactive: 1422980 kB' 'Active(anon): 128192 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119364 kB' 'Mapped: 50516 kB' 'Shmem: 10492 kB' 'KReclaimable: 63452 kB' 'Slab: 161656 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98204 kB' 'KernelStack: 6528 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 319696 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 
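The scans replayed throughout this pass are one helper under xtrace: get_meminfo reads /proc/meminfo (or a per-node meminfo file when a node id is passed), strips any "Node <n> " prefix, and echoes the value of the first key matching the requested name. A minimal self-contained sketch of that pattern, reconstructed from the trace above; the standalone framing and the return-1 fallback are assumptions:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern visible in the xtrace; extglob is
    # required for the "Node +([0-9]) " prefix strip used on per-node files.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f=/proc/meminfo
        # With a node id, read that node's meminfo instead of the global one.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry "Node 0 " prefixes
        # Keys end in ':', so splitting on ':' and space yields key, value, unit.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1   # assumption: requested key not present
    }

    get_meminfo HugePages_Surp     # system-wide surplus-page count
    get_meminfo HugePages_Surp 0   # the same counter scoped to NUMA node 0

The linear scan is why the trace shows one [[ ... ]] / continue pair per /proc/meminfo key before each match.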
00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 
00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 
-- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.018 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.018 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.019 13:07:54 -- setup/common.sh@33 -- # echo 512 00:03:40.019 13:07:54 -- setup/common.sh@33 -- # return 0 00:03:40.019 13:07:54 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:40.019 13:07:54 -- setup/hugepages.sh@112 -- # get_nodes 00:03:40.019 13:07:54 -- setup/hugepages.sh@27 -- # local node 00:03:40.019 13:07:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.019 13:07:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:40.019 13:07:54 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:40.019 13:07:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:40.019 13:07:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.019 13:07:54 -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:03:40.019 13:07:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:40.019 13:07:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.019 13:07:54 -- setup/common.sh@18 -- # local node=0 00:03:40.019 13:07:54 -- setup/common.sh@19 -- # local var val 00:03:40.019 13:07:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.019 13:07:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.019 13:07:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:40.019 13:07:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:40.019 13:07:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.019 13:07:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8969724 kB' 'MemUsed: 3267372 kB' 'SwapCached: 0 kB' 'Active: 467712 kB' 'Inactive: 1422980 kB' 'Active(anon): 128504 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'FilePages: 1772680 kB' 'Mapped: 50568 kB' 'AnonPages: 119684 kB' 'Shmem: 10492 kB' 'KernelStack: 6528 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63452 kB' 'Slab: 161656 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.019 
13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.019 13:07:54 -- setup/common.sh@32 -- # continue 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.019 13:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.019 
[... xtrace elided: the HugePages_Surp scan steps key by key through PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free; each non-matching key is an IFS=': ' / read -r var val _ / continue triple ...]
00:03:40.019 13:07:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:40.019 13:07:54 -- setup/common.sh@33 -- # echo 0
00:03:40.019 13:07:54 -- setup/common.sh@33 -- # return 0
00:03:40.019 13:07:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:40.019 13:07:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:40.019 13:07:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:40.019 13:07:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:40.019 13:07:54 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:40.019 node0=512 expecting 512
00:03:40.019 13:07:54 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:40.019
00:03:40.019 real	0m0.609s
00:03:40.019 user	0m0.243s
00:03:40.019 sys	0m0.375s
00:03:40.019 13:07:54 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:40.019 13:07:54 -- common/autotest_common.sh@10 -- # set +x
00:03:40.019 ************************************
00:03:40.019 END TEST per_node_1G_alloc
00:03:40.019 ************************************
00:03:40.019 13:07:54 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:40.019 13:07:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:40.019 13:07:54 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:40.019 13:07:54 -- common/autotest_common.sh@10 -- # set +x
00:03:40.019 ************************************
00:03:40.019 START TEST even_2G_alloc
00:03:40.019 ************************************
00:03:40.019 13:07:54 -- common/autotest_common.sh@1114 -- # even_2G_alloc
00:03:40.019 13:07:54 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:40.019 13:07:54 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:40.019 13:07:54 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:40.019 13:07:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:40.019 13:07:54 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:40.019 13:07:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:40.019 13:07:54 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:40.019 13:07:54 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:40.019 13:07:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:40.019 13:07:54 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:40.019 13:07:54 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:40.019 13:07:54 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:40.019 13:07:54 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:40.019 13:07:54 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:40.019 13:07:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:40.019 13:07:54 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:03:40.019 13:07:54 -- setup/hugepages.sh@83 -- # : 0
00:03:40.019 13:07:54 -- setup/hugepages.sh@84 -- # : 0
00:03:40.019 13:07:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:40.019 13:07:54 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:40.019 13:07:54 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:40.019 13:07:54 -- setup/hugepages.sh@153 -- # setup output
00:03:40.019 13:07:54 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:40.019 13:07:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:40.593 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:40.593 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:40.593 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:40.593 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:40.593 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:40.593 13:07:54 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:40.593 13:07:54 -- setup/hugepages.sh@89 -- # local node
00:03:40.593 13:07:54 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:40.593 13:07:54 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:40.593 13:07:54 -- setup/hugepages.sh@92 -- # local surp
00:03:40.593 13:07:54 -- setup/hugepages.sh@93 -- # local resv
00:03:40.593 13:07:54 -- setup/hugepages.sh@94 -- # local anon
00:03:40.593 13:07:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:40.593 13:07:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:40.593 13:07:54 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:40.593 13:07:54 -- setup/common.sh@18 -- # local node=
00:03:40.593 13:07:54 -- setup/common.sh@19 -- # local var val
00:03:40.593 13:07:54 -- setup/common.sh@20 -- # local mem_f mem
00:03:40.593 13:07:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.593 13:07:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.593 13:07:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.593 13:07:54 -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.593 13:07:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.593 13:07:54 -- setup/common.sh@31 -- # IFS=': '
00:03:40.593 13:07:54 -- setup/common.sh@31 -- # read -r var val _
00:03:40.593 13:07:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7931996 kB' 'MemAvailable: 9488984 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 467556 kB' 'Inactive: 1422980 kB' 'Active(anon): 128348 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119428 kB' 'Mapped: 50612 kB' 'Shmem: 10492 kB' 'KReclaimable: 63452 kB' 'Slab: 161816 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98364 kB' 'KernelStack: 6448 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 318940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB'
[... xtrace elided: get_meminfo walks every key of the snapshot above (MemTotal through HardwareCorrupted) with IFS=': ' / read -r var val _, continuing past each key that is not AnonHugePages ...]
00:03:40.594 13:07:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:40.594 13:07:55 -- setup/common.sh@33 -- # echo 0
00:03:40.594 13:07:55 -- setup/common.sh@33 -- # return 0
00:03:40.594 13:07:55 -- setup/hugepages.sh@97 -- # anon=0
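The long runs of "[[ Key == \K\e\y ]] / continue" pairs collapsed above are xtrace from the get_meminfo helper in setup/common.sh scanning /proc/meminfo one key at a time until the requested field matches. A minimal bash reconstruction of that loop, inferred from the trace alone (the function and variable names appear in the trace; the exact control flow and the per-node handling are assumptions):

    # Sketch of setup/common.sh's get_meminfo, based only on the xtrace in this
    # log; names match the trace, control-flow details are assumptions.
    shopt -s extglob                       # "+([0-9])" below is an extglob pattern

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, read that node's meminfo instead and strip the
        # leading "Node N " so the keys look like the global ones (common.sh@23-29).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        # The key-by-key scan that produces the repetitive xtrace above.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue    # not the requested key, keep scanning
            echo "$val"                         # e.g. 0 for AnonHugePages
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo AnonHugePages on this VM it prints 0, which is the anon=0 assignment recorded above; the same helper is reused below for HugePages_Surp, HugePages_Rsvd and HugePages_Total.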
00:03:40.594 13:07:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:40.594 13:07:55 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:40.594 13:07:55 -- setup/common.sh@18 -- # local node=
00:03:40.594 13:07:55 -- setup/common.sh@19 -- # local var val
00:03:40.594 13:07:55 -- setup/common.sh@20 -- # local mem_f mem
00:03:40.594 13:07:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.594 13:07:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.594 13:07:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.594 13:07:55 -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.594 13:07:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.594 13:07:55 -- setup/common.sh@31 -- # IFS=': '
00:03:40.594 13:07:55 -- setup/common.sh@31 -- # read -r var val _
00:03:40.594 13:07:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7931996 kB' 'MemAvailable: 9488984 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 467416 kB' 'Inactive: 1422980 kB' 'Active(anon): 128208 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119288 kB' 'Mapped: 50612 kB' 'Shmem: 10492 kB' 'KReclaimable: 63452 kB' 'Slab: 161816 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98364 kB' 'KernelStack: 6464 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 318940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB'
[... xtrace elided: the same key-by-key scan runs from MemTotal through HugePages_Rsvd, continuing past every key that is not HugePages_Surp ...]
00:03:40.596 13:07:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:40.596 13:07:55 -- setup/common.sh@33 -- # echo 0
00:03:40.596 13:07:55 -- setup/common.sh@33 -- # return 0
00:03:40.596 13:07:55 -- setup/hugepages.sh@99 -- # surp=0
00:03:40.596 13:07:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:40.596 13:07:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:40.596 13:07:55 -- setup/common.sh@18 -- # local node=
00:03:40.596 13:07:55 -- setup/common.sh@19 -- # local var val
00:03:40.596 13:07:55 -- setup/common.sh@20 -- # local mem_f mem
00:03:40.596 13:07:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.596 13:07:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.596 13:07:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.596 13:07:55 -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.596 13:07:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.596 13:07:55 -- setup/common.sh@31 -- # IFS=': '
00:03:40.596 13:07:55 -- setup/common.sh@31 -- # read -r var val _
00:03:40.596 13:07:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7931744 kB' 'MemAvailable: 9488732 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 467372 kB' 'Inactive: 1422980 kB' 'Active(anon): 128164 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119256 kB' 'Mapped: 50516 kB' 'Shmem: 10492 kB' 'KReclaimable: 63452 kB' 'Slab: 161836 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98384 kB' 'KernelStack: 6480 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 318940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB'
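A quick cross-check of the snapshot just printed: HugePages_Total is 1024 and Hugepagesize is 2048 kB, so the configured pool pins 1024 * 2048 = 2097152 kB, exactly the Hugetlb value reported. The same figure is the size argument even_2G_alloc passed to get_test_nr_hugepages at the top of the test, which is presumably how nr_hugepages=1024 was derived:

    echo $(( 2097152 / 2048 ))   # 1024 pages requested for the pool
    echo $(( 1024 * 2048 ))      # 2097152 kB, matching the Hugetlb field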
[... xtrace elided: the key-by-key scan runs from MemTotal through HugePages_Free, continuing past every key that is not HugePages_Rsvd ...]
00:03:40.597 13:07:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:40.597 13:07:55 -- setup/common.sh@33 -- # echo 0
00:03:40.597 13:07:55 -- setup/common.sh@33 -- # return 0
00:03:40.597 13:07:55 -- setup/hugepages.sh@100 -- # resv=0
00:03:40.597 13:07:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:40.597 nr_hugepages=1024
00:03:40.597 13:07:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:40.597 resv_hugepages=0
00:03:40.597 13:07:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:40.597 surplus_hugepages=0
00:03:40.597 13:07:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:40.597 anon_hugepages=0
00:03:40.597 13:07:55 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:40.597 13:07:55 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
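At this point verify_nr_hugepages has gathered anon, surp and resv via get_meminfo and checks them against the requested pool. A sketch of that accounting, reconstructed from the trace (reading the literal 1024 in the (( ... )) lines as the NRHUGE=1024 exported earlier is an assumption, as is the exact wiring of nr_hugepages; get_meminfo is the sketch given above):

    NRHUGE=1024                           # pool size requested for the test (assumed source of the literal 1024)
    nr_hugepages=1024                     # set earlier by get_test_nr_hugepages
    anon=$(get_meminfo AnonHugePages)     # transparent hugepage usage: 0 here
    surp=$(get_meminfo HugePages_Surp)    # surplus pages beyond the pool: 0 here
    resv=$(get_meminfo HugePages_Rsvd)    # reserved but unfaulted pages: 0 here
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    (( NRHUGE == nr_hugepages + surp + resv ))   # every page accounted for
    (( NRHUGE == nr_hugepages ))                 # nothing surplus or reserved

With all three counters at 0 both checks pass, and the test goes on to read HugePages_Total per node below.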
00:03:40.597 13:07:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:40.597 13:07:55 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:40.597 13:07:55 -- setup/common.sh@18 -- # local node=
00:03:40.597 13:07:55 -- setup/common.sh@19 -- # local var val
00:03:40.597 13:07:55 -- setup/common.sh@20 -- # local mem_f mem
00:03:40.597 13:07:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.597 13:07:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.597 13:07:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.597 13:07:55 -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.597 13:07:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.597 13:07:55 -- setup/common.sh@31 -- # IFS=': '
00:03:40.597 13:07:55 -- setup/common.sh@31 -- # read -r var val _
00:03:40.597 13:07:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7931744 kB' 'MemAvailable: 9488732 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 467120 kB' 'Inactive: 1422980 kB' 'Active(anon): 127912 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 118960 kB' 'Mapped: 50516 kB' 'Shmem: 10492 kB' 'KReclaimable: 63452 kB' 'Slab: 161836 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98384 kB' 'KernelStack: 6464 kB' 'PageTables: 3932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 318940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB'
[... xtrace: the HugePages_Total scan proceeds key by key (MemTotal, MemFree, MemAvailable, ... FileHugePages, ...) ...]
setup/common.sh@32 -- # continue 00:03:40.598 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.598 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.598 13:07:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.598 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.598 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.598 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.598 13:07:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.598 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.598 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.598 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.598 13:07:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.598 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.598 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.598 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.598 13:07:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.599 13:07:55 -- setup/common.sh@33 -- # echo 1024 00:03:40.599 13:07:55 -- setup/common.sh@33 -- # return 0 00:03:40.599 13:07:55 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.599 13:07:55 -- setup/hugepages.sh@112 -- # get_nodes 00:03:40.599 13:07:55 -- setup/hugepages.sh@27 -- # local node 00:03:40.599 13:07:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.599 13:07:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:40.599 13:07:55 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:40.599 13:07:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:40.599 13:07:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.599 13:07:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.599 13:07:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:40.599 13:07:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.599 13:07:55 -- setup/common.sh@18 -- # local node=0 00:03:40.599 13:07:55 -- setup/common.sh@19 -- # local var val 00:03:40.599 13:07:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.599 13:07:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.599 13:07:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:40.599 13:07:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:40.599 13:07:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.599 13:07:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7931744 kB' 'MemUsed: 4305352 kB' 'SwapCached: 0 kB' 'Active: 467120 kB' 'Inactive: 1422980 kB' 'Active(anon): 127912 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'FilePages: 1772680 kB' 'Mapped: 50516 kB' 'AnonPages: 118960 kB' 'Shmem: 10492 kB' 
'KernelStack: 6464 kB' 'PageTables: 3932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63452 kB' 'Slab: 161836 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.599 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.599 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.600 13:07:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.600 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.600 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.600 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.600 13:07:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.600 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.600 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.600 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.600 13:07:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.600 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.600 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.600 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.600 13:07:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.600 13:07:55 -- setup/common.sh@32 -- # continue 00:03:40.600 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.600 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.600 13:07:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.600 
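The arithmetic this verification boils down to, before the trace prints its verdict just below: hugepages.sh@110 requires the global HugePages_Total to equal the requested count plus surplus plus reserved pages, and hugepages.sh@115-128 rebuilds the same expectation per NUMA node from /sys/devices/system/node/node*/meminfo. A sketch of that reconciliation, reusing the get_meminfo_value sketch above (variable names follow the trace; the node loop is simplified for a single-node VM like this one):

    # Global check, as at hugepages.sh@110: totals must reconcile.
    nr_hugepages=1024
    total=$(get_meminfo_value HugePages_Total)
    surp=$(get_meminfo_value HugePages_Surp)
    resv=$(get_meminfo_value HugePages_Rsvd)
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting is off' >&2

    # Per-node expectation, as at hugepages.sh@115-128: each node should sit
    # at its assigned share once node-local surplus/reserved pages are added.
    declare -a nodes_test=( [0]=1024 )    # single node, whole allocation on node0
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        echo "node${node}=${nodes_test[node]} expecting ${nodes_test[node]}"
    done

On this box that prints the node0=1024 expecting 1024 line seen below.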
13:07:55 -- setup/common.sh@33 -- # echo 0
00:03:40.600 13:07:55 -- setup/common.sh@33 -- # return 0
00:03:40.600 13:07:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:40.600 13:07:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:40.600 13:07:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:40.600 13:07:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:40.600 13:07:55 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:40.600 node0=1024 expecting 1024
00:03:40.600 13:07:55 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:40.600
00:03:40.600 real 0m0.575s
00:03:40.600 user 0m0.262s
00:03:40.600 sys 0m0.334s
00:03:40.600 ************************************
00:03:40.600 END TEST even_2G_alloc
00:03:40.600 ************************************
00:03:40.600 13:07:55 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:40.600 13:07:55 -- common/autotest_common.sh@10 -- # set +x
00:03:40.600 13:07:55 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:40.600 13:07:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:40.600 13:07:55 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:40.600 13:07:55 -- common/autotest_common.sh@10 -- # set +x
00:03:40.600 ************************************
00:03:40.600 START TEST odd_alloc
00:03:40.600 ************************************
00:03:40.600 13:07:55 -- common/autotest_common.sh@1114 -- # odd_alloc
00:03:40.600 13:07:55 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:40.600 13:07:55 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:40.600 13:07:55 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:40.600 13:07:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:40.600 13:07:55 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:40.600 13:07:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:40.600 13:07:55 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:40.600 13:07:55 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:40.600 13:07:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:40.600 13:07:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:40.600 13:07:55 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:40.600 13:07:55 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:40.600 13:07:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:40.600 13:07:55 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:40.600 13:07:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:40.600 13:07:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:03:40.600 13:07:55 -- setup/hugepages.sh@83 -- # : 0
00:03:40.600 13:07:55 -- setup/hugepages.sh@84 -- # : 0
00:03:40.600 13:07:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:40.600 13:07:55 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:40.600 13:07:55 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:40.600 13:07:55 -- setup/hugepages.sh@160 -- # setup output
00:03:40.600 13:07:55 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:40.600 13:07:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:41.175 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:41.175 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:41.175 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:41.175 0000:00:06.0 (1b36 0010): Already
using the uio_pci_generic driver 00:03:41.175 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:41.175 13:07:55 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:41.175 13:07:55 -- setup/hugepages.sh@89 -- # local node 00:03:41.175 13:07:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:41.175 13:07:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:41.175 13:07:55 -- setup/hugepages.sh@92 -- # local surp 00:03:41.175 13:07:55 -- setup/hugepages.sh@93 -- # local resv 00:03:41.175 13:07:55 -- setup/hugepages.sh@94 -- # local anon 00:03:41.175 13:07:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:41.175 13:07:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:41.175 13:07:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:41.175 13:07:55 -- setup/common.sh@18 -- # local node= 00:03:41.175 13:07:55 -- setup/common.sh@19 -- # local var val 00:03:41.175 13:07:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.175 13:07:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.175 13:07:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.175 13:07:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.175 13:07:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.175 13:07:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.175 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.175 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.175 13:07:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7923740 kB' 'MemAvailable: 9480728 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 467460 kB' 'Inactive: 1422980 kB' 'Active(anon): 128252 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119296 kB' 'Mapped: 50516 kB' 'Shmem: 10492 kB' 'KReclaimable: 63452 kB' 'Slab: 161968 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98516 kB' 'KernelStack: 6480 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 318580 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:03:41.175 13:07:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.175 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.175 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.175 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.175 13:07:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.175 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.175 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.175 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.175 13:07:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.175 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.175 
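A note on the check at hugepages.sh@96 just above: the left-hand side, always [madvise] never, is the content of the kernel's transparent-hugepage mode knob, and the glob on the right asks whether the bracketed (active) choice is never. Only when THP is not disabled does the script go on to read AnonHugePages, because transparent huge pages could otherwise inflate the anonymous-page numbers it is about to compare. A sketch of that gate, reusing get_meminfo_value from the earlier sketch (the sysfs path is my assumption; the trace shows only the file's contents):

    # The active THP mode is the bracketed word, e.g. "always [madvise] never".
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # assumed path
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP may be handing out anonymous huge pages; account for them.
        anon=$(get_meminfo_value AnonHugePages)
    fi
    echo "anon=${anon}"

Here madvise is the active mode, so the branch is taken, AnonHugePages reads back 0 kB, and the trace settles on anon=0 a little further down (hugepages.sh@97).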
13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.175 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.175 13:07:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.175 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.175 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.175 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.175 13:07:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.175 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.175 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.175 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.175 13:07:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.175 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.175 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.175 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.175 13:07:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.175 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.175 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.175 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.175 13:07:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.175 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.175 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.175 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- 
# [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.176 13:07:55 -- setup/common.sh@33 -- # echo 0 00:03:41.176 13:07:55 -- setup/common.sh@33 -- # return 0 00:03:41.176 13:07:55 -- 
setup/hugepages.sh@97 -- # anon=0 00:03:41.176 13:07:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:41.176 13:07:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.176 13:07:55 -- setup/common.sh@18 -- # local node= 00:03:41.176 13:07:55 -- setup/common.sh@19 -- # local var val 00:03:41.176 13:07:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.176 13:07:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.176 13:07:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.176 13:07:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.176 13:07:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.176 13:07:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7923740 kB' 'MemAvailable: 9480728 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 467528 kB' 'Inactive: 1422980 kB' 'Active(anon): 128320 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119420 kB' 'Mapped: 50516 kB' 'Shmem: 10492 kB' 'KReclaimable: 63452 kB' 'Slab: 161968 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98516 kB' 'KernelStack: 6496 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 318572 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.176 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.176 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # 
continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': 
' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.177 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.177 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.178 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.178 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.178 13:07:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.178 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.178 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.178 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.178 13:07:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.178 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.178 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.178 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.178 13:07:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.178 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.178 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.178 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.178 13:07:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.178 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.178 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.178 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.178 13:07:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.178 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.178 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.178 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.178 13:07:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.178 13:07:55 -- setup/common.sh@32 -- # continue 00:03:41.178 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.178 13:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.178 13:07:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.178 13:07:55 -- setup/common.sh@33 -- # echo 0 00:03:41.178 13:07:55 -- setup/common.sh@33 -- # return 0 00:03:41.178 13:07:55 -- setup/hugepages.sh@99 -- # surp=0 00:03:41.178 13:07:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:41.178 13:07:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:41.178 13:07:55 -- setup/common.sh@18 -- # local node= 00:03:41.178 13:07:55 -- setup/common.sh@19 -- # local var val 00:03:41.178 13:07:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.178 13:07:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.178 13:07:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.178 13:07:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.178 13:07:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.178 13:07:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.178 13:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.178 13:07:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7924000 kB' 'MemAvailable: 9480988 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 
'SwapCached: 0 kB' 'Active: 467556 kB' 'Inactive: 1422980 kB' 'Active(anon): 128348 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119448 kB' 'Mapped: 50516 kB' 'Shmem: 10492 kB' 'KReclaimable: 63452 kB' 'Slab: 161956 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98504 kB' 'KernelStack: 6500 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 318940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB'
00:03:41.178 13:07:55 -- setup/common.sh@31 -- # read -r var val _
00:03:41.178 13:07:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:41.178 13:07:55 -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read / compare / continue xtrace repeats for each remaining field ...]
00:03:41.179 13:07:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:41.179 13:07:55 -- setup/common.sh@33 -- # echo 0
00:03:41.179 13:07:55 -- setup/common.sh@33 -- # return 0
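The xtrace above is get_meminfo scanning the snapshot line by line until the requested field (HugePages_Rsvd here) matches. A minimal stand-alone sketch of that scan pattern, assuming a plain /proc/meminfo source (the real helper in test/setup/common.sh also handles per-node files and captures the snapshot first; get_field is a hypothetical name):

    # hypothetical reduction of the traced loop
    get_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # skip non-matching fields
            echo "$val"                       # numeric value; a trailing "kB" lands in $_
            return 0
        done < /proc/meminfo
    }
    get_field HugePages_Rsvd   # -> 0 on this run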
00:03:41.179 nr_hugepages=1025 resv_hugepages=0 surplus_hugepages=0 anon_hugepages=0
00:03:41.179 13:07:55 -- setup/hugepages.sh@100 -- # resv=0
00:03:41.179 13:07:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:41.179 13:07:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:41.179 13:07:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:41.179 13:07:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:41.179 13:07:55 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:41.179 13:07:55 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:41.179 13:07:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:41.179 13:07:55 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:41.179 13:07:55 -- setup/common.sh@18 -- # local node=
00:03:41.179 13:07:55 -- setup/common.sh@19 -- # local var val
00:03:41.179 13:07:55 -- setup/common.sh@20 -- # local mem_f mem
00:03:41.179 13:07:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.179 13:07:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:41.179 13:07:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:41.179 13:07:55 -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.179 13:07:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.179 13:07:55 -- setup/common.sh@31 -- # IFS=': '
00:03:41.179 13:07:55 -- setup/common.sh@31 -- # read -r var val _
00:03:41.179 13:07:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7924244 kB' 'MemAvailable: 9481232 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 467352 kB' 'Inactive: 1422980 kB' 'Active(anon): 128144 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119244 kB' 'Mapped: 50516 kB' 'Shmem: 10492 kB' 'KReclaimable: 63452 kB' 'Slab: 161944 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98492 kB' 'KernelStack: 6464 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 318940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB'
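Before each scan, the capture step traced at common.sh@28-29 slurps the source file into an array and strips the "Node <n> " prefix that per-node meminfo files carry (a no-op for /proc/meminfo, as in this call). Roughly, assuming extglob is enabled as the SPDK test scripts do:

    shopt -s extglob                      # needed for the +([0-9]) pattern
    mapfile -t mem < /proc/meminfo        # one array element per line
    mem=("${mem[@]#Node +([0-9]) }")      # drop "Node 0 " style prefixes
    printf '%s\n' "${mem[@]}"             # the snapshot dumped above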
00:03:41.179 13:07:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:41.179 13:07:55 -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read / compare / continue xtrace repeats for each remaining field ...]
00:03:41.180 13:07:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:41.181 13:07:55 -- setup/common.sh@33 -- # echo 1025
00:03:41.181 13:07:55 -- setup/common.sh@33 -- # return 0
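The echo 1025 above is HugePages_Total coming back from get_meminfo, and the hugepages.sh@107-110 guards reduce to plain integer accounting. Restated with this run's values (variable names follow the trace):

    nr_hugepages=1025 surp=0 resv=0
    total=1025                                  # get_meminfo HugePages_Total
    (( total == nr_hugepages + surp + resv ))   # holds, so the check passes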
00:03:41.181 13:07:55 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:41.181 13:07:55 -- setup/hugepages.sh@112 -- # get_nodes
00:03:41.181 13:07:55 -- setup/hugepages.sh@27 -- # local node
00:03:41.181 13:07:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:41.181 13:07:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:03:41.181 13:07:55 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:41.181 13:07:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:41.181 13:07:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:41.181 13:07:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:41.181 13:07:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:41.181 13:07:55 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:41.181 13:07:55 -- setup/common.sh@18 -- # local node=0
00:03:41.181 13:07:55 -- setup/common.sh@19 -- # local var val
00:03:41.181 13:07:55 -- setup/common.sh@20 -- # local mem_f mem
00:03:41.181 13:07:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.181 13:07:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:41.181 13:07:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:41.181 13:07:55 -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.181 13:07:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.442 13:07:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7924244 kB' 'MemUsed: 4312852 kB' 'SwapCached: 0 kB' 'Active: 467420 kB' 'Inactive: 1422980 kB' 'Active(anon): 128212 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'FilePages: 1772680 kB' 'Mapped: 50516 kB' 'AnonPages: 119324 kB' 'Shmem: 10492 kB' 'KernelStack: 6480 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63452 kB' 'Slab: 161944 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
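get_meminfo HugePages_Surp 0 switches the source from the global file to node 0's copy; the selection traced at common.sh@22-24 amounts to:

    node=0
    mem_f=/proc/meminfo                                   # default source
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo  # per-node override
    fi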
00:03:41.442 13:07:55 -- setup/common.sh@31 -- # IFS=': '
00:03:41.442 13:07:55 -- setup/common.sh@31 -- # read -r var val _
00:03:41.442 13:07:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:41.442 13:07:55 -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read / compare / continue xtrace repeats for each remaining field ...]
00:03:41.442 13:07:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:41.443 13:07:55 -- setup/common.sh@33 -- # echo 0
00:03:41.443 13:07:55 -- setup/common.sh@33 -- # return 0
00:03:41.443 13:07:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:41.443 13:07:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:41.443 13:07:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:41.443 node0=1025 expecting 1025
00:03:41.443 13:07:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:41.443 13:07:55 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:03:41.443 13:07:55 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:03:41.443
00:03:41.443 real 0m0.600s
00:03:41.443 user 0m0.226s
00:03:41.443 sys 0m0.400s
00:03:41.443 13:07:55 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:41.443 13:07:55 -- common/autotest_common.sh@10 -- # set +x
00:03:41.443 ************************************
00:03:41.443 END TEST odd_alloc
00:03:41.443 ************************************
00:03:41.443 13:07:55 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:41.443 13:07:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:41.443 13:07:55 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:41.443 13:07:55 -- common/autotest_common.sh@10 -- # set +x
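odd_alloc passes because the odd page count survives the single-node distribution intact: node0=1025 expecting 1025. In essence the test asserted the following, shown here against the standard kernel knob as an illustration (the harness drives it through scripts/setup.sh rather than writing the sysctl directly):

    echo 1025 > /proc/sys/vm/nr_hugepages     # request an odd number of 2 MiB pages (needs root)
    grep -E 'HugePages_(Total|Free)' /proc/meminfo
    # HugePages_Total:    1025
    # HugePages_Free:     1025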
00:03:41.443 ************************************
00:03:41.443 START TEST custom_alloc
00:03:41.443 ************************************
00:03:41.443 13:07:55 -- common/autotest_common.sh@1114 -- # custom_alloc
00:03:41.443 13:07:55 -- setup/hugepages.sh@167 -- # local IFS=,
00:03:41.443 13:07:55 -- setup/hugepages.sh@169 -- # local node
00:03:41.443 13:07:55 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:41.443 13:07:55 -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:41.443 13:07:55 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:41.443 13:07:55 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:41.443 13:07:55 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:41.443 13:07:55 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:41.443 13:07:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:41.443 13:07:55 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:41.443 13:07:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:41.443 13:07:55 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:41.443 13:07:55 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:41.443 13:07:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:41.443 13:07:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:41.443 13:07:55 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:41.443 13:07:55 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:41.443 13:07:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:41.443 13:07:55 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:41.443 13:07:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:41.443 13:07:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:41.443 13:07:55 -- setup/hugepages.sh@83 -- # : 0
00:03:41.443 13:07:55 -- setup/hugepages.sh@84 -- # : 0
00:03:41.443 13:07:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:41.443 13:07:55 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:41.443 13:07:55 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:03:41.443 13:07:55 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:41.443 13:07:55 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:41.443 13:07:55 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:41.443 13:07:55 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:41.443 13:07:55 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:41.443 13:07:55 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:41.443 13:07:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:41.443 13:07:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:41.443 13:07:55 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:41.443 13:07:55 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:41.443 13:07:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:41.443 13:07:55 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:41.443 13:07:55 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:41.443 13:07:55 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:41.443 13:07:55 -- setup/hugepages.sh@78 -- # return 0
00:03:41.443 13:07:55 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:03:41.443 13:07:55 -- setup/hugepages.sh@187 -- # setup output
00:03:41.443 13:07:55 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:41.443 13:07:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:41.704 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:41.704 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:41.704 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:41.704 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:41.704 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:41.969 13:07:56 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:03:41.969 13:07:56 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:41.969 13:07:56 -- setup/hugepages.sh@89 -- # local node
00:03:41.969 13:07:56 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:41.969 13:07:56 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:41.969 13:07:56 -- setup/hugepages.sh@92 -- # local surp
00:03:41.969 13:07:56 -- setup/hugepages.sh@93 -- # local resv
00:03:41.969 13:07:56 -- setup/hugepages.sh@94 -- # local anon
00:03:41.969 13:07:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:41.969 13:07:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:41.969 13:07:56 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:41.969 13:07:56 -- setup/common.sh@18 -- # local node=
00:03:41.969 13:07:56 -- setup/common.sh@19 -- # local var val
00:03:41.969 13:07:56 -- setup/common.sh@20 -- # local mem_f mem
00:03:41.969 13:07:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.969 13:07:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:41.969 13:07:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:41.969 13:07:56 -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.969 13:07:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.969 13:07:56 -- setup/common.sh@31 -- # IFS=': '
00:03:41.969 13:07:56 -- setup/common.sh@31 -- # read -r var val _
00:03:41.969 13:07:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8975392 kB' 'MemAvailable: 10532380 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 467696 kB' 'Inactive: 1422980 kB' 'Active(anon): 128488 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119592 kB' 'Mapped: 50696 kB' 'Shmem: 10492 kB' 'KReclaimable: 63452 kB' 'Slab: 162008 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98556 kB' 'KernelStack: 6500 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 318940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB'
00:03:41.969 13:07:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:41.969 13:07:56 -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read / compare / continue xtrace repeats for each remaining field ...]
00:03:41.970 13:07:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:41.970 13:07:56 -- setup/common.sh@33 -- # echo 0
00:03:41.970 13:07:56 -- setup/common.sh@33 -- # return 0
00:03:41.970 13:07:56 -- setup/hugepages.sh@97 -- # anon=0
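The anon=0 above is verify_nr_hugepages ruling out stray anonymous hugepages, gated by the transparent-hugepage state read at hugepages.sh@96 ("always [madvise] never" on this guest). Approximately, reusing the hypothetical get_field sketch from earlier:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *'[never]'* ]]; then        # THP not globally off
        anon=$(get_field AnonHugePages)       # 0 kB in this run
    fi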
00:03:41.970 13:07:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:41.970 13:07:56 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:41.970 13:07:56 -- setup/common.sh@18 -- # local node=
00:03:41.970 13:07:56 -- setup/common.sh@19 -- # local var val
00:03:41.970 13:07:56 -- setup/common.sh@20 -- # local mem_f mem
00:03:41.970 13:07:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.970 13:07:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:41.970 13:07:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:41.970 13:07:56 -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.970 13:07:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.970 13:07:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8974136 kB' 'MemAvailable: 10531124 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 467420 kB' 'Inactive: 1422980 kB' 'Active(anon): 128212 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119556 kB' 'Mapped: 50704 kB' 'Shmem: 10492 kB' 'KReclaimable: 63452 kB' 'Slab: 162004 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98552 kB' 'KernelStack: 6452 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 318940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB'
00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': '
00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _
00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read / compare / continue xtrace continues for the remaining fields ...]
00:03:41.971 13:07:56
-- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.971 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.971 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.972 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.972 13:07:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.972 13:07:56 -- setup/common.sh@32 -- # 
continue 00:03:41.972 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.972 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.972 13:07:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.972 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.972 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.972 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.972 13:07:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.972 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.972 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.972 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.972 13:07:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.972 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.972 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.972 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.972 13:07:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.972 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.972 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.972 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.972 13:07:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.972 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.972 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.972 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.972 13:07:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.972 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.972 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.972 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.972 13:07:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.972 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.972 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.972 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.972 13:07:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.972 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.972 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.972 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.972 13:07:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.972 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.972 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.972 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.972 13:07:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.972 13:07:56 -- setup/common.sh@33 -- # echo 0 00:03:41.972 13:07:56 -- setup/common.sh@33 -- # return 0 00:03:41.972 13:07:56 -- setup/hugepages.sh@99 -- # surp=0 00:03:41.972 13:07:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:41.972 13:07:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:41.972 13:07:56 -- setup/common.sh@18 -- # local node= 00:03:41.972 13:07:56 -- setup/common.sh@19 -- # local var val 00:03:41.972 13:07:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.972 13:07:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.972 13:07:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.972 13:07:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.972 13:07:56 -- 
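The passes above are the same scan repeated per key (AnonHugePages, then HugePages_Surp). Pieced together from the traced commands (setup/common.sh@17-33), a minimal standalone sketch of that get_meminfo pattern could look like the following; the per-line herestring read and the non-zero fallback return are assumptions, the rest follows the trace:

    #!/usr/bin/env bash
    shopt -s extglob

    # Pick a meminfo source, strip any "Node <id> " prefix, then scan
    # "key: value" pairs until the requested key matches.
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            # A per-node sysfs file wins when a node id was passed in.
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }") # node files prefix each line with "Node 0 "

        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val" # e.g. 0 for HugePages_Surp, 512 for HugePages_Total
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total  # system-wide, from /proc/meminfo
    get_meminfo HugePages_Surp 0 # node-scoped, from node0/meminfo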
00:03:41.972 13:07:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:41.972 13:07:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:41.972 13:07:56 -- setup/common.sh@18 -- # local node=
00:03:41.972 13:07:56 -- setup/common.sh@19 -- # local var val
00:03:41.972 13:07:56 -- setup/common.sh@20 -- # local mem_f mem
00:03:41.972 13:07:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.972 13:07:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:41.972 13:07:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:41.972 13:07:56 -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.972 13:07:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.972 13:07:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8974136 kB' 'MemAvailable: 10531124 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 467680 kB' 'Inactive: 1422980 kB' 'Active(anon): 128472 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119556 kB' 'Mapped: 50704 kB' 'Shmem: 10492 kB' 'KReclaimable: 63452 kB' 'Slab: 162004 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98552 kB' 'KernelStack: 6452 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 318940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB'
00:03:41.972 13:07:56 -- setup/common.sh@31-32 -- # (read loop repeats: every key from MemTotal through HugePages_Free fails the HugePages_Rsvd match; each iteration continues)
00:03:41.973 13:07:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:41.973 13:07:56 -- setup/common.sh@33 -- # echo 0
00:03:41.973 13:07:56 -- setup/common.sh@33 -- # return 0
00:03:41.973 nr_hugepages=512
00:03:41.973 resv_hugepages=0
00:03:41.973 surplus_hugepages=0
00:03:41.973 13:07:56 -- setup/hugepages.sh@100 -- # resv=0
00:03:41.973 13:07:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:03:41.973 13:07:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:41.973 13:07:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:41.973 anon_hugepages=0
00:03:41.973 13:07:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:41.973 13:07:56 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:41.973 13:07:56 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:03:41.973 13:07:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:41.973 13:07:56 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:41.973 13:07:56 -- setup/common.sh@18 -- # local node=
00:03:41.973 13:07:56 -- setup/common.sh@19 -- # local var val
00:03:41.973 13:07:56 -- setup/common.sh@20 -- # local mem_f mem
00:03:41.973 13:07:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.973 13:07:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:41.973 13:07:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:41.973 13:07:56 -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.973 13:07:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.973 13:07:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8974136 kB' 'MemAvailable: 10531124 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 467440 kB' 'Inactive: 1422980 kB' 'Active(anon): 128232 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119348 kB' 'Mapped: 50516 kB' 'Shmem: 10492 kB' 'KReclaimable: 63452 kB' 'Slab: 161980 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98528 kB' 'KernelStack: 6464 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 318940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB'
00:03:41.974 13:07:56 -- setup/common.sh@31-32 -- # (read loop repeats: every key from MemTotal through Unaccepted fails the HugePages_Total match; each iteration continues)
00:03:41.975 13:07:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:41.975 13:07:56 -- setup/common.sh@33 -- # echo 512
00:03:41.975 13:07:56 -- setup/common.sh@33 -- # return 0
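Those reads feed the consistency check traced at setup/hugepages.sh@107-110: the kernel's HugePages_Total must account for the pages the test requested plus any surplus and reserved pages, which in this run is 512 == 512 + 0 + 0. A sketch of that check built on the get_meminfo sketch above; the mismatch message is invented for illustration:

    # Hugepage accounting check, following setup/hugepages.sh@97-110:
    anon=$(get_meminfo AnonHugePages)  # 0: anonymous transparent hugepages in use
    surp=$(get_meminfo HugePages_Surp) # 0: pages allocated beyond nr_hugepages
    resv=$(get_meminfo HugePages_Rsvd) # 0: pages reserved but not yet faulted in
    nr_hugepages=512                   # what the custom_alloc test configured

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    total=$(get_meminfo HugePages_Total) # 512, read back from the kernel
    if (( total != nr_hugepages + surp + resv )); then
        echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2
    fi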
13:07:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.975 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.975 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.975 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.975 13:07:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.975 13:07:56 -- setup/common.sh@33 -- # echo 512 00:03:41.975 13:07:56 -- setup/common.sh@33 -- # return 0 00:03:41.975 13:07:56 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:41.975 13:07:56 -- setup/hugepages.sh@112 -- # get_nodes 00:03:41.975 13:07:56 -- setup/hugepages.sh@27 -- # local node 00:03:41.975 13:07:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.975 13:07:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:41.975 13:07:56 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:41.975 13:07:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:41.975 13:07:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:41.975 13:07:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:41.975 13:07:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:41.975 13:07:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.975 13:07:56 -- setup/common.sh@18 -- # local node=0 00:03:41.975 13:07:56 -- setup/common.sh@19 -- # local var val 00:03:41.975 13:07:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.975 13:07:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.975 13:07:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:41.975 13:07:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:41.975 13:07:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.975 13:07:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.975 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.975 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.975 13:07:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8974136 kB' 'MemUsed: 3262960 kB' 'SwapCached: 0 kB' 'Active: 467416 kB' 'Inactive: 1422980 kB' 'Active(anon): 128208 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'FilePages: 1772680 kB' 'Mapped: 50516 kB' 'AnonPages: 119296 kB' 'Shmem: 10492 kB' 'KernelStack: 6432 kB' 'PageTables: 3836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63452 kB' 'Slab: 161948 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:41.975 13:07:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.975 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.975 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.975 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.975 13:07:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.975 13:07:56 -- setup/common.sh@32 -- # continue 00:03:41.975 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.975 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.975 13:07:56 -- setup/common.sh@32 -- # [[ 
[... xtrace elided: the remaining node0 meminfo fields (MemUsed through HugePages_Free) are each tested against HugePages_Surp and skipped with 'continue' ...]
00:03:41.976 13:07:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:41.976 13:07:56 -- setup/common.sh@33 -- # echo 0
00:03:41.976 13:07:56 -- setup/common.sh@33 -- # return 0
00:03:41.976 13:07:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:41.976 13:07:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:41.976 node0=512 expecting 512
00:03:41.976 13:07:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:41.976 13:07:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:41.976 13:07:56 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:41.976 ************************************
00:03:41.976 END TEST custom_alloc
00:03:41.976 ************************************
00:03:41.976 13:07:56 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:41.976 
00:03:41.976 real    0m0.564s
00:03:41.976 user    0m0.241s
00:03:41.976 sys     0m0.342s
00:03:41.976 13:07:56 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:41.976 13:07:56 -- common/autotest_common.sh@10 -- # set +x
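Note: the run_test helper from common/autotest_common.sh, visible in the trace above and below, brackets every test case with START/END banners and prints bash `time` output for the run. A minimal sketch of that wrapper pattern in bash (assumed reconstruction, not the exact SPDK implementation; only the banner format and the use of the time keyword are taken from this log):

    # run_test_sketch NAME CMD [ARGS...] -- hypothetical stand-in for run_test
    run_test_sketch() {
        local name=$1
        shift
        # timing the whole block makes bash print real/user/sys after the
        # END banner, which matches the ordering seen in this log
        time {
            echo "************************************"
            echo "START TEST $name"
            echo "************************************"
            "$@"
            echo "************************************"
            echo "END TEST $name"
            echo "************************************"
        }
    }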
00:03:41.976 13:07:56 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:41.976 13:07:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:41.976 13:07:56 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:41.976 13:07:56 -- common/autotest_common.sh@10 -- # set +x
00:03:41.976 ************************************
00:03:41.976 START TEST no_shrink_alloc
00:03:41.976 ************************************
00:03:41.976 13:07:56 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:03:41.976 13:07:56 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:41.976 13:07:56 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:41.976 13:07:56 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:41.976 13:07:56 -- setup/hugepages.sh@51 -- # shift
00:03:41.976 13:07:56 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:41.976 13:07:56 -- setup/hugepages.sh@52 -- # local node_ids
00:03:41.976 13:07:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:41.976 13:07:56 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:41.976 13:07:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:41.976 13:07:56 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:41.976 13:07:56 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:41.976 13:07:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:41.976 13:07:56 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:41.976 13:07:56 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:41.976 13:07:56 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:41.976 13:07:56 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:41.976 13:07:56 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:41.976 13:07:56 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:41.976 13:07:56 -- setup/hugepages.sh@73 -- # return 0
00:03:41.976 13:07:56 -- setup/hugepages.sh@198 -- # setup output
00:03:41.976 13:07:56 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:41.976 13:07:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:42.552 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:42.552 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:42.552 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:42.552 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:42.552 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:42.552 13:07:56 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:42.552 13:07:56 -- setup/hugepages.sh@89 -- # local node
00:03:42.552 13:07:56 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:42.552 13:07:56 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:42.552 13:07:56 -- setup/hugepages.sh@92 -- # local surp
00:03:42.552 13:07:56 -- setup/hugepages.sh@93 -- # local resv
00:03:42.552 13:07:56 -- setup/hugepages.sh@94 -- # local anon
00:03:42.552 13:07:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
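Note: get_test_nr_hugepages, traced above, converts the requested test size into a hugepage count and spreads it over the requested NUMA nodes: 2097152 kB at the default 2048 kB hugepage size gives nr_hugepages=1024, all assigned to node 0 here. A condensed bash sketch of that arithmetic (variable names follow the trace; the explicit division is an assumption, since the trace only shows the resulting 1024):

    default_hugepages=2048                        # kB, from Hugepagesize in /proc/meminfo
    size=2097152                                  # kB, requested by the test (2 GiB)
    user_nodes=(0)                                # NUMA nodes named by the caller

    nr_hugepages=$((size / default_hugepages))    # 2097152 / 2048 = 1024
    declare -a nodes_test
    for _no_nodes in "${user_nodes[@]}"; do
        nodes_test[_no_nodes]=$nr_hugepages       # node 0 gets all 1024 pages
    done
    echo "nr_hugepages=$nr_hugepages"

verify_nr_hugepages then re-derives the live counters (anon, surplus, reserved) from /proc/meminfo, as traced next.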
00:03:42.552 13:07:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:42.552 13:07:56 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:42.552 13:07:56 -- setup/common.sh@18 -- # local node=
00:03:42.552 13:07:56 -- setup/common.sh@19 -- # local var val
00:03:42.552 13:07:56 -- setup/common.sh@20 -- # local mem_f mem
00:03:42.552 13:07:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.552 13:07:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.552 13:07:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.552 13:07:56 -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.552 13:07:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.552 13:07:56 -- setup/common.sh@31 -- # IFS=': '
00:03:42.552 13:07:56 -- setup/common.sh@31 -- # read -r var val _
00:03:42.553 13:07:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7918088 kB' 'MemAvailable: 9475076 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 467556 kB' 'Inactive: 1422980 kB' 'Active(anon): 128348 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119468 kB' 'Mapped: 50724 kB' 'Shmem: 10492 kB' 'KReclaimable: 63452 kB' 'Slab: 161940 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98488 kB' 'KernelStack: 6560 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 319140 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB'
[... xtrace elided: each field above is tested with [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] and skipped via 'continue' until the match ...]
00:03:42.554 13:07:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:42.554 13:07:56 -- setup/common.sh@33 -- # echo 0
00:03:42.554 13:07:56 -- setup/common.sh@33 -- # return 0
00:03:42.554 13:07:56 -- setup/hugepages.sh@97 -- # anon=0
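Note: every get_meminfo call in this trace follows the same pattern: slurp /proc/meminfo (or the per-node /sys/devices/system/node/node<N>/meminfo when a node argument is given), then scan it line by line with IFS=': ' until the requested field matches. That scan is what produces the long runs of '[[ ... ]] / continue' xtrace collapsed above. A condensed, assumed reconstruction of the helper in bash (simplified; the real setup/common.sh version differs in detail):

    #!/usr/bin/env bash
    shopt -s extglob                     # needed for the +([0-9]) pattern below

    get_meminfo_sketch() {
        local get=$1 node=$2
        local var val mem_f mem
        mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")            # strip per-node "Node N " prefixes
        while IFS=': ' read -r var val _; do        # _ swallows a trailing "kB"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1                                    # field not found
    }

    get_meminfo_sketch HugePages_Surp               # prints 0 on this box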
00:03:42.554 13:07:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:42.554 13:07:56 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:42.554 13:07:56 -- setup/common.sh@18 -- # local node=
00:03:42.554 13:07:56 -- setup/common.sh@19 -- # local var val
00:03:42.554 13:07:56 -- setup/common.sh@20 -- # local mem_f mem
00:03:42.554 13:07:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.554 13:07:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.554 13:07:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.554 13:07:56 -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.554 13:07:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.554 13:07:56 -- setup/common.sh@31 -- # IFS=': '
00:03:42.554 13:07:56 -- setup/common.sh@31 -- # read -r var val _
00:03:42.554 13:07:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7918088 kB' 'MemAvailable: 9475076 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 467460 kB' 'Inactive: 1422980 kB' 'Active(anon): 128252 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119344 kB' 'Mapped: 50516 kB' 'Shmem: 10492 kB' 'KReclaimable: 63452 kB' 'Slab: 161940 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98488 kB' 'KernelStack: 6496 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 319140 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB'
[... xtrace elided: each field above is tested with [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and skipped via 'continue' until the match ...]
00:03:42.555 13:07:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:42.555 13:07:56 -- setup/common.sh@33 -- # echo 0
00:03:42.555 13:07:56 -- setup/common.sh@33 -- # return 0
00:03:42.555 13:07:56 -- setup/hugepages.sh@99 -- # surp=0
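Note: the mem=("${mem[@]#Node +([0-9]) }") step that precedes every scan above exists because per-node meminfo files prefix each line with "Node <N> " (e.g. "Node 0 MemTotal: ..."), unlike /proc/meminfo; stripping that prefix with an extglob pattern lets one scanning loop serve both file formats. A standalone bash demonstration on hypothetical sample lines:

    #!/usr/bin/env bash
    shopt -s extglob                          # enables the +([0-9]) pattern

    # two hypothetical lines in /sys/devices/system/node/node0/meminfo format
    mem=('Node 0 MemTotal: 12237096 kB' 'Node 0 HugePages_Surp: 0')

    mem=("${mem[@]#Node +([0-9]) }")          # drop the leading "Node <N> "
    printf '%s\n' "${mem[@]}"
    # output:
    #   MemTotal: 12237096 kB
    #   HugePages_Surp: 0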
00:03:42.555 13:07:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:42.555 13:07:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:42.555 13:07:56 -- setup/common.sh@18 -- # local node=
00:03:42.555 13:07:56 -- setup/common.sh@19 -- # local var val
00:03:42.555 13:07:56 -- setup/common.sh@20 -- # local mem_f mem
00:03:42.555 13:07:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.555 13:07:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.555 13:07:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.555 13:07:56 -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.555 13:07:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.555 13:07:56 -- setup/common.sh@31 -- # IFS=': '
00:03:42.555 13:07:56 -- setup/common.sh@31 -- # read -r var val _
00:03:42.555 13:07:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7918088 kB' 'MemAvailable: 9475076 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 467164 kB' 'Inactive: 1422980 kB' 'Active(anon): 127956 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119044 kB' 'Mapped: 50516 kB' 'Shmem: 10492 kB' 'KReclaimable: 63452 kB' 'Slab: 161932 kB' 'SReclaimable: 63452 kB' 'SUnreclaim: 98480 kB' 'KernelStack: 6480 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 319140 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB'
[... xtrace elided: each field above is tested with [[ $var == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] and skipped via 'continue' until the match ...]
00:03:42.556 13:07:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:42.556 13:07:56 -- setup/common.sh@33 -- # echo 0
00:03:42.556 13:07:56 -- setup/common.sh@33 -- # return 0
00:03:42.556 13:07:56 -- setup/hugepages.sh@100 -- # resv=0
00:03:42.556 nr_hugepages=1024
00:03:42.556 13:07:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:42.556 resv_hugepages=0
00:03:42.556 surplus_hugepages=0
00:03:42.556 13:07:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:42.556 13:07:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:42.556 anon_hugepages=0
00:03:42.556 13:07:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:42.556 13:07:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:42.556 13:07:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
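Note: at this point verify_nr_hugepages has collected anon=0, surp=0 and resv=0, and the two arithmetic checks just above assert that the pool is exactly the 1024 pages the test asked for: HugePages_Total must equal nr_hugepages plus surplus plus reserved pages. A hedged bash sketch of the same consistency check against the live /proc/meminfo (the names and the awk extraction are illustrative, not SPDK code):

    nr_hugepages=1024      # requested by get_test_nr_hugepages
    surp=0 resv=0          # HugePages_Surp / HugePages_Rsvd read above

    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: HugePages_Total=$total"
    else
        echo "unexpected hugepage accounting: HugePages_Total=$total" >&2
        exit 1
    fi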
# read -r var val _ 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.557 13:07:56 -- 
setup/common.sh@32 -- # continue 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.557 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.557 13:07:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 
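A note on how to read these tests: the right-hand operand appears as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l because bash xtrace backslash-escapes a quoted operand of == inside [[ ]] to record that it is matched literally rather than as a glob. A minimal reproduction (hypothetical snippet, not part of the suite):

    get=HugePages_Total
    set -x
    [[ MemFree == "$get" ]]   # traced as: [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]

Each non-matching key falls through to 'continue', which is why every field of the snapshot above shows up once before the target key is reached.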
00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # continue 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.558 13:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.558 13:07:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.558 13:07:56 -- setup/common.sh@33 -- # echo 1024 00:03:42.558 13:07:56 -- setup/common.sh@33 -- # return 0 00:03:42.558 13:07:56 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.558 13:07:56 -- setup/hugepages.sh@112 -- # get_nodes 00:03:42.558 13:07:56 -- setup/hugepages.sh@27 -- # local node 00:03:42.558 13:07:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.558 13:07:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:42.558 13:07:56 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:42.558 13:07:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.558 13:07:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.558 13:07:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.559 13:07:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:42.559 13:07:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.559 13:07:57 -- setup/common.sh@18 -- # local node=0 
00:03:42.559 13:07:57 -- setup/common.sh@19 -- # local var val 00:03:42.559 13:07:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.559 13:07:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.559 13:07:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:42.559 13:07:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:42.559 13:07:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.559 13:07:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7918088 kB' 'MemUsed: 4319008 kB' 'SwapCached: 0 kB' 'Active: 465260 kB' 'Inactive: 1422980 kB' 'Active(anon): 126052 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1772680 kB' 'Mapped: 49668 kB' 'AnonPages: 117140 kB' 'Shmem: 10492 kB' 'KernelStack: 6420 kB' 'PageTables: 3440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63440 kB' 'Slab: 161664 kB' 'SReclaimable: 63440 kB' 'SUnreclaim: 98224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.559 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.559 13:07:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.560 13:07:57 -- setup/common.sh@32 -- 
# continue 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # continue 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.560 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.560 13:07:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.560 13:07:57 -- setup/common.sh@33 -- # echo 0 00:03:42.560 13:07:57 -- setup/common.sh@33 -- # return 0 00:03:42.560 13:07:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.560 13:07:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.560 13:07:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.560 node0=1024 expecting 1024 00:03:42.560 13:07:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.560 13:07:57 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:42.560 13:07:57 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:42.560 13:07:57 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:42.560 13:07:57 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:42.560 13:07:57 -- setup/hugepages.sh@202 -- # setup output 00:03:42.560 13:07:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.560 13:07:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:43.136 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:43.136 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:43.136 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:43.136 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:43.136 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:43.136 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:43.136 13:07:57 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:43.136 13:07:57 -- setup/hugepages.sh@89 -- # local node 00:03:43.136 13:07:57 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:43.136 13:07:57 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:43.136 13:07:57 -- setup/hugepages.sh@92 -- # local surp 00:03:43.136 13:07:57 -- setup/hugepages.sh@93 -- # local resv 00:03:43.136 13:07:57 -- setup/hugepages.sh@94 -- # local anon 00:03:43.136 13:07:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:43.136 13:07:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:43.136 13:07:57 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:43.136 13:07:57 -- setup/common.sh@18 -- # local node= 00:03:43.136 13:07:57 -- setup/common.sh@19 -- # local var val 00:03:43.136 13:07:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.136 13:07:57 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.136 13:07:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.136 13:07:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.136 13:07:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.136 13:07:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.136 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 13:07:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7922628 kB' 'MemAvailable: 9479608 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 466108 kB' 'Inactive: 1422980 kB' 'Active(anon): 126900 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118040 kB' 'Mapped: 50060 kB' 'Shmem: 10492 kB' 'KReclaimable: 63440 kB' 'Slab: 161548 kB' 'SReclaimable: 63440 kB' 'SUnreclaim: 98108 kB' 'KernelStack: 6572 kB' 'PageTables: 4000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 305360 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:03:43.136 13:07:57 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.136 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.136 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 13:07:57 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.136 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.136 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 13:07:57 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.136 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.136 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 13:07:57 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.136 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.136 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 13:07:57 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.136 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.136 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 13:07:57 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.136 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.136 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 13:07:57 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
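The guard at hugepages.sh@96 just above reads the kernel's transparent-hugepage switch: /sys/kernel/mm/transparent_hugepage/enabled prints e.g. 'always [madvise] never' with the active mode bracketed, so AnonHugePages is only worth counting when THP is not set to never. Roughly, as a sketch rather than the verbatim script:

    anon=0
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)                 # comes back 0 in this run
    fi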
00:03:43.136 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.136 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 13:07:57 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.136 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.136 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- 
setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.137 13:07:57 -- setup/common.sh@33 -- # echo 0 00:03:43.137 13:07:57 -- setup/common.sh@33 -- # return 0 00:03:43.137 13:07:57 -- setup/hugepages.sh@97 -- # anon=0 00:03:43.137 13:07:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:43.137 13:07:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.137 13:07:57 -- setup/common.sh@18 -- # local node= 00:03:43.137 13:07:57 -- setup/common.sh@19 -- # local var val 00:03:43.137 13:07:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.137 13:07:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.137 13:07:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.137 13:07:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.137 13:07:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.137 13:07:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7922880 kB' 'MemAvailable: 9479860 kB' 'Buffers: 3704 kB' 
'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 465568 kB' 'Inactive: 1422980 kB' 'Active(anon): 126360 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117500 kB' 'Mapped: 49900 kB' 'Shmem: 10492 kB' 'KReclaimable: 63440 kB' 'Slab: 161544 kB' 'SReclaimable: 63440 kB' 'SUnreclaim: 98104 kB' 'KernelStack: 6460 kB' 'PageTables: 3656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 305360 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.137 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 
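This snapshot feeds the bookkeeping seen earlier at hugepages.sh@99-@110 and repeated in this pass: surp and resv come from HugePages_Surp and HugePages_Rsvd, and the check succeeds when the kernel's HugePages_Total equals nr_hugepages + surp + resv, which with this run's values is 1024 == 1024 + 0 + 0. In sketch form, under the same assumptions as the get_meminfo sketch above:

    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo HugePages_Total)  # 1024
    (( total == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0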
00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 13:07:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.139 
13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.139 13:07:57 -- setup/common.sh@33 -- # echo 0 00:03:43.139 13:07:57 -- setup/common.sh@33 -- # return 0 00:03:43.139 13:07:57 -- setup/hugepages.sh@99 -- # surp=0 00:03:43.139 13:07:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:43.139 13:07:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:43.139 13:07:57 -- setup/common.sh@18 -- # local node= 00:03:43.139 13:07:57 -- setup/common.sh@19 -- # local var val 00:03:43.139 13:07:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.139 13:07:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.139 13:07:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.139 13:07:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.139 13:07:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.139 13:07:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7924180 kB' 'MemAvailable: 9481160 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 465288 kB' 'Inactive: 1422980 kB' 'Active(anon): 126080 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117200 kB' 'Mapped: 49668 kB' 'Shmem: 10492 kB' 'KReclaimable: 63440 kB' 'Slab: 161548 kB' 'SReclaimable: 63440 kB' 'SUnreclaim: 98108 kB' 'KernelStack: 6416 kB' 'PageTables: 3624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 305360 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 
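Once the system-wide numbers agree, the same count is checked per NUMA node: get_nodes globs /sys/devices/system/node/node+([0-9]) (a single node on this VM), each node entry is seeded with nr_hugepages, the node's surplus is folded in via the per-node get_meminfo call, and the 'node0=1024 expecting 1024' line seen earlier is printed. A condensed sketch (the traced script keeps separate nodes_sys and nodes_test arrays; this collapses them into one):

    shopt -s extglob nullglob
    declare -A nodes_test
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_test[${node##*node}]=$nr_hugepages      # node0 -> 1024 here
    done
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
        echo "node$node=${nodes_test[node]} expecting $nr_hugepages"
    done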
00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 
00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- 
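[Sketch -- what the traced loop above is doing. Illustrative shell, not the SPDK source; names are the sketch's own:]
get_meminfo_value() {
    # Echo the value of one /proc/meminfo field, e.g.
    #   get_meminfo_value HugePages_Total   -> 1024 in this run
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # One comparison per field -- exactly the [[ ... ]] / continue
        # pairs visible in the xtrace output.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}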
setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 13:07:57 -- setup/common.sh@33 -- # echo 0 00:03:43.140 13:07:57 -- setup/common.sh@33 -- # return 0 00:03:43.140 nr_hugepages=1024 00:03:43.140 resv_hugepages=0 00:03:43.140 13:07:57 -- setup/hugepages.sh@100 -- # resv=0 00:03:43.140 13:07:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:43.140 13:07:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:43.140 surplus_hugepages=0 00:03:43.140 anon_hugepages=0 00:03:43.140 13:07:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:43.140 13:07:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:43.140 13:07:57 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.140 13:07:57 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:43.140 13:07:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:43.140 13:07:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:43.140 13:07:57 -- setup/common.sh@18 -- # local node= 00:03:43.140 13:07:57 -- setup/common.sh@19 -- # local var val 00:03:43.140 13:07:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.140 13:07:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.140 13:07:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.140 13:07:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.140 13:07:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.140 13:07:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7924180 kB' 'MemAvailable: 9481160 kB' 'Buffers: 3704 kB' 'Cached: 1768976 kB' 'SwapCached: 0 kB' 'Active: 465548 kB' 'Inactive: 1422980 kB' 'Active(anon): 126340 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117460 kB' 'Mapped: 49668 kB' 'Shmem: 10492 kB' 'KReclaimable: 63440 kB' 'Slab: 161548 kB' 'SReclaimable: 63440 kB' 'SUnreclaim: 98108 kB' 'KernelStack: 6416 kB' 'PageTables: 3624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 305360 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.140 13:07:57 -- setup/common.sh@32 -- # continue 00:03:43.140 13:07:57 -- 
setup/common.sh@31 -- # IFS=': '
[scan continues, 00:03:43.140-00:03:43.142: MemAvailable through Unaccepted are each compared against HugePages_Total and skipped with continue; the repetitive trace is elided down to the match:]
00:03:43.142 13:07:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.142 13:07:57 -- setup/common.sh@33 -- # echo 1024 00:03:43.142 13:07:57 -- setup/common.sh@33 -- # return 0 00:03:43.142 13:07:57 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.142 13:07:57 -- setup/hugepages.sh@112 -- # get_nodes 00:03:43.142 13:07:57 -- setup/hugepages.sh@27 -- # local node 00:03:43.142 13:07:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.142 13:07:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:43.142 13:07:57 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:43.142 13:07:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.142 13:07:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.142 13:07:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.142 13:07:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:43.142 13:07:57 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:43.142 13:07:57 -- setup/common.sh@18 -- # local node=0 00:03:43.142 13:07:57 -- setup/common.sh@19 -- # local var val 00:03:43.142 13:07:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.142 13:07:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.142 13:07:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:43.142 13:07:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:43.142 13:07:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.142 13:07:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.142 13:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.142 13:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.142 13:07:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7924204 kB' 'MemUsed: 4312892 kB' 'SwapCached: 0 kB' 'Active: 465372 kB' 'Inactive: 1422980 kB' 'Active(anon): 126164 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1772680 kB' 'Mapped: 49668 kB' 'AnonPages: 117208 kB' 'Shmem: 10492 kB' 'KernelStack: 6368 kB' 'PageTables: 3488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63440 kB' 'Slab: 161540 kB' 'SReclaimable: 63440 kB' 'SUnreclaim: 98100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[per-key scan of the node0 fields for HugePages_Surp, 00:03:43.142-00:03:43.143: MemTotal through HugePages_Free are each compared and skipped with continue; the repetitive trace is elided. A sysfs-based way to read the same per-node numbers is sketched next; the trace then resumes at the HugePages_Surp match.]
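[Sketch -- the node-scoped query above re-parses /sys/devices/system/node/node0/meminfo; the same per-node counts are also exposed directly in sysfs. Illustrative shell over standard kernel paths, not SPDK code:]
for node in /sys/devices/system/node/node[0-9]*; do
    hp=$node/hugepages/hugepages-2048kB/nr_hugepages
    # Single-node VM in this run, so this prints "node0: 1024".
    [[ -r $hp ]] && echo "${node##*/}: $(< "$hp")"
done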
00:03:43.143 13:07:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.143 13:07:57 -- setup/common.sh@33 -- # echo 0 00:03:43.143 13:07:57 -- setup/common.sh@33 -- # return 0 00:03:43.143 13:07:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.143 13:07:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.143 13:07:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.143 13:07:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.143 13:07:57 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' node0=1024 expecting 1024 00:03:43.143 13:07:57 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:43.143 00:03:43.143 real 0m1.128s 00:03:43.143 user 0m0.482s 00:03:43.143 sys 0m0.698s 00:03:43.143 13:07:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:43.143 ************************************ 00:03:43.143 END TEST no_shrink_alloc 00:03:43.143 ************************************ 00:03:43.143 13:07:57 -- common/autotest_common.sh@10 -- # set +x 00:03:43.143 13:07:57 -- setup/hugepages.sh@217 -- # clear_hp 00:03:43.143 13:07:57 -- setup/hugepages.sh@37 -- # local node hp 00:03:43.143 13:07:57 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:43.143 13:07:57 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.143 13:07:57 -- setup/hugepages.sh@41 -- # echo 0 00:03:43.143 13:07:57 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.143 13:07:57 -- setup/hugepages.sh@41 -- # echo 0 00:03:43.143 13:07:57 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:43.143 13:07:57 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:43.143 00:03:43.143 real 0m5.328s 00:03:43.143 user 0m2.136s 00:03:43.143 sys 0m3.029s 00:03:43.143 13:07:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:43.143 ************************************ 00:03:43.143 END TEST hugepages 00:03:43.143 ************************************ 00:03:43.143 13:07:57 -- common/autotest_common.sh@10 -- # set +x 00:03:43.143 13:07:57 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:43.143 13:07:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:43.143 13:07:57 -- common/autotest_common.sh@1093 -- # xtrace_disable
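[Sketch -- clear_hp, traced just above, tears the hugepage reservation back down by zeroing every per-node count. Reduced to its effect; standard sysfs paths, needs root, not the SPDK source itself:]
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"   # drop this node's pool for that page size
    done
done
export CLEAR_HUGE=yes                 # ask the next setup.sh run to start clean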
13:07:57 -- common/autotest_common.sh@10 -- # set +x 00:03:43.143 ************************************ 00:03:43.143 START TEST driver 00:03:43.143 ************************************ 00:03:43.143 13:07:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:43.404 * Looking for test storage... 00:03:43.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:43.404 13:07:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:43.404 13:07:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:43.404 13:07:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:43.404 13:07:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:43.404 13:07:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:43.404 13:07:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:43.404 13:07:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:43.404 13:07:57 -- scripts/common.sh@335 -- # IFS=.-: 00:03:43.404 13:07:57 -- scripts/common.sh@335 -- # read -ra ver1 00:03:43.404 13:07:57 -- scripts/common.sh@336 -- # IFS=.-: 00:03:43.404 13:07:57 -- scripts/common.sh@336 -- # read -ra ver2 00:03:43.404 13:07:57 -- scripts/common.sh@337 -- # local 'op=<' 00:03:43.404 13:07:57 -- scripts/common.sh@339 -- # ver1_l=2 00:03:43.404 13:07:57 -- scripts/common.sh@340 -- # ver2_l=1 00:03:43.404 13:07:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:43.404 13:07:57 -- scripts/common.sh@343 -- # case "$op" in 00:03:43.404 13:07:57 -- scripts/common.sh@344 -- # : 1 00:03:43.404 13:07:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:43.404 13:07:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:43.404 13:07:57 -- scripts/common.sh@364 -- # decimal 1 00:03:43.404 13:07:57 -- scripts/common.sh@352 -- # local d=1 00:03:43.404 13:07:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:43.404 13:07:57 -- scripts/common.sh@354 -- # echo 1 00:03:43.404 13:07:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:43.404 13:07:57 -- scripts/common.sh@365 -- # decimal 2 00:03:43.404 13:07:57 -- scripts/common.sh@352 -- # local d=2 00:03:43.404 13:07:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:43.404 13:07:57 -- scripts/common.sh@354 -- # echo 2 00:03:43.404 13:07:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:43.405 13:07:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:43.405 13:07:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:43.405 13:07:57 -- scripts/common.sh@367 -- # return 0 00:03:43.405 13:07:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:43.405 13:07:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:43.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.405 --rc genhtml_branch_coverage=1 00:03:43.405 --rc genhtml_function_coverage=1 00:03:43.405 --rc genhtml_legend=1 00:03:43.405 --rc geninfo_all_blocks=1 00:03:43.405 --rc geninfo_unexecuted_blocks=1 00:03:43.405 00:03:43.405 ' 00:03:43.405 13:07:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:43.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.405 --rc genhtml_branch_coverage=1 00:03:43.405 --rc genhtml_function_coverage=1 00:03:43.405 --rc genhtml_legend=1 00:03:43.405 --rc geninfo_all_blocks=1 00:03:43.405 --rc geninfo_unexecuted_blocks=1 00:03:43.405 00:03:43.405 ' 00:03:43.405 13:07:57 -- common/autotest_common.sh@1704 -- # export 
'LCOV=lcov 00:03:43.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.405 --rc genhtml_branch_coverage=1 00:03:43.405 --rc genhtml_function_coverage=1 00:03:43.405 --rc genhtml_legend=1 00:03:43.405 --rc geninfo_all_blocks=1 00:03:43.405 --rc geninfo_unexecuted_blocks=1 00:03:43.405 00:03:43.405 ' 00:03:43.405 13:07:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:43.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.405 --rc genhtml_branch_coverage=1 00:03:43.405 --rc genhtml_function_coverage=1 00:03:43.405 --rc genhtml_legend=1 00:03:43.405 --rc geninfo_all_blocks=1 00:03:43.405 --rc geninfo_unexecuted_blocks=1 00:03:43.405 00:03:43.405 ' 00:03:43.405 13:07:57 -- setup/driver.sh@68 -- # setup reset 00:03:43.405 13:07:57 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:43.405 13:07:57 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:50.014 13:08:03 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:50.014 13:08:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:50.014 13:08:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:50.014 13:08:03 -- common/autotest_common.sh@10 -- # set +x 00:03:50.014 ************************************ 00:03:50.014 START TEST guess_driver 00:03:50.014 ************************************ 00:03:50.014 13:08:03 -- common/autotest_common.sh@1114 -- # guess_driver 00:03:50.014 13:08:03 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:50.014 13:08:03 -- setup/driver.sh@47 -- # local fail=0 00:03:50.014 13:08:03 -- setup/driver.sh@49 -- # pick_driver 00:03:50.014 13:08:03 -- setup/driver.sh@36 -- # vfio 00:03:50.014 13:08:03 -- setup/driver.sh@21 -- # local iommu_grups 00:03:50.014 13:08:03 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:50.014 13:08:03 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:50.014 13:08:03 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:50.014 13:08:03 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:50.014 13:08:03 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:50.014 13:08:03 -- setup/driver.sh@32 -- # return 1 00:03:50.014 13:08:03 -- setup/driver.sh@38 -- # uio 00:03:50.014 13:08:03 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:50.014 13:08:03 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:50.014 13:08:03 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:50.014 13:08:03 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:50.014 13:08:03 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:50.014 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:50.014 13:08:03 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:50.014 13:08:03 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:50.014 13:08:03 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:50.014 13:08:03 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:50.014 Looking for driver=uio_pci_generic 00:03:50.014 13:08:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.014 13:08:03 -- setup/driver.sh@45 -- # setup output config 00:03:50.014 13:08:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.014 13:08:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:50.014 
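[Sketch -- the guess_driver trace above settles on uio_pci_generic because this VM exposes no IOMMU groups. The decision, condensed into one illustrative function (the vfio module parameter and the modprobe --show-depends probe are the ones visible in the trace; the function itself is this sketch's own, not SPDK's):]
shopt -s nullglob   # an empty iommu_groups dir must count as zero matches
pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*) unsafe=
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic   # the branch taken in this run
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}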
13:08:04 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:50.014 13:08:04 -- setup/driver.sh@58 -- # continue 00:03:50.014 13:08:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.275 13:08:04 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.275 13:08:04 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:50.275 13:08:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.275 13:08:04 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.275 13:08:04 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:50.275 13:08:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.275 13:08:04 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.275 13:08:04 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:50.275 13:08:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.275 13:08:04 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.275 13:08:04 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:50.275 13:08:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.275 13:08:04 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:50.275 13:08:04 -- setup/driver.sh@65 -- # setup reset 00:03:50.275 13:08:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.275 13:08:04 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:56.863 00:03:56.863 real 0m6.867s 00:03:56.863 user 0m0.637s 00:03:56.863 sys 0m1.197s 00:03:56.863 ************************************ 00:03:56.863 13:08:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:56.863 13:08:10 -- common/autotest_common.sh@10 -- # set +x 00:03:56.863 END TEST guess_driver 00:03:56.863 ************************************ 00:03:56.863 ************************************ 00:03:56.863 END TEST driver 00:03:56.863 ************************************ 00:03:56.863 00:03:56.863 real 0m12.989s 00:03:56.863 user 0m1.010s 00:03:56.863 sys 0m1.995s 00:03:56.863 13:08:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:56.863 13:08:10 -- common/autotest_common.sh@10 -- # set +x 00:03:56.863 13:08:10 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:56.863 13:08:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:56.863 13:08:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:56.863 13:08:10 -- common/autotest_common.sh@10 -- # set +x 00:03:56.863 ************************************ 00:03:56.863 START TEST devices 00:03:56.863 ************************************ 00:03:56.863 13:08:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:56.863 * Looking for test storage... 
00:03:56.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
[lcov gate, 00:03:56.863: the cmp_versions trace and the LCOV_OPTS/LCOV exports here repeat the driver suite's opening byte for byte (lcov 1.15 < 2, so the branch-coverage options are exported); the duplicate trace is elided. The comparison it performs is sketched next.]
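[Sketch -- the field-by-field version comparison behind cmp_versions 1.15 '<' 2, reduced to an illustrative standalone function (this sketch's own names, not scripts/common.sh itself):]
version_lt() {
    # True when dotted version $1 sorts before $2, comparing numerically
    # field by field; missing fields count as 0, so 1.15 < 2 holds here.
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
        (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
# version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "old lcov"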
00:03:56.863 13:08:10 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:56.863 13:08:10 -- setup/devices.sh@192 -- # setup reset 00:03:56.863 13:08:10 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:56.863 13:08:10 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:57.436 13:08:11 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:57.436 13:08:11 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:57.436 13:08:11 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:57.436 13:08:11 -- common/autotest_common.sh@1665 -- # local nvme bdf
[zoned filter, 00:03:57.436: the loop reads /sys/block/<dev>/queue/zoned for nvme0c0n1, nvme0n1, nvme1n1, nvme1n2 and nvme1n3; every device reports 'none', so [[ none != none ]] never fires and nothing is excluded -- those five identical checks are elided. The trace resumes with the final two devices; a standalone sketch of the whole device filter follows the listing below.]
00:03:57.436 13:08:11 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:57.436 13:08:11 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:03:57.436 13:08:11 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:03:57.436 13:08:11 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 13:08:11 --
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:57.436 13:08:11 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:57.436 13:08:11 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:03:57.436 13:08:11 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:03:57.437 13:08:11 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:57.437 13:08:11 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:57.437 13:08:11 -- setup/devices.sh@196 -- # blocks=() 00:03:57.437 13:08:11 -- setup/devices.sh@196 -- # declare -a blocks 00:03:57.437 13:08:11 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:57.437 13:08:11 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:57.437 13:08:11 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:57.437 13:08:11 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:57.437 13:08:11 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:57.437 13:08:11 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:57.437 13:08:11 -- setup/devices.sh@202 -- # pci=0000:00:09.0 00:03:57.437 13:08:11 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:03:57.437 13:08:11 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:57.437 13:08:11 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:57.437 13:08:11 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:57.697 No valid GPT data, bailing 00:03:57.697 13:08:12 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:57.697 13:08:12 -- scripts/common.sh@393 -- # pt= 00:03:57.697 13:08:12 -- scripts/common.sh@394 -- # return 1 00:03:57.697 13:08:12 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:57.697 13:08:12 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:57.698 13:08:12 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:57.698 13:08:12 -- setup/common.sh@80 -- # echo 1073741824 00:03:57.698 13:08:12 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:03:57.698 13:08:12 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:57.698 13:08:12 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:57.698 13:08:12 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:57.698 13:08:12 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:03:57.698 13:08:12 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:57.698 13:08:12 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:57.698 13:08:12 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:03:57.698 13:08:12 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:57.698 No valid GPT data, bailing 00:03:57.698 13:08:12 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:57.698 13:08:12 -- scripts/common.sh@393 -- # pt= 00:03:57.698 13:08:12 -- scripts/common.sh@394 -- # return 1 00:03:57.698 13:08:12 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:57.698 13:08:12 -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:57.698 13:08:12 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:57.698 13:08:12 -- setup/common.sh@80 -- # echo 4294967296 00:03:57.698 13:08:12 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:57.698 13:08:12 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:57.698 13:08:12 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:03:57.698 13:08:12 -- setup/devices.sh@200 -- # 
for block in "/sys/block/nvme"!(*c*) 00:03:57.698 13:08:12 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:03:57.698 13:08:12 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:57.698 13:08:12 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:03:57.698 13:08:12 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:57.698 13:08:12 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:03:57.698 13:08:12 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:03:57.698 13:08:12 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:03:57.698 No valid GPT data, bailing 00:03:57.698 13:08:12 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:57.698 13:08:12 -- scripts/common.sh@393 -- # pt= 00:03:57.698 13:08:12 -- scripts/common.sh@394 -- # return 1 00:03:57.698 13:08:12 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:03:57.698 13:08:12 -- setup/common.sh@76 -- # local dev=nvme1n2 00:03:57.698 13:08:12 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:03:57.698 13:08:12 -- setup/common.sh@80 -- # echo 4294967296 00:03:57.698 13:08:12 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:57.698 13:08:12 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:57.698 13:08:12 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:03:57.698 13:08:12 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:57.698 13:08:12 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:03:57.698 13:08:12 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:57.698 13:08:12 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:03:57.698 13:08:12 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:57.698 13:08:12 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:03:57.698 13:08:12 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:03:57.698 13:08:12 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:03:57.698 No valid GPT data, bailing 00:03:57.698 13:08:12 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:57.959 13:08:12 -- scripts/common.sh@393 -- # pt= 00:03:57.959 13:08:12 -- scripts/common.sh@394 -- # return 1 00:03:57.959 13:08:12 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:03:57.959 13:08:12 -- setup/common.sh@76 -- # local dev=nvme1n3 00:03:57.959 13:08:12 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:03:57.959 13:08:12 -- setup/common.sh@80 -- # echo 4294967296 00:03:57.959 13:08:12 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:57.959 13:08:12 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:57.959 13:08:12 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:03:57.959 13:08:12 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:57.959 13:08:12 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:03:57.959 13:08:12 -- setup/devices.sh@201 -- # ctrl=nvme2 00:03:57.959 13:08:12 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:03:57.959 13:08:12 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:57.959 13:08:12 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:03:57.959 13:08:12 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:03:57.959 13:08:12 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:03:57.959 No valid GPT data, bailing 00:03:57.959 13:08:12 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:57.959 
13:08:12 -- scripts/common.sh@393 -- # pt= 00:03:57.959 13:08:12 -- scripts/common.sh@394 -- # return 1 00:03:57.959 13:08:12 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:03:57.959 13:08:12 -- setup/common.sh@76 -- # local dev=nvme2n1 00:03:57.959 13:08:12 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:03:57.959 13:08:12 -- setup/common.sh@80 -- # echo 6343335936 00:03:57.959 13:08:12 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:03:57.959 13:08:12 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:57.959 13:08:12 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:03:57.959 13:08:12 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:57.959 13:08:12 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:03:57.959 13:08:12 -- setup/devices.sh@201 -- # ctrl=nvme3 00:03:57.959 13:08:12 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:57.959 13:08:12 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:57.959 13:08:12 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:03:57.959 13:08:12 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:03:57.959 13:08:12 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:03:57.959 No valid GPT data, bailing 00:03:57.959 13:08:12 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:57.959 13:08:12 -- scripts/common.sh@393 -- # pt= 00:03:57.959 13:08:12 -- scripts/common.sh@394 -- # return 1 00:03:57.959 13:08:12 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:03:57.959 13:08:12 -- setup/common.sh@76 -- # local dev=nvme3n1 00:03:57.959 13:08:12 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:03:57.959 13:08:12 -- setup/common.sh@80 -- # echo 5368709120 00:03:57.959 13:08:12 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:57.959 13:08:12 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:57.959 13:08:12 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:57.959 13:08:12 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:03:57.959 13:08:12 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:03:57.959 13:08:12 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:57.959 13:08:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:57.959 13:08:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:57.959 13:08:12 -- common/autotest_common.sh@10 -- # set +x 00:03:57.959 ************************************ 00:03:57.959 START TEST nvme_mount 00:03:57.959 ************************************ 00:03:57.959 13:08:12 -- common/autotest_common.sh@1114 -- # nvme_mount 00:03:57.959 13:08:12 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:03:57.959 13:08:12 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:03:57.959 13:08:12 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:57.959 13:08:12 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:57.959 13:08:12 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:03:57.960 13:08:12 -- setup/common.sh@39 -- # local disk=nvme1n1 00:03:57.960 13:08:12 -- setup/common.sh@40 -- # local part_no=1 00:03:57.960 13:08:12 -- setup/common.sh@41 -- # local size=1073741824 00:03:57.960 13:08:12 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:57.960 13:08:12 -- setup/common.sh@44 -- # parts=() 00:03:57.960 13:08:12 -- 
setup/common.sh@44 -- # local parts 00:03:57.960 13:08:12 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:57.960 13:08:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:57.960 13:08:12 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:57.960 13:08:12 -- setup/common.sh@46 -- # (( part++ )) 00:03:57.960 13:08:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:57.960 13:08:12 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:57.960 13:08:12 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:03:57.960 13:08:12 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:03:59.349 Creating new GPT entries in memory. 00:03:59.349 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:59.349 other utilities. 00:03:59.349 13:08:13 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:59.349 13:08:13 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:59.349 13:08:13 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:59.349 13:08:13 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:59.349 13:08:13 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:00.298 Creating new GPT entries in memory. 00:04:00.298 The operation has completed successfully. 00:04:00.298 13:08:14 -- setup/common.sh@57 -- # (( part++ )) 00:04:00.298 13:08:14 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:00.298 13:08:14 -- setup/common.sh@62 -- # wait 53724 00:04:00.298 13:08:14 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:00.298 13:08:14 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:00.298 13:08:14 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:00.298 13:08:14 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:04:00.298 13:08:14 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:04:00.298 13:08:14 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:00.298 13:08:14 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:00.298 13:08:14 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:00.298 13:08:14 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:04:00.298 13:08:14 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:00.298 13:08:14 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:00.298 13:08:14 -- setup/devices.sh@53 -- # local found=0 00:04:00.298 13:08:14 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:00.298 13:08:14 -- setup/devices.sh@56 -- # : 00:04:00.298 13:08:14 -- setup/devices.sh@59 -- # local pci status 00:04:00.298 13:08:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.298 13:08:14 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:00.298 13:08:14 -- setup/devices.sh@47 -- # setup output config 00:04:00.298 13:08:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.298 13:08:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:00.298 13:08:14 -- setup/devices.sh@62 -- # [[ 
0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:00.298 13:08:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.562 13:08:14 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:00.562 13:08:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.562 13:08:15 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:00.562 13:08:15 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:04:00.562 13:08:15 -- setup/devices.sh@63 -- # found=1 00:04:00.562 13:08:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.562 13:08:15 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:00.562 13:08:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.822 13:08:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:00.822 13:08:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.822 13:08:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:00.822 13:08:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.083 13:08:15 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:01.083 13:08:15 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:01.083 13:08:15 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.083 13:08:15 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:01.083 13:08:15 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:01.083 13:08:15 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:01.083 13:08:15 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.083 13:08:15 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.083 13:08:15 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:01.083 13:08:15 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:01.083 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:01.083 13:08:15 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:01.083 13:08:15 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:01.344 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:01.344 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:01.344 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:01.344 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:01.344 13:08:15 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:01.344 13:08:15 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:01.344 13:08:15 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.344 13:08:15 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:04:01.344 13:08:15 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:04:01.344 13:08:15 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.344 13:08:15 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:01.345 13:08:15 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:01.345 13:08:15 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:04:01.345 13:08:15 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.345 13:08:15 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:01.345 13:08:15 -- setup/devices.sh@53 -- # local found=0 00:04:01.345 13:08:15 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:01.345 13:08:15 -- setup/devices.sh@56 -- # : 00:04:01.345 13:08:15 -- setup/devices.sh@59 -- # local pci status 00:04:01.345 13:08:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.345 13:08:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:01.345 13:08:15 -- setup/devices.sh@47 -- # setup output config 00:04:01.345 13:08:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.345 13:08:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:01.609 13:08:15 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:01.609 13:08:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.609 13:08:16 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:01.609 13:08:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.870 13:08:16 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:01.870 13:08:16 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:04:01.870 13:08:16 -- setup/devices.sh@63 -- # found=1 00:04:01.870 13:08:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.870 13:08:16 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:01.870 13:08:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.132 13:08:16 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:02.132 13:08:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.132 13:08:16 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:02.132 13:08:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.132 13:08:16 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:02.132 13:08:16 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:02.132 13:08:16 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:02.132 13:08:16 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:02.132 13:08:16 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:02.132 13:08:16 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:02.132 13:08:16 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:04:02.132 13:08:16 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:02.132 13:08:16 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:04:02.132 13:08:16 -- setup/devices.sh@50 -- # local mount_point= 00:04:02.132 13:08:16 -- setup/devices.sh@51 -- # local test_file= 00:04:02.132 13:08:16 -- setup/devices.sh@53 -- # local found=0 
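The repeated checks of the form [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] traced above are bash pattern matches whose right-hand side is the allowed PCI address with every character backslash-escaped, so the glob engine matches it literally instead of as a wildcard pattern. A minimal sketch of that technique, assuming a helper name (pci_matches) that is not part of setup/devices.sh:

    pci_matches() {
        local pci=$1 allowed=$2
        local esc
        # escape every character so the unquoted pattern matches literally
        esc=$(printf '%s' "$allowed" | sed 's/./\\&/g')
        [[ $pci == $esc ]]    # RHS deliberately unquoted: evaluated as a pattern
    }
    pci_matches 0000:00:08.0 0000:00:08.0 && echo match     # prints "match"
    pci_matches 0000:00:06.0 0000:00:08.0 || echo skipped   # prints "skipped"
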
00:04:02.132 13:08:16 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:02.132 13:08:16 -- setup/devices.sh@59 -- # local pci status 00:04:02.132 13:08:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.132 13:08:16 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:02.132 13:08:16 -- setup/devices.sh@47 -- # setup output config 00:04:02.132 13:08:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.132 13:08:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:02.393 13:08:16 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:02.393 13:08:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.393 13:08:16 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:02.393 13:08:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.654 13:08:17 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:02.654 13:08:17 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:04:02.654 13:08:17 -- setup/devices.sh@63 -- # found=1 00:04:02.654 13:08:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.654 13:08:17 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:02.654 13:08:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.916 13:08:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:02.916 13:08:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.916 13:08:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:02.916 13:08:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.176 13:08:17 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:03.176 13:08:17 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:03.176 13:08:17 -- setup/devices.sh@68 -- # return 0 00:04:03.176 13:08:17 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:03.176 13:08:17 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:03.176 13:08:17 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:03.176 13:08:17 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:03.176 13:08:17 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:03.176 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:03.176 00:04:03.176 real 0m5.121s 00:04:03.176 user 0m0.970s 00:04:03.176 sys 0m1.384s 00:04:03.176 13:08:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:03.176 ************************************ 00:04:03.176 END TEST nvme_mount 00:04:03.176 ************************************ 00:04:03.176 13:08:17 -- common/autotest_common.sh@10 -- # set +x 00:04:03.176 13:08:17 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:03.176 13:08:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:03.176 13:08:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:03.176 13:08:17 -- common/autotest_common.sh@10 -- # set +x 00:04:03.176 ************************************ 00:04:03.176 START TEST dm_mount 00:04:03.176 ************************************ 00:04:03.176 13:08:17 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:03.176 13:08:17 -- setup/devices.sh@144 -- # pv=nvme1n1 00:04:03.176 13:08:17 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:04:03.176 13:08:17 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:04:03.176 13:08:17 -- setup/devices.sh@148 -- # 
partition_drive nvme1n1 00:04:03.176 13:08:17 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:03.176 13:08:17 -- setup/common.sh@40 -- # local part_no=2 00:04:03.176 13:08:17 -- setup/common.sh@41 -- # local size=1073741824 00:04:03.176 13:08:17 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:03.176 13:08:17 -- setup/common.sh@44 -- # parts=() 00:04:03.176 13:08:17 -- setup/common.sh@44 -- # local parts 00:04:03.176 13:08:17 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:03.176 13:08:17 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.176 13:08:17 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:03.176 13:08:17 -- setup/common.sh@46 -- # (( part++ )) 00:04:03.176 13:08:17 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.176 13:08:17 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:03.176 13:08:17 -- setup/common.sh@46 -- # (( part++ )) 00:04:03.176 13:08:17 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.176 13:08:17 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:03.176 13:08:17 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:03.176 13:08:17 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:04:04.122 Creating new GPT entries in memory. 00:04:04.122 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:04.122 other utilities. 00:04:04.122 13:08:18 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:04.122 13:08:18 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:04.122 13:08:18 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:04.122 13:08:18 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:04.122 13:08:18 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:05.508 Creating new GPT entries in memory. 00:04:05.508 The operation has completed successfully. 00:04:05.508 13:08:19 -- setup/common.sh@57 -- # (( part++ )) 00:04:05.508 13:08:19 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:05.508 13:08:19 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:05.508 13:08:19 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:05.508 13:08:19 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:04:06.453 The operation has completed successfully. 
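The sgdisk ranges in this trace follow from the size arithmetic in setup/common.sh: size=1073741824 divided by 4096 gives 262144 sectors per partition, so partition 1 spans sectors 2048..264191 (2048 + 262144 - 1) and, as the next trace shows, partition 2 starts at 264192 and ends at 526335. A sketch reproducing the two calls, assuming 512-byte logical sectors (each partition is then 128 MiB); the partprobe line is an assumption for illustration, since the suite actually waits on kernel uevents via sync_dev_uevents.sh:

    disk=/dev/nvme1n1
    size=$(( 1073741824 / 4096 ))         # 262144 sectors per partition
    p1_start=2048
    p1_end=$(( p1_start + size - 1 ))     # 264191
    p2_start=$(( p1_end + 1 ))            # 264192
    p2_end=$(( p2_start + size - 1 ))     # 526335
    sgdisk "$disk" --zap-all              # wipe any existing GPT/MBR first
    flock "$disk" sgdisk "$disk" --new=1:${p1_start}:${p1_end}
    flock "$disk" sgdisk "$disk" --new=2:${p2_start}:${p2_end}
    partprobe "$disk"                     # assumption; see note above
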
00:04:06.453 13:08:20 -- setup/common.sh@57 -- # (( part++ )) 00:04:06.453 13:08:20 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:06.453 13:08:20 -- setup/common.sh@62 -- # wait 54352 00:04:06.453 13:08:20 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:06.453 13:08:20 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:06.453 13:08:20 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:06.453 13:08:20 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:06.453 13:08:20 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:06.453 13:08:20 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.453 13:08:20 -- setup/devices.sh@161 -- # break 00:04:06.453 13:08:20 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.453 13:08:20 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:06.453 13:08:20 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:06.453 13:08:20 -- setup/devices.sh@166 -- # dm=dm-0 00:04:06.453 13:08:20 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:04:06.453 13:08:20 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:04:06.453 13:08:20 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:06.453 13:08:20 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:06.453 13:08:20 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:06.453 13:08:20 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.453 13:08:20 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:06.453 13:08:20 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:06.453 13:08:20 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:06.453 13:08:20 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:06.453 13:08:20 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:04:06.453 13:08:20 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:06.453 13:08:20 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:06.453 13:08:20 -- setup/devices.sh@53 -- # local found=0 00:04:06.453 13:08:20 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:06.453 13:08:20 -- setup/devices.sh@56 -- # : 00:04:06.453 13:08:20 -- setup/devices.sh@59 -- # local pci status 00:04:06.453 13:08:20 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:06.453 13:08:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.453 13:08:20 -- setup/devices.sh@47 -- # setup output config 00:04:06.453 13:08:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.453 13:08:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:06.714 13:08:21 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:06.714 13:08:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.714 13:08:21 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:06.714 13:08:21 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.973 13:08:21 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:06.973 13:08:21 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:06.973 13:08:21 -- setup/devices.sh@63 -- # found=1 00:04:06.973 13:08:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.973 13:08:21 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:06.973 13:08:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.236 13:08:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:07.236 13:08:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.236 13:08:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:07.236 13:08:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.236 13:08:21 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:07.236 13:08:21 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:07.236 13:08:21 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:07.236 13:08:21 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:07.236 13:08:21 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:07.236 13:08:21 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:07.236 13:08:21 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:04:07.236 13:08:21 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:07.236 13:08:21 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:04:07.236 13:08:21 -- setup/devices.sh@50 -- # local mount_point= 00:04:07.236 13:08:21 -- setup/devices.sh@51 -- # local test_file= 00:04:07.236 13:08:21 -- setup/devices.sh@53 -- # local found=0 00:04:07.236 13:08:21 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:07.236 13:08:21 -- setup/devices.sh@59 -- # local pci status 00:04:07.236 13:08:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.236 13:08:21 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:07.236 13:08:21 -- setup/devices.sh@47 -- # setup output config 00:04:07.236 13:08:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.236 13:08:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:07.236 13:08:21 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:07.236 13:08:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.494 13:08:21 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:07.494 13:08:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.755 13:08:22 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:07.755 13:08:22 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:04:07.755 13:08:22 -- setup/devices.sh@63 -- # found=1 00:04:07.755 13:08:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.755 13:08:22 -- setup/devices.sh@62 -- 
# [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:07.755 13:08:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.755 13:08:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:07.755 13:08:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.755 13:08:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:07.755 13:08:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.014 13:08:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:08.014 13:08:22 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:08.014 13:08:22 -- setup/devices.sh@68 -- # return 0 00:04:08.014 13:08:22 -- setup/devices.sh@187 -- # cleanup_dm 00:04:08.014 13:08:22 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:08.014 13:08:22 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:08.014 13:08:22 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:08.014 13:08:22 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:08.014 13:08:22 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:04:08.014 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:08.014 13:08:22 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:08.014 13:08:22 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:04:08.014 00:04:08.014 real 0m4.766s 00:04:08.014 user 0m0.589s 00:04:08.014 sys 0m0.898s 00:04:08.014 13:08:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:08.014 13:08:22 -- common/autotest_common.sh@10 -- # set +x 00:04:08.014 ************************************ 00:04:08.014 END TEST dm_mount 00:04:08.014 ************************************ 00:04:08.014 13:08:22 -- setup/devices.sh@1 -- # cleanup 00:04:08.014 13:08:22 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:08.014 13:08:22 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.014 13:08:22 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:08.014 13:08:22 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:08.014 13:08:22 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:08.014 13:08:22 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:08.273 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:08.273 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:08.273 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:08.273 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:08.273 13:08:22 -- setup/devices.sh@12 -- # cleanup_dm 00:04:08.273 13:08:22 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:08.273 13:08:22 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:08.273 13:08:22 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:08.273 13:08:22 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:08.273 13:08:22 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:04:08.273 13:08:22 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:04:08.273 00:04:08.273 real 0m11.967s 00:04:08.273 user 0m2.354s 00:04:08.273 sys 0m2.999s 00:04:08.273 13:08:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:08.273 13:08:22 -- common/autotest_common.sh@10 -- # set +x 00:04:08.273 ************************************ 00:04:08.273 END TEST devices 00:04:08.273 
************************************ 00:04:08.273 00:04:08.273 real 0m41.776s 00:04:08.273 user 0m7.857s 00:04:08.273 sys 0m11.485s 00:04:08.273 13:08:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:08.273 13:08:22 -- common/autotest_common.sh@10 -- # set +x 00:04:08.273 ************************************ 00:04:08.273 END TEST setup.sh 00:04:08.273 ************************************ 00:04:08.273 13:08:22 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:08.531 Hugepages 00:04:08.531 node hugesize free / total 00:04:08.531 node0 1048576kB 0 / 0 00:04:08.531 node0 2048kB 2048 / 2048 00:04:08.531 00:04:08.531 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:08.531 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:08.531 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:08.531 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:08.789 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:08.789 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:08.789 13:08:23 -- spdk/autotest.sh@128 -- # uname -s 00:04:08.789 13:08:23 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:08.789 13:08:23 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:08.789 13:08:23 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:09.730 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:09.730 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.730 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.730 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.730 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.730 13:08:24 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:10.678 13:08:25 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:10.678 13:08:25 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:10.678 13:08:25 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:10.678 13:08:25 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:10.678 13:08:25 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:10.678 13:08:25 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:10.678 13:08:25 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:10.678 13:08:25 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:10.678 13:08:25 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:10.943 13:08:25 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:04:10.943 13:08:25 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:10.943 13:08:25 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:11.204 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.465 Waiting for block devices as requested 00:04:11.465 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:04:11.465 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:04:11.465 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:11.465 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:04:16.730 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:04:16.730 13:08:31 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 
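The loop entered here walks the bdfs array that get_nvme_bdfs populated a few lines earlier by parsing the JSON emitted by scripts/gen_nvme.sh with jq. Condensed from the traced lines, with paths as in this run:

    rootdir=/home/vagrant/spdk_repo/spdk
    # every controller's PCI address sits at .config[].params.traddr
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1    # the suite asserts a non-empty list
    printf '%s\n' "${bdfs[@]}"         # 0000:00:06.0 ... 0000:00:09.0 in this run
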
00:04:16.730 13:08:31 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:16.730 13:08:31 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:16.730 13:08:31 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:04:16.730 13:08:31 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:16.730 13:08:31 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:04:16.730 13:08:31 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:16.730 13:08:31 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme2 00:04:16.730 13:08:31 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme2 00:04:16.730 13:08:31 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme2 ]] 00:04:16.730 13:08:31 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:16.730 13:08:31 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:16.730 13:08:31 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:16.730 13:08:31 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:16.730 13:08:31 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:16.730 13:08:31 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:16.730 13:08:31 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme2 00:04:16.730 13:08:31 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:16.730 13:08:31 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:16.730 13:08:31 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:16.730 13:08:31 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:16.730 13:08:31 -- common/autotest_common.sh@1552 -- # continue 00:04:16.730 13:08:31 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:16.730 13:08:31 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:16.730 13:08:31 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:04:16.730 13:08:31 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:16.730 13:08:31 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:16.730 13:08:31 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:04:16.730 13:08:31 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:16.730 13:08:31 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme3 00:04:16.730 13:08:31 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme3 00:04:16.731 13:08:31 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme3 ]] 00:04:16.731 13:08:31 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:16.731 13:08:31 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:16.731 13:08:31 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:16.731 13:08:31 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:16.731 13:08:31 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:16.731 13:08:31 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:16.731 13:08:31 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme3 00:04:16.731 13:08:31 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:16.731 13:08:31 -- 
common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:16.731 13:08:31 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:16.731 13:08:31 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:16.731 13:08:31 -- common/autotest_common.sh@1552 -- # continue 00:04:16.731 13:08:31 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:16.731 13:08:31 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:04:16.731 13:08:31 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:16.731 13:08:31 -- common/autotest_common.sh@1497 -- # grep 0000:00:08.0/nvme/nvme 00:04:16.731 13:08:31 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:16.731 13:08:31 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:04:16.731 13:08:31 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:16.731 13:08:31 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:04:16.731 13:08:31 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:04:16.731 13:08:31 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:04:16.731 13:08:31 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:16.731 13:08:31 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:16.731 13:08:31 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:16.731 13:08:31 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:16.731 13:08:31 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:16.731 13:08:31 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:16.731 13:08:31 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:04:16.731 13:08:31 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:16.731 13:08:31 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:16.731 13:08:31 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:16.731 13:08:31 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:16.731 13:08:31 -- common/autotest_common.sh@1552 -- # continue 00:04:16.731 13:08:31 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:16.731 13:08:31 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:04:16.731 13:08:31 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:16.731 13:08:31 -- common/autotest_common.sh@1497 -- # grep 0000:00:09.0/nvme/nvme 00:04:16.731 13:08:31 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:16.731 13:08:31 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:04:16.731 13:08:31 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:16.731 13:08:31 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:16.731 13:08:31 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:16.731 13:08:31 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:16.731 13:08:31 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:16.731 13:08:31 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:16.731 13:08:31 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:16.731 13:08:31 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 
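Each pass of this loop extracts the OACS (Optional Admin Command Support) word with nvme id-ctrl and masks bit 3 (0x8, namespace management): 0x12a & 0x8 = 8, so every controller in this run reports support, and the loop then confirms unvmcap is 0 before continuing. A condensed sketch of one iteration; the controller path is just an example from this log:

    ctrl=/dev/nvme2
    oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)    # " 0x12a" here
    oacs_ns_manage=$(( oacs & 0x8 ))                          # bit 3 of OACS
    if (( oacs_ns_manage != 0 )); then
        unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && echo "$ctrl: ns-manage supported, nothing unallocated"
    fi
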
00:04:16.731 13:08:31 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:16.731 13:08:31 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:16.731 13:08:31 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:16.731 13:08:31 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:16.731 13:08:31 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:16.731 13:08:31 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:16.731 13:08:31 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:16.731 13:08:31 -- common/autotest_common.sh@1552 -- # continue 00:04:16.731 13:08:31 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:16.731 13:08:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:16.731 13:08:31 -- common/autotest_common.sh@10 -- # set +x 00:04:16.731 13:08:31 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:16.731 13:08:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:16.731 13:08:31 -- common/autotest_common.sh@10 -- # set +x 00:04:16.731 13:08:31 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:17.665 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:17.665 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:17.665 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:17.665 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:17.665 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:17.923 13:08:32 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:04:17.923 13:08:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:17.923 13:08:32 -- common/autotest_common.sh@10 -- # set +x 00:04:17.923 13:08:32 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:04:17.923 13:08:32 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:04:17.923 13:08:32 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:04:17.923 13:08:32 -- common/autotest_common.sh@1572 -- # bdfs=() 00:04:17.923 13:08:32 -- common/autotest_common.sh@1572 -- # local bdfs 00:04:17.923 13:08:32 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:04:17.923 13:08:32 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:17.923 13:08:32 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:17.923 13:08:32 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:17.923 13:08:32 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:17.923 13:08:32 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:17.923 13:08:32 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:04:17.923 13:08:32 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:17.923 13:08:32 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:17.923 13:08:32 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:17.923 13:08:32 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:17.923 13:08:32 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:17.924 13:08:32 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:17.924 13:08:32 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:04:17.924 13:08:32 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:17.924 13:08:32 -- common/autotest_common.sh@1576 -- # [[ 
0x0010 == \0\x\0\a\5\4 ]] 00:04:17.924 13:08:32 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:17.924 13:08:32 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:04:17.924 13:08:32 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:17.924 13:08:32 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:17.924 13:08:32 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:17.924 13:08:32 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:04:17.924 13:08:32 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:17.924 13:08:32 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:17.924 13:08:32 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:04:17.924 13:08:32 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:04:17.924 13:08:32 -- common/autotest_common.sh@1588 -- # return 0 00:04:17.924 13:08:32 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:04:17.924 13:08:32 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:04:17.924 13:08:32 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:17.924 13:08:32 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:17.924 13:08:32 -- spdk/autotest.sh@160 -- # timing_enter lib 00:04:17.924 13:08:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:17.924 13:08:32 -- common/autotest_common.sh@10 -- # set +x 00:04:17.924 13:08:32 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:17.924 13:08:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:17.924 13:08:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:17.924 13:08:32 -- common/autotest_common.sh@10 -- # set +x 00:04:17.924 ************************************ 00:04:17.924 START TEST env 00:04:17.924 ************************************ 00:04:17.924 13:08:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:17.924 * Looking for test storage... 00:04:17.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:17.924 13:08:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:17.924 13:08:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:17.924 13:08:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:17.924 13:08:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:17.924 13:08:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:17.924 13:08:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:17.924 13:08:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:17.924 13:08:32 -- scripts/common.sh@335 -- # IFS=.-: 00:04:17.924 13:08:32 -- scripts/common.sh@335 -- # read -ra ver1 00:04:17.924 13:08:32 -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.924 13:08:32 -- scripts/common.sh@336 -- # read -ra ver2 00:04:17.924 13:08:32 -- scripts/common.sh@337 -- # local 'op=<' 00:04:17.924 13:08:32 -- scripts/common.sh@339 -- # ver1_l=2 00:04:17.924 13:08:32 -- scripts/common.sh@340 -- # ver2_l=1 00:04:17.924 13:08:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:17.924 13:08:32 -- scripts/common.sh@343 -- # case "$op" in 00:04:17.924 13:08:32 -- scripts/common.sh@344 -- # : 1 00:04:17.924 13:08:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:17.924 13:08:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:17.924 13:08:32 -- scripts/common.sh@364 -- # decimal 1 00:04:17.924 13:08:32 -- scripts/common.sh@352 -- # local d=1 00:04:17.924 13:08:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.924 13:08:32 -- scripts/common.sh@354 -- # echo 1 00:04:17.924 13:08:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:17.924 13:08:32 -- scripts/common.sh@365 -- # decimal 2 00:04:17.924 13:08:32 -- scripts/common.sh@352 -- # local d=2 00:04:17.924 13:08:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.924 13:08:32 -- scripts/common.sh@354 -- # echo 2 00:04:17.924 13:08:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:17.924 13:08:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:17.924 13:08:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:17.924 13:08:32 -- scripts/common.sh@367 -- # return 0 00:04:17.924 13:08:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.924 13:08:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:17.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.924 --rc genhtml_branch_coverage=1 00:04:17.924 --rc genhtml_function_coverage=1 00:04:17.924 --rc genhtml_legend=1 00:04:17.924 --rc geninfo_all_blocks=1 00:04:17.924 --rc geninfo_unexecuted_blocks=1 00:04:17.924 00:04:17.924 ' 00:04:17.924 13:08:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:17.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.924 --rc genhtml_branch_coverage=1 00:04:17.924 --rc genhtml_function_coverage=1 00:04:17.924 --rc genhtml_legend=1 00:04:17.924 --rc geninfo_all_blocks=1 00:04:17.924 --rc geninfo_unexecuted_blocks=1 00:04:17.924 00:04:17.924 ' 00:04:17.924 13:08:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:17.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.924 --rc genhtml_branch_coverage=1 00:04:17.924 --rc genhtml_function_coverage=1 00:04:17.924 --rc genhtml_legend=1 00:04:17.924 --rc geninfo_all_blocks=1 00:04:17.924 --rc geninfo_unexecuted_blocks=1 00:04:17.924 00:04:17.924 ' 00:04:17.924 13:08:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:17.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.924 --rc genhtml_branch_coverage=1 00:04:17.924 --rc genhtml_function_coverage=1 00:04:17.924 --rc genhtml_legend=1 00:04:17.924 --rc geninfo_all_blocks=1 00:04:17.924 --rc geninfo_unexecuted_blocks=1 00:04:17.924 00:04:17.924 ' 00:04:17.924 13:08:32 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:17.924 13:08:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:17.924 13:08:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:17.924 13:08:32 -- common/autotest_common.sh@10 -- # set +x 00:04:18.182 ************************************ 00:04:18.182 START TEST env_memory 00:04:18.182 ************************************ 00:04:18.182 13:08:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:18.182 00:04:18.182 00:04:18.182 CUnit - A unit testing framework for C - Version 2.1-3 00:04:18.182 http://cunit.sourceforge.net/ 00:04:18.182 00:04:18.182 00:04:18.182 Suite: memory 00:04:18.182 Test: alloc and free memory map ...[2024-12-16 13:08:32.552953] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:18.182 passed 00:04:18.182 Test: mem 
map translation ...[2024-12-16 13:08:32.591601] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:18.182 [2024-12-16 13:08:32.591642] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:18.182 [2024-12-16 13:08:32.591716] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:18.182 [2024-12-16 13:08:32.591730] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:18.183 passed 00:04:18.183 Test: mem map registration ...[2024-12-16 13:08:32.659744] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:18.183 [2024-12-16 13:08:32.659773] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:18.183 passed 00:04:18.183 Test: mem map adjacent registrations ...passed 00:04:18.183 00:04:18.183 Run Summary: Type Total Ran Passed Failed Inactive 00:04:18.183 suites 1 1 n/a 0 0 00:04:18.183 tests 4 4 4 0 0 00:04:18.183 asserts 152 152 152 0 n/a 00:04:18.183 00:04:18.183 Elapsed time = 0.233 seconds 00:04:18.441 00:04:18.441 real 0m0.267s 00:04:18.441 user 0m0.241s 00:04:18.441 sys 0m0.017s 00:04:18.441 13:08:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:18.441 13:08:32 -- common/autotest_common.sh@10 -- # set +x 00:04:18.441 ************************************ 00:04:18.441 END TEST env_memory 00:04:18.441 ************************************ 00:04:18.441 13:08:32 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:18.441 13:08:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:18.441 13:08:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:18.441 13:08:32 -- common/autotest_common.sh@10 -- # set +x 00:04:18.441 ************************************ 00:04:18.441 START TEST env_vtophys 00:04:18.441 ************************************ 00:04:18.441 13:08:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:18.441 EAL: lib.eal log level changed from notice to debug 00:04:18.441 EAL: Detected lcore 0 as core 0 on socket 0 00:04:18.441 EAL: Detected lcore 1 as core 0 on socket 0 00:04:18.441 EAL: Detected lcore 2 as core 0 on socket 0 00:04:18.441 EAL: Detected lcore 3 as core 0 on socket 0 00:04:18.441 EAL: Detected lcore 4 as core 0 on socket 0 00:04:18.441 EAL: Detected lcore 5 as core 0 on socket 0 00:04:18.441 EAL: Detected lcore 6 as core 0 on socket 0 00:04:18.441 EAL: Detected lcore 7 as core 0 on socket 0 00:04:18.441 EAL: Detected lcore 8 as core 0 on socket 0 00:04:18.441 EAL: Detected lcore 9 as core 0 on socket 0 00:04:18.441 EAL: Maximum logical cores by configuration: 128 00:04:18.441 EAL: Detected CPU lcores: 10 00:04:18.441 EAL: Detected NUMA nodes: 1 00:04:18.441 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:18.441 EAL: Detected shared linkage of DPDK 00:04:18.441 EAL: No shared files mode enabled, IPC will be disabled 00:04:18.441 EAL: Selected IOVA mode 'PA' 00:04:18.441 EAL: Probing VFIO support... 
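The env_memory suite above exercises SPDK's memory-map API from lib/env_dpdk/memory.c, and its *ERROR* lines are expected: the test deliberately feeds spdk_mem_map_set_translation() and spdk_mem_register() addresses and lengths that are not 2 MiB multiples (vaddr=1234, len=1234), plus a virtual address beyond the usermode range, to confirm each is rejected. A minimal sketch of the valid path, assuming an SPDK build with spdk/env.h on the include path; the app name, addresses, and translation value are illustrative, not taken from the test source:

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    int
    main(void)
    {
            struct spdk_env_opts opts;
            struct spdk_mem_map *map;
            uint64_t vaddr = 0x200000000000ULL; /* illustrative, 2 MiB aligned */
            uint64_t size = 0x200000;           /* exactly one 2 MiB page */
            uint64_t len = size;

            spdk_env_opts_init(&opts);
            opts.name = "mem_map_sketch";       /* hypothetical app name */
            if (spdk_env_init(&opts) < 0) {
                    return 1;
            }

            /* NULL ops: no notify callback, unlike the failing-notify case
             * the test uses to trigger "Initial mem_map notify failed". */
            map = spdk_mem_map_alloc(0, NULL, NULL);
            if (map == NULL) {
                    return 1;
            }

            /* vaddr and size must both be 2 MiB multiples, or this returns
             * the "invalid spdk_mem_map_set_translation parameters" error. */
            if (spdk_mem_map_set_translation(map, vaddr, size, 0x1000) == 0) {
                    printf("translated to 0x%" PRIx64 "\n",
                           spdk_mem_map_translate(map, vaddr, &len));
            }

            spdk_mem_map_free(&map);
            return 0;
    }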
00:04:18.441 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:18.441 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:18.441 EAL: Ask a virtual area of 0x2e000 bytes 00:04:18.441 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:18.441 EAL: Setting up physically contiguous memory... 00:04:18.441 EAL: Setting maximum number of open files to 524288 00:04:18.441 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:18.441 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:18.441 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.441 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:18.441 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.441 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.441 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:18.441 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:18.441 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.441 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:18.441 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.441 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.441 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:18.441 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:18.441 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.441 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:18.441 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.441 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.441 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:18.441 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:18.441 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.441 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:18.441 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.442 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.442 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:18.442 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:18.442 EAL: Hugepages will be freed exactly as allocated. 00:04:18.442 EAL: No shared files mode enabled, IPC is disabled 00:04:18.442 EAL: No shared files mode enabled, IPC is disabled 00:04:18.442 EAL: TSC frequency is ~2600000 KHz 00:04:18.442 EAL: Main lcore 0 is ready (tid=7f47920f8a40;cpuset=[0]) 00:04:18.442 EAL: Trying to obtain current memory policy. 00:04:18.442 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.442 EAL: Restoring previous memory policy: 0 00:04:18.442 EAL: request: mp_malloc_sync 00:04:18.442 EAL: No shared files mode enabled, IPC is disabled 00:04:18.442 EAL: Heap on socket 0 was expanded by 2MB 00:04:18.442 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:18.442 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:18.442 EAL: Mem event callback 'spdk:(nil)' registered 00:04:18.442 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:18.442 00:04:18.442 00:04:18.442 CUnit - A unit testing framework for C - Version 2.1-3 00:04:18.442 http://cunit.sourceforge.net/ 00:04:18.442 00:04:18.442 00:04:18.442 Suite: components_suite 00:04:18.700 Test: vtophys_malloc_test ...passed 00:04:18.700 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
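The EAL lines above show DPDK reserving the virtual address ranges that back those translations: four memseg lists of 8192 two-MiB pages each (4 x 0x400000000 bytes of reserved VA) on socket 0, after which the vtophys suite checks virtual-to-physical resolution. A sketch of what vtophys means in application code, using the public spdk_dma_zmalloc()/spdk_vtophys() pair from spdk/env.h and assuming the env is already initialized as in the previous sketch:

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    /* Resolve the physical address of a DMA-safe buffer; this is the
     * behavior vtophys_malloc_test verifies across sizes/alignments. */
    static int
    vtophys_sketch(void)
    {
            void *buf;
            uint64_t paddr;

            buf = spdk_dma_zmalloc(4096, 4096, NULL); /* hugepage-backed */
            if (buf == NULL) {
                    return -1;
            }

            paddr = spdk_vtophys(buf, NULL);
            if (paddr == SPDK_VTOPHYS_ERROR) {
                    spdk_dma_free(buf);
                    return -1;
            }

            printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);
            spdk_dma_free(buf);
            return 0;
    }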
00:04:18.700 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.700 EAL: Restoring previous memory policy: 4 00:04:18.700 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.700 EAL: request: mp_malloc_sync 00:04:18.700 EAL: No shared files mode enabled, IPC is disabled 00:04:18.700 EAL: Heap on socket 0 was expanded by 4MB 00:04:18.700 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.700 EAL: request: mp_malloc_sync 00:04:18.700 EAL: No shared files mode enabled, IPC is disabled 00:04:18.700 EAL: Heap on socket 0 was shrunk by 4MB 00:04:18.958 EAL: Trying to obtain current memory policy. 00:04:18.958 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.958 EAL: Restoring previous memory policy: 4 00:04:18.958 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.958 EAL: request: mp_malloc_sync 00:04:18.958 EAL: No shared files mode enabled, IPC is disabled 00:04:18.958 EAL: Heap on socket 0 was expanded by 6MB 00:04:18.958 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.958 EAL: request: mp_malloc_sync 00:04:18.958 EAL: No shared files mode enabled, IPC is disabled 00:04:18.958 EAL: Heap on socket 0 was shrunk by 6MB 00:04:18.958 EAL: Trying to obtain current memory policy. 00:04:18.958 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.958 EAL: Restoring previous memory policy: 4 00:04:18.958 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.958 EAL: request: mp_malloc_sync 00:04:18.958 EAL: No shared files mode enabled, IPC is disabled 00:04:18.958 EAL: Heap on socket 0 was expanded by 10MB 00:04:18.958 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.958 EAL: request: mp_malloc_sync 00:04:18.958 EAL: No shared files mode enabled, IPC is disabled 00:04:18.958 EAL: Heap on socket 0 was shrunk by 10MB 00:04:18.958 EAL: Trying to obtain current memory policy. 00:04:18.958 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.958 EAL: Restoring previous memory policy: 4 00:04:18.958 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.958 EAL: request: mp_malloc_sync 00:04:18.958 EAL: No shared files mode enabled, IPC is disabled 00:04:18.959 EAL: Heap on socket 0 was expanded by 18MB 00:04:18.959 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.959 EAL: request: mp_malloc_sync 00:04:18.959 EAL: No shared files mode enabled, IPC is disabled 00:04:18.959 EAL: Heap on socket 0 was shrunk by 18MB 00:04:18.959 EAL: Trying to obtain current memory policy. 00:04:18.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.959 EAL: Restoring previous memory policy: 4 00:04:18.959 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.959 EAL: request: mp_malloc_sync 00:04:18.959 EAL: No shared files mode enabled, IPC is disabled 00:04:18.959 EAL: Heap on socket 0 was expanded by 34MB 00:04:18.959 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.959 EAL: request: mp_malloc_sync 00:04:18.959 EAL: No shared files mode enabled, IPC is disabled 00:04:18.959 EAL: Heap on socket 0 was shrunk by 34MB 00:04:18.959 EAL: Trying to obtain current memory policy. 
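Each expand/shrink pair above is one allocation cycle of vtophys_spdk_malloc_test, and the pattern continues below through 1026MB: the requested sizes double (2 MiB, 4 MiB, 8 MiB, ...), each allocation fires the registered 'spdk:(nil)' mem event callback to grow the DPDK heap, and each free shrinks it again ("Hugepages will be freed exactly as allocated"). The roughly 2 MiB of extra growth per step looks like allocator metadata/alignment overhead, though that is an inference from the numbers, not from the test source. An approximate sketch of the loop:

    #include "spdk/env.h"

    /* Doubling allocate/free loop matching the expanded-by/shrunk-by
     * pairs in the log (approximate; not the literal test source). */
    static void
    doubling_alloc_sketch(void)
    {
            size_t size;

            for (size = 2 * 1024 * 1024; size <= 1024 * 1024 * 1024; size *= 2) {
                    void *buf = spdk_dma_zmalloc(size, 0x200000, NULL);

                    if (buf == NULL) {
                            break; /* heap could not grow any further */
                    }
                    spdk_dma_free(buf); /* heap shrinks back immediately */
            }
    }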
00:04:18.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.959 EAL: Restoring previous memory policy: 4 00:04:18.959 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.959 EAL: request: mp_malloc_sync 00:04:18.959 EAL: No shared files mode enabled, IPC is disabled 00:04:18.959 EAL: Heap on socket 0 was expanded by 66MB 00:04:18.959 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.959 EAL: request: mp_malloc_sync 00:04:18.959 EAL: No shared files mode enabled, IPC is disabled 00:04:18.959 EAL: Heap on socket 0 was shrunk by 66MB 00:04:19.217 EAL: Trying to obtain current memory policy. 00:04:19.217 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.217 EAL: Restoring previous memory policy: 4 00:04:19.217 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.217 EAL: request: mp_malloc_sync 00:04:19.217 EAL: No shared files mode enabled, IPC is disabled 00:04:19.217 EAL: Heap on socket 0 was expanded by 130MB 00:04:19.217 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.217 EAL: request: mp_malloc_sync 00:04:19.217 EAL: No shared files mode enabled, IPC is disabled 00:04:19.217 EAL: Heap on socket 0 was shrunk by 130MB 00:04:19.476 EAL: Trying to obtain current memory policy. 00:04:19.476 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.476 EAL: Restoring previous memory policy: 4 00:04:19.476 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.476 EAL: request: mp_malloc_sync 00:04:19.476 EAL: No shared files mode enabled, IPC is disabled 00:04:19.476 EAL: Heap on socket 0 was expanded by 258MB 00:04:19.734 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.734 EAL: request: mp_malloc_sync 00:04:19.734 EAL: No shared files mode enabled, IPC is disabled 00:04:19.734 EAL: Heap on socket 0 was shrunk by 258MB 00:04:19.993 EAL: Trying to obtain current memory policy. 00:04:19.993 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.993 EAL: Restoring previous memory policy: 4 00:04:19.993 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.993 EAL: request: mp_malloc_sync 00:04:19.993 EAL: No shared files mode enabled, IPC is disabled 00:04:19.993 EAL: Heap on socket 0 was expanded by 514MB 00:04:20.559 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.817 EAL: request: mp_malloc_sync 00:04:20.817 EAL: No shared files mode enabled, IPC is disabled 00:04:20.817 EAL: Heap on socket 0 was shrunk by 514MB 00:04:21.074 EAL: Trying to obtain current memory policy. 
00:04:21.074 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.333 EAL: Restoring previous memory policy: 4 00:04:21.333 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.333 EAL: request: mp_malloc_sync 00:04:21.333 EAL: No shared files mode enabled, IPC is disabled 00:04:21.333 EAL: Heap on socket 0 was expanded by 1026MB 00:04:22.267 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.267 EAL: request: mp_malloc_sync 00:04:22.267 EAL: No shared files mode enabled, IPC is disabled 00:04:22.267 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:22.902 passed 00:04:22.902 00:04:22.902 Run Summary: Type Total Ran Passed Failed Inactive 00:04:22.902 suites 1 1 n/a 0 0 00:04:22.902 tests 2 2 2 0 0 00:04:22.902 asserts 5446 5446 5446 0 n/a 00:04:22.902 00:04:22.902 Elapsed time = 4.379 seconds 00:04:22.902 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.902 EAL: request: mp_malloc_sync 00:04:22.902 EAL: No shared files mode enabled, IPC is disabled 00:04:22.902 EAL: Heap on socket 0 was shrunk by 2MB 00:04:22.902 EAL: No shared files mode enabled, IPC is disabled 00:04:22.902 EAL: No shared files mode enabled, IPC is disabled 00:04:22.902 EAL: No shared files mode enabled, IPC is disabled 00:04:22.902 00:04:22.902 real 0m4.616s 00:04:22.902 user 0m3.892s 00:04:22.902 sys 0m0.586s 00:04:22.902 13:08:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:22.902 13:08:37 -- common/autotest_common.sh@10 -- # set +x 00:04:22.902 ************************************ 00:04:22.902 END TEST env_vtophys 00:04:22.902 ************************************ 00:04:22.902 13:08:37 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:22.902 13:08:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:22.902 13:08:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:22.902 13:08:37 -- common/autotest_common.sh@10 -- # set +x 00:04:22.902 ************************************ 00:04:22.902 START TEST env_pci 00:04:22.902 ************************************ 00:04:22.902 13:08:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:23.161 00:04:23.161 00:04:23.161 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.161 http://cunit.sourceforge.net/ 00:04:23.161 00:04:23.161 00:04:23.161 Suite: pci 00:04:23.161 Test: pci_hook ...[2024-12-16 13:08:37.487138] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56053 has claimed it 00:04:23.161 passed 00:04:23.161 00:04:23.161 Run Summary: Type Total Ran Passed Failed Inactive 00:04:23.161 suites 1 1 n/a 0 0 00:04:23.161 tests 1 1 1 0 0 00:04:23.161 asserts 25 25 25 0 n/a 00:04:23.161 00:04:23.161 Elapsed time = 0.008 seconds 00:04:23.161 EAL: Cannot find device (10000:00:01.0) 00:04:23.161 EAL: Failed to attach device on primary process 00:04:23.161 00:04:23.161 real 0m0.065s 00:04:23.161 user 0m0.033s 00:04:23.161 sys 0m0.031s 00:04:23.161 13:08:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:23.161 13:08:37 -- common/autotest_common.sh@10 -- # set +x 00:04:23.161 ************************************ 00:04:23.161 END TEST env_pci 00:04:23.161 ************************************ 00:04:23.161 13:08:37 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:23.161 13:08:37 -- env/env.sh@15 -- # uname 00:04:23.161 13:08:37 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:23.161 13:08:37 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:04:23.161 13:08:37 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:23.161 13:08:37 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:23.161 13:08:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.161 13:08:37 -- common/autotest_common.sh@10 -- # set +x 00:04:23.161 ************************************ 00:04:23.161 START TEST env_dpdk_post_init 00:04:23.161 ************************************ 00:04:23.161 13:08:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:23.161 EAL: Detected CPU lcores: 10 00:04:23.161 EAL: Detected NUMA nodes: 1 00:04:23.161 EAL: Detected shared linkage of DPDK 00:04:23.161 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:23.161 EAL: Selected IOVA mode 'PA' 00:04:23.161 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:23.419 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:23.419 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:23.419 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:04:23.420 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:04:23.420 Starting DPDK initialization... 00:04:23.420 Starting SPDK post initialization... 00:04:23.420 SPDK NVMe probe 00:04:23.420 Attaching to 0000:00:06.0 00:04:23.420 Attaching to 0000:00:07.0 00:04:23.420 Attaching to 0000:00:08.0 00:04:23.420 Attaching to 0000:00:09.0 00:04:23.420 Attached to 0000:00:06.0 00:04:23.420 Attached to 0000:00:07.0 00:04:23.420 Attached to 0000:00:09.0 00:04:23.420 Attached to 0000:00:08.0 00:04:23.420 Cleaning up... 00:04:23.420 00:04:23.420 real 0m0.219s 00:04:23.420 user 0m0.056s 00:04:23.420 sys 0m0.065s 00:04:23.420 13:08:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:23.420 13:08:37 -- common/autotest_common.sh@10 -- # set +x 00:04:23.420 ************************************ 00:04:23.420 END TEST env_dpdk_post_init 00:04:23.420 ************************************ 00:04:23.420 13:08:37 -- env/env.sh@26 -- # uname 00:04:23.420 13:08:37 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:23.420 13:08:37 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:23.420 13:08:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:23.420 13:08:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.420 13:08:37 -- common/autotest_common.sh@10 -- # set +x 00:04:23.420 ************************************ 00:04:23.420 START TEST env_mem_callbacks 00:04:23.420 ************************************ 00:04:23.420 13:08:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:23.420 EAL: Detected CPU lcores: 10 00:04:23.420 EAL: Detected NUMA nodes: 1 00:04:23.420 EAL: Detected shared linkage of DPDK 00:04:23.420 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:23.420 EAL: Selected IOVA mode 'PA' 00:04:23.420 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:23.420 00:04:23.420 00:04:23.420 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.420 http://cunit.sourceforge.net/ 00:04:23.420 00:04:23.420 00:04:23.420 Suite: memory 00:04:23.420 Test: test ... 
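Before the mem_callbacks trace that follows, note the probe/attach sequence above: env_dpdk_post_init finds the four emulated QEMU NVMe controllers (vendor:device 1b36:0010) at 0000:00:06.0 through 0000:00:09.0, and 00:09.0 reports attached before 00:08.0 because attach callbacks fire as each controller finishes initialization, not in probe order. The enumeration follows the stock spdk_nvme_probe() pattern from spdk/nvme.h; the sketch below is a generic instance of it, not the test's source:

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
            printf("Attaching to %s\n", trid->traddr);
            return true; /* attach to every controller found */
    }

    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
            printf("Attached to %s\n", trid->traddr);
    }

    static int
    probe_local_pcie(void)
    {
            /* NULL trid: enumerate the local PCIe bus. */
            return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }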
00:04:23.420 register 0x200000200000 2097152 00:04:23.420 malloc 3145728 00:04:23.420 register 0x200000400000 4194304 00:04:23.420 buf 0x2000004fffc0 len 3145728 PASSED 00:04:23.420 malloc 64 00:04:23.420 buf 0x2000004ffec0 len 64 PASSED 00:04:23.420 malloc 4194304 00:04:23.420 register 0x200000800000 6291456 00:04:23.679 buf 0x2000009fffc0 len 4194304 PASSED 00:04:23.679 free 0x2000004fffc0 3145728 00:04:23.679 free 0x2000004ffec0 64 00:04:23.679 unregister 0x200000400000 4194304 PASSED 00:04:23.679 free 0x2000009fffc0 4194304 00:04:23.679 unregister 0x200000800000 6291456 PASSED 00:04:23.679 malloc 8388608 00:04:23.679 register 0x200000400000 10485760 00:04:23.679 buf 0x2000005fffc0 len 8388608 PASSED 00:04:23.679 free 0x2000005fffc0 8388608 00:04:23.679 unregister 0x200000400000 10485760 PASSED 00:04:23.679 passed 00:04:23.679 00:04:23.679 Run Summary: Type Total Ran Passed Failed Inactive 00:04:23.679 suites 1 1 n/a 0 0 00:04:23.679 tests 1 1 1 0 0 00:04:23.679 asserts 15 15 15 0 n/a 00:04:23.679 00:04:23.679 Elapsed time = 0.037 seconds 00:04:23.679 00:04:23.679 real 0m0.205s 00:04:23.679 user 0m0.057s 00:04:23.679 sys 0m0.047s 00:04:23.679 13:08:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:23.679 13:08:38 -- common/autotest_common.sh@10 -- # set +x 00:04:23.679 ************************************ 00:04:23.679 END TEST env_mem_callbacks 00:04:23.679 ************************************ 00:04:23.679 00:04:23.679 real 0m5.715s 00:04:23.679 user 0m4.430s 00:04:23.679 sys 0m0.946s 00:04:23.679 13:08:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:23.679 13:08:38 -- common/autotest_common.sh@10 -- # set +x 00:04:23.679 ************************************ 00:04:23.679 END TEST env 00:04:23.679 ************************************ 00:04:23.679 13:08:38 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:23.679 13:08:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:23.679 13:08:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.679 13:08:38 -- common/autotest_common.sh@10 -- # set +x 00:04:23.679 ************************************ 00:04:23.679 START TEST rpc 00:04:23.679 ************************************ 00:04:23.679 13:08:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:23.679 * Looking for test storage... 
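The register/unregister lines above come from the mem_callbacks test's notify callback: every spdk_mem_register()/spdk_mem_unregister(), and every allocation large enough to grow the hugepage heap, is reported once per affected region, which is the same hook drivers use to map memory for DMA. A sketch of such a callback, assuming an initialized env; the printed format mimics the log but the function names are illustrative:

    #include <stdio.h>
    #include "spdk/env.h"

    static int
    notify_cb(void *cb_ctx, struct spdk_mem_map *map,
              enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
    {
            printf("%s %p %zu\n",
                   action == SPDK_MEM_MAP_NOTIFY_REGISTER ?
                   "register" : "unregister",
                   vaddr, size);
            return 0; /* non-zero would fail the (un)registration */
    }

    static const struct spdk_mem_map_ops watch_ops = {
            .notify_cb = notify_cb,
    };

    static struct spdk_mem_map *
    watch_memory(void)
    {
            /* Existing registrations are replayed through notify_cb. */
            return spdk_mem_map_alloc(0, &watch_ops, NULL);
    }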
00:04:23.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:23.679 13:08:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:23.679 13:08:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:23.679 13:08:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:23.679 13:08:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:23.679 13:08:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:23.679 13:08:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:23.679 13:08:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:23.679 13:08:38 -- scripts/common.sh@335 -- # IFS=.-: 00:04:23.679 13:08:38 -- scripts/common.sh@335 -- # read -ra ver1 00:04:23.679 13:08:38 -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.679 13:08:38 -- scripts/common.sh@336 -- # read -ra ver2 00:04:23.679 13:08:38 -- scripts/common.sh@337 -- # local 'op=<' 00:04:23.679 13:08:38 -- scripts/common.sh@339 -- # ver1_l=2 00:04:23.679 13:08:38 -- scripts/common.sh@340 -- # ver2_l=1 00:04:23.679 13:08:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:23.679 13:08:38 -- scripts/common.sh@343 -- # case "$op" in 00:04:23.679 13:08:38 -- scripts/common.sh@344 -- # : 1 00:04:23.679 13:08:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:23.679 13:08:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:23.679 13:08:38 -- scripts/common.sh@364 -- # decimal 1 00:04:23.679 13:08:38 -- scripts/common.sh@352 -- # local d=1 00:04:23.679 13:08:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.679 13:08:38 -- scripts/common.sh@354 -- # echo 1 00:04:23.679 13:08:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:23.679 13:08:38 -- scripts/common.sh@365 -- # decimal 2 00:04:23.679 13:08:38 -- scripts/common.sh@352 -- # local d=2 00:04:23.679 13:08:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.679 13:08:38 -- scripts/common.sh@354 -- # echo 2 00:04:23.679 13:08:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:23.679 13:08:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:23.679 13:08:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:23.679 13:08:38 -- scripts/common.sh@367 -- # return 0 00:04:23.679 13:08:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.679 13:08:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:23.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.679 --rc genhtml_branch_coverage=1 00:04:23.679 --rc genhtml_function_coverage=1 00:04:23.679 --rc genhtml_legend=1 00:04:23.679 --rc geninfo_all_blocks=1 00:04:23.679 --rc geninfo_unexecuted_blocks=1 00:04:23.679 00:04:23.679 ' 00:04:23.679 13:08:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:23.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.679 --rc genhtml_branch_coverage=1 00:04:23.679 --rc genhtml_function_coverage=1 00:04:23.679 --rc genhtml_legend=1 00:04:23.679 --rc geninfo_all_blocks=1 00:04:23.679 --rc geninfo_unexecuted_blocks=1 00:04:23.679 00:04:23.679 ' 00:04:23.679 13:08:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:23.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.679 --rc genhtml_branch_coverage=1 00:04:23.679 --rc genhtml_function_coverage=1 00:04:23.679 --rc genhtml_legend=1 00:04:23.679 --rc geninfo_all_blocks=1 00:04:23.679 --rc geninfo_unexecuted_blocks=1 00:04:23.679 00:04:23.679 ' 00:04:23.679 13:08:38 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:23.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.679 --rc genhtml_branch_coverage=1 00:04:23.679 --rc genhtml_function_coverage=1 00:04:23.679 --rc genhtml_legend=1 00:04:23.679 --rc geninfo_all_blocks=1 00:04:23.679 --rc geninfo_unexecuted_blocks=1 00:04:23.679 00:04:23.679 ' 00:04:23.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.679 13:08:38 -- rpc/rpc.sh@65 -- # spdk_pid=56179 00:04:23.679 13:08:38 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.679 13:08:38 -- rpc/rpc.sh@67 -- # waitforlisten 56179 00:04:23.679 13:08:38 -- common/autotest_common.sh@829 -- # '[' -z 56179 ']' 00:04:23.679 13:08:38 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:23.679 13:08:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.679 13:08:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:23.679 13:08:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.679 13:08:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:23.679 13:08:38 -- common/autotest_common.sh@10 -- # set +x 00:04:23.938 [2024-12-16 13:08:38.310154] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:23.938 [2024-12-16 13:08:38.310275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56179 ] 00:04:23.938 [2024-12-16 13:08:38.458326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.196 [2024-12-16 13:08:38.595066] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:24.196 [2024-12-16 13:08:38.595225] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:24.196 [2024-12-16 13:08:38.595237] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56179' to capture a snapshot of events at runtime. 00:04:24.196 [2024-12-16 13:08:38.595244] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56179 for offline analysis/debug. 
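With spdk_tgt (pid 56179) listening on /var/tmp/spdk.sock, every rpc_cmd in the tests below (bdev_malloc_create, bdev_get_bdevs, bdev_passthru_create, trace_get_info, ...) is a JSON-RPC request to that socket, and the JSON arrays in the output are the responses. Server-side, each method is a handler registered via SPDK_RPC_REGISTER from spdk/rpc.h; the sketch below registers a hypothetical method ("my_method" and its handler are illustrative, not SPDK source):

    #include "spdk/json.h"
    #include "spdk/jsonrpc.h"
    #include "spdk/rpc.h"

    static void
    rpc_my_method(struct spdk_jsonrpc_request *request,
                  const struct spdk_json_val *params)
    {
            struct spdk_json_write_ctx *w;

            if (params != NULL) {
                    spdk_jsonrpc_send_error_response(request,
                                                     SPDK_JSONRPC_ERROR_INVALID_PARAMS,
                                                     "my_method takes no parameters");
                    return;
            }

            w = spdk_jsonrpc_begin_result(request);
            spdk_json_write_bool(w, true);
            spdk_jsonrpc_end_result(request, w);
    }
    /* Callable once the target reaches runtime state. */
    SPDK_RPC_REGISTER("my_method", rpc_my_method, SPDK_RPC_RUNTIME)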
00:04:24.196 [2024-12-16 13:08:38.595267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.763 13:08:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:24.763 13:08:39 -- common/autotest_common.sh@862 -- # return 0 00:04:24.763 13:08:39 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:24.763 13:08:39 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:24.763 13:08:39 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:24.763 13:08:39 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:24.763 13:08:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.763 13:08:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.763 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:24.763 ************************************ 00:04:24.763 START TEST rpc_integrity 00:04:24.763 ************************************ 00:04:24.763 13:08:39 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:24.763 13:08:39 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:24.763 13:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.763 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:24.763 13:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.763 13:08:39 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:24.763 13:08:39 -- rpc/rpc.sh@13 -- # jq length 00:04:24.763 13:08:39 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:24.763 13:08:39 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:24.763 13:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.763 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:24.763 13:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.763 13:08:39 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:24.763 13:08:39 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:24.763 13:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.763 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:24.763 13:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.763 13:08:39 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:24.763 { 00:04:24.763 "name": "Malloc0", 00:04:24.763 "aliases": [ 00:04:24.763 "0263b571-f57e-4b3a-aa21-ce6817c57824" 00:04:24.763 ], 00:04:24.763 "product_name": "Malloc disk", 00:04:24.763 "block_size": 512, 00:04:24.763 "num_blocks": 16384, 00:04:24.763 "uuid": "0263b571-f57e-4b3a-aa21-ce6817c57824", 00:04:24.763 "assigned_rate_limits": { 00:04:24.763 "rw_ios_per_sec": 0, 00:04:24.763 "rw_mbytes_per_sec": 0, 00:04:24.763 "r_mbytes_per_sec": 0, 00:04:24.763 "w_mbytes_per_sec": 0 00:04:24.763 }, 00:04:24.763 "claimed": false, 00:04:24.763 "zoned": false, 00:04:24.763 "supported_io_types": { 00:04:24.763 "read": true, 00:04:24.763 "write": true, 00:04:24.763 "unmap": true, 00:04:24.763 "write_zeroes": true, 00:04:24.763 "flush": true, 00:04:24.763 "reset": true, 00:04:24.763 "compare": false, 00:04:24.763 "compare_and_write": false, 00:04:24.763 "abort": true, 00:04:24.763 "nvme_admin": false, 00:04:24.763 "nvme_io": false 00:04:24.763 }, 00:04:24.763 "memory_domains": [ 00:04:24.763 { 00:04:24.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.763 
"dma_device_type": 2 00:04:24.763 } 00:04:24.763 ], 00:04:24.763 "driver_specific": {} 00:04:24.763 } 00:04:24.763 ]' 00:04:24.763 13:08:39 -- rpc/rpc.sh@17 -- # jq length 00:04:24.763 13:08:39 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:24.763 13:08:39 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:24.763 13:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.763 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:24.763 [2024-12-16 13:08:39.218162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:24.764 [2024-12-16 13:08:39.218210] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:24.764 [2024-12-16 13:08:39.218226] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:04:24.764 [2024-12-16 13:08:39.218235] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:24.764 [2024-12-16 13:08:39.219874] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:24.764 [2024-12-16 13:08:39.219904] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:24.764 Passthru0 00:04:24.764 13:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.764 13:08:39 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:24.764 13:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.764 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:24.764 13:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.764 13:08:39 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:24.764 { 00:04:24.764 "name": "Malloc0", 00:04:24.764 "aliases": [ 00:04:24.764 "0263b571-f57e-4b3a-aa21-ce6817c57824" 00:04:24.764 ], 00:04:24.764 "product_name": "Malloc disk", 00:04:24.764 "block_size": 512, 00:04:24.764 "num_blocks": 16384, 00:04:24.764 "uuid": "0263b571-f57e-4b3a-aa21-ce6817c57824", 00:04:24.764 "assigned_rate_limits": { 00:04:24.764 "rw_ios_per_sec": 0, 00:04:24.764 "rw_mbytes_per_sec": 0, 00:04:24.764 "r_mbytes_per_sec": 0, 00:04:24.764 "w_mbytes_per_sec": 0 00:04:24.764 }, 00:04:24.764 "claimed": true, 00:04:24.764 "claim_type": "exclusive_write", 00:04:24.764 "zoned": false, 00:04:24.764 "supported_io_types": { 00:04:24.764 "read": true, 00:04:24.764 "write": true, 00:04:24.764 "unmap": true, 00:04:24.764 "write_zeroes": true, 00:04:24.764 "flush": true, 00:04:24.764 "reset": true, 00:04:24.764 "compare": false, 00:04:24.764 "compare_and_write": false, 00:04:24.764 "abort": true, 00:04:24.764 "nvme_admin": false, 00:04:24.764 "nvme_io": false 00:04:24.764 }, 00:04:24.764 "memory_domains": [ 00:04:24.764 { 00:04:24.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.764 "dma_device_type": 2 00:04:24.764 } 00:04:24.764 ], 00:04:24.764 "driver_specific": {} 00:04:24.764 }, 00:04:24.764 { 00:04:24.764 "name": "Passthru0", 00:04:24.764 "aliases": [ 00:04:24.764 "734a203f-1afc-5f72-859c-e687466d4f6b" 00:04:24.764 ], 00:04:24.764 "product_name": "passthru", 00:04:24.764 "block_size": 512, 00:04:24.764 "num_blocks": 16384, 00:04:24.764 "uuid": "734a203f-1afc-5f72-859c-e687466d4f6b", 00:04:24.764 "assigned_rate_limits": { 00:04:24.764 "rw_ios_per_sec": 0, 00:04:24.764 "rw_mbytes_per_sec": 0, 00:04:24.764 "r_mbytes_per_sec": 0, 00:04:24.764 "w_mbytes_per_sec": 0 00:04:24.764 }, 00:04:24.764 "claimed": false, 00:04:24.764 "zoned": false, 00:04:24.764 "supported_io_types": { 00:04:24.764 "read": true, 00:04:24.764 "write": true, 00:04:24.764 "unmap": true, 00:04:24.764 
"write_zeroes": true, 00:04:24.764 "flush": true, 00:04:24.764 "reset": true, 00:04:24.764 "compare": false, 00:04:24.764 "compare_and_write": false, 00:04:24.764 "abort": true, 00:04:24.764 "nvme_admin": false, 00:04:24.764 "nvme_io": false 00:04:24.764 }, 00:04:24.764 "memory_domains": [ 00:04:24.764 { 00:04:24.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.764 "dma_device_type": 2 00:04:24.764 } 00:04:24.764 ], 00:04:24.764 "driver_specific": { 00:04:24.764 "passthru": { 00:04:24.764 "name": "Passthru0", 00:04:24.764 "base_bdev_name": "Malloc0" 00:04:24.764 } 00:04:24.764 } 00:04:24.764 } 00:04:24.764 ]' 00:04:24.764 13:08:39 -- rpc/rpc.sh@21 -- # jq length 00:04:24.764 13:08:39 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:24.764 13:08:39 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:24.764 13:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.764 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:24.764 13:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.764 13:08:39 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:24.764 13:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.764 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:24.764 13:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.764 13:08:39 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:24.764 13:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.764 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:24.764 13:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.764 13:08:39 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:24.764 13:08:39 -- rpc/rpc.sh@26 -- # jq length 00:04:24.764 13:08:39 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:24.764 00:04:24.764 real 0m0.219s 00:04:24.764 user 0m0.124s 00:04:24.764 sys 0m0.031s 00:04:24.764 ************************************ 00:04:24.764 END TEST rpc_integrity 00:04:24.764 ************************************ 00:04:24.764 13:08:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:24.764 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.022 13:08:39 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:25.023 13:08:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:25.023 13:08:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:25.023 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.023 ************************************ 00:04:25.023 START TEST rpc_plugins 00:04:25.023 ************************************ 00:04:25.023 13:08:39 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:04:25.023 13:08:39 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:25.023 13:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.023 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.023 13:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.023 13:08:39 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:25.023 13:08:39 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:25.023 13:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.023 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.023 13:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.023 13:08:39 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:25.023 { 00:04:25.023 "name": "Malloc1", 00:04:25.023 "aliases": [ 00:04:25.023 "7addfeca-2183-4646-a061-064c8df5089c" 00:04:25.023 ], 00:04:25.023 "product_name": "Malloc disk", 00:04:25.023 
"block_size": 4096, 00:04:25.023 "num_blocks": 256, 00:04:25.023 "uuid": "7addfeca-2183-4646-a061-064c8df5089c", 00:04:25.023 "assigned_rate_limits": { 00:04:25.023 "rw_ios_per_sec": 0, 00:04:25.023 "rw_mbytes_per_sec": 0, 00:04:25.023 "r_mbytes_per_sec": 0, 00:04:25.023 "w_mbytes_per_sec": 0 00:04:25.023 }, 00:04:25.023 "claimed": false, 00:04:25.023 "zoned": false, 00:04:25.023 "supported_io_types": { 00:04:25.023 "read": true, 00:04:25.023 "write": true, 00:04:25.023 "unmap": true, 00:04:25.023 "write_zeroes": true, 00:04:25.023 "flush": true, 00:04:25.023 "reset": true, 00:04:25.023 "compare": false, 00:04:25.023 "compare_and_write": false, 00:04:25.023 "abort": true, 00:04:25.023 "nvme_admin": false, 00:04:25.023 "nvme_io": false 00:04:25.023 }, 00:04:25.023 "memory_domains": [ 00:04:25.023 { 00:04:25.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.023 "dma_device_type": 2 00:04:25.023 } 00:04:25.023 ], 00:04:25.023 "driver_specific": {} 00:04:25.023 } 00:04:25.023 ]' 00:04:25.023 13:08:39 -- rpc/rpc.sh@32 -- # jq length 00:04:25.023 13:08:39 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:25.023 13:08:39 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:25.023 13:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.023 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.023 13:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.023 13:08:39 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:25.023 13:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.023 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.023 13:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.023 13:08:39 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:25.023 13:08:39 -- rpc/rpc.sh@36 -- # jq length 00:04:25.023 13:08:39 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:25.023 00:04:25.023 real 0m0.112s 00:04:25.023 user 0m0.062s 00:04:25.023 sys 0m0.012s 00:04:25.023 ************************************ 00:04:25.023 END TEST rpc_plugins 00:04:25.023 13:08:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:25.023 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.023 ************************************ 00:04:25.023 13:08:39 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:25.023 13:08:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:25.023 13:08:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:25.023 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.023 ************************************ 00:04:25.023 START TEST rpc_trace_cmd_test 00:04:25.023 ************************************ 00:04:25.023 13:08:39 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:04:25.023 13:08:39 -- rpc/rpc.sh@40 -- # local info 00:04:25.023 13:08:39 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:25.023 13:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.023 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.023 13:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.023 13:08:39 -- rpc/rpc.sh@42 -- # info='{ 00:04:25.023 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56179", 00:04:25.023 "tpoint_group_mask": "0x8", 00:04:25.023 "iscsi_conn": { 00:04:25.023 "mask": "0x2", 00:04:25.023 "tpoint_mask": "0x0" 00:04:25.023 }, 00:04:25.023 "scsi": { 00:04:25.023 "mask": "0x4", 00:04:25.023 "tpoint_mask": "0x0" 00:04:25.023 }, 00:04:25.023 "bdev": { 00:04:25.023 "mask": "0x8", 00:04:25.023 "tpoint_mask": 
"0xffffffffffffffff" 00:04:25.023 }, 00:04:25.023 "nvmf_rdma": { 00:04:25.023 "mask": "0x10", 00:04:25.023 "tpoint_mask": "0x0" 00:04:25.023 }, 00:04:25.023 "nvmf_tcp": { 00:04:25.023 "mask": "0x20", 00:04:25.023 "tpoint_mask": "0x0" 00:04:25.023 }, 00:04:25.023 "ftl": { 00:04:25.023 "mask": "0x40", 00:04:25.023 "tpoint_mask": "0x0" 00:04:25.023 }, 00:04:25.023 "blobfs": { 00:04:25.023 "mask": "0x80", 00:04:25.023 "tpoint_mask": "0x0" 00:04:25.023 }, 00:04:25.023 "dsa": { 00:04:25.023 "mask": "0x200", 00:04:25.023 "tpoint_mask": "0x0" 00:04:25.023 }, 00:04:25.023 "thread": { 00:04:25.023 "mask": "0x400", 00:04:25.023 "tpoint_mask": "0x0" 00:04:25.023 }, 00:04:25.023 "nvme_pcie": { 00:04:25.023 "mask": "0x800", 00:04:25.023 "tpoint_mask": "0x0" 00:04:25.023 }, 00:04:25.023 "iaa": { 00:04:25.023 "mask": "0x1000", 00:04:25.023 "tpoint_mask": "0x0" 00:04:25.023 }, 00:04:25.023 "nvme_tcp": { 00:04:25.023 "mask": "0x2000", 00:04:25.023 "tpoint_mask": "0x0" 00:04:25.023 }, 00:04:25.023 "bdev_nvme": { 00:04:25.023 "mask": "0x4000", 00:04:25.023 "tpoint_mask": "0x0" 00:04:25.023 } 00:04:25.023 }' 00:04:25.023 13:08:39 -- rpc/rpc.sh@43 -- # jq length 00:04:25.023 13:08:39 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:25.023 13:08:39 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:25.281 13:08:39 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:25.282 13:08:39 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:25.282 13:08:39 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:25.282 13:08:39 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:25.282 13:08:39 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:25.282 13:08:39 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:25.282 13:08:39 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:25.282 00:04:25.282 real 0m0.176s 00:04:25.282 user 0m0.138s 00:04:25.282 sys 0m0.027s 00:04:25.282 13:08:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:25.282 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.282 ************************************ 00:04:25.282 END TEST rpc_trace_cmd_test 00:04:25.282 ************************************ 00:04:25.282 13:08:39 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:25.282 13:08:39 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:25.282 13:08:39 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:25.282 13:08:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:25.282 13:08:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:25.282 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.282 ************************************ 00:04:25.282 START TEST rpc_daemon_integrity 00:04:25.282 ************************************ 00:04:25.282 13:08:39 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:25.282 13:08:39 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:25.282 13:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.282 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.282 13:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.282 13:08:39 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:25.282 13:08:39 -- rpc/rpc.sh@13 -- # jq length 00:04:25.282 13:08:39 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:25.282 13:08:39 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:25.282 13:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.282 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.282 13:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.282 13:08:39 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:25.282 13:08:39 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:25.282 13:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.282 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.282 13:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.282 13:08:39 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:25.282 { 00:04:25.282 "name": "Malloc2", 00:04:25.282 "aliases": [ 00:04:25.282 "d8b844f5-a9a0-480e-b5da-151aa96da643" 00:04:25.282 ], 00:04:25.282 "product_name": "Malloc disk", 00:04:25.282 "block_size": 512, 00:04:25.282 "num_blocks": 16384, 00:04:25.282 "uuid": "d8b844f5-a9a0-480e-b5da-151aa96da643", 00:04:25.282 "assigned_rate_limits": { 00:04:25.282 "rw_ios_per_sec": 0, 00:04:25.282 "rw_mbytes_per_sec": 0, 00:04:25.282 "r_mbytes_per_sec": 0, 00:04:25.282 "w_mbytes_per_sec": 0 00:04:25.282 }, 00:04:25.282 "claimed": false, 00:04:25.282 "zoned": false, 00:04:25.282 "supported_io_types": { 00:04:25.282 "read": true, 00:04:25.282 "write": true, 00:04:25.282 "unmap": true, 00:04:25.282 "write_zeroes": true, 00:04:25.282 "flush": true, 00:04:25.282 "reset": true, 00:04:25.282 "compare": false, 00:04:25.282 "compare_and_write": false, 00:04:25.282 "abort": true, 00:04:25.282 "nvme_admin": false, 00:04:25.282 "nvme_io": false 00:04:25.282 }, 00:04:25.282 "memory_domains": [ 00:04:25.282 { 00:04:25.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.282 "dma_device_type": 2 00:04:25.282 } 00:04:25.282 ], 00:04:25.282 "driver_specific": {} 00:04:25.282 } 00:04:25.282 ]' 00:04:25.282 13:08:39 -- rpc/rpc.sh@17 -- # jq length 00:04:25.282 13:08:39 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:25.282 13:08:39 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:25.282 13:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.282 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.282 [2024-12-16 13:08:39.833781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:25.282 [2024-12-16 13:08:39.833827] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:25.282 [2024-12-16 13:08:39.833842] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:04:25.282 [2024-12-16 13:08:39.833851] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:25.282 [2024-12-16 13:08:39.835449] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:25.282 [2024-12-16 13:08:39.835479] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:25.282 Passthru0 00:04:25.282 13:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.282 13:08:39 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:25.282 13:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.282 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.541 13:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.541 13:08:39 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:25.541 { 00:04:25.541 "name": "Malloc2", 00:04:25.541 "aliases": [ 00:04:25.541 "d8b844f5-a9a0-480e-b5da-151aa96da643" 00:04:25.541 ], 00:04:25.541 "product_name": "Malloc disk", 00:04:25.541 "block_size": 512, 00:04:25.541 "num_blocks": 16384, 00:04:25.541 "uuid": "d8b844f5-a9a0-480e-b5da-151aa96da643", 00:04:25.541 "assigned_rate_limits": { 00:04:25.541 "rw_ios_per_sec": 0, 00:04:25.541 "rw_mbytes_per_sec": 0, 00:04:25.541 "r_mbytes_per_sec": 0, 00:04:25.541 
"w_mbytes_per_sec": 0 00:04:25.541 }, 00:04:25.541 "claimed": true, 00:04:25.541 "claim_type": "exclusive_write", 00:04:25.541 "zoned": false, 00:04:25.541 "supported_io_types": { 00:04:25.541 "read": true, 00:04:25.541 "write": true, 00:04:25.541 "unmap": true, 00:04:25.541 "write_zeroes": true, 00:04:25.541 "flush": true, 00:04:25.541 "reset": true, 00:04:25.541 "compare": false, 00:04:25.541 "compare_and_write": false, 00:04:25.541 "abort": true, 00:04:25.541 "nvme_admin": false, 00:04:25.541 "nvme_io": false 00:04:25.541 }, 00:04:25.541 "memory_domains": [ 00:04:25.541 { 00:04:25.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.541 "dma_device_type": 2 00:04:25.541 } 00:04:25.541 ], 00:04:25.541 "driver_specific": {} 00:04:25.541 }, 00:04:25.541 { 00:04:25.541 "name": "Passthru0", 00:04:25.541 "aliases": [ 00:04:25.541 "69f6f775-abbe-5604-8cda-8e65cdf91a25" 00:04:25.541 ], 00:04:25.541 "product_name": "passthru", 00:04:25.541 "block_size": 512, 00:04:25.541 "num_blocks": 16384, 00:04:25.541 "uuid": "69f6f775-abbe-5604-8cda-8e65cdf91a25", 00:04:25.541 "assigned_rate_limits": { 00:04:25.541 "rw_ios_per_sec": 0, 00:04:25.541 "rw_mbytes_per_sec": 0, 00:04:25.541 "r_mbytes_per_sec": 0, 00:04:25.541 "w_mbytes_per_sec": 0 00:04:25.541 }, 00:04:25.541 "claimed": false, 00:04:25.541 "zoned": false, 00:04:25.541 "supported_io_types": { 00:04:25.541 "read": true, 00:04:25.541 "write": true, 00:04:25.541 "unmap": true, 00:04:25.541 "write_zeroes": true, 00:04:25.541 "flush": true, 00:04:25.541 "reset": true, 00:04:25.541 "compare": false, 00:04:25.541 "compare_and_write": false, 00:04:25.541 "abort": true, 00:04:25.541 "nvme_admin": false, 00:04:25.541 "nvme_io": false 00:04:25.541 }, 00:04:25.541 "memory_domains": [ 00:04:25.541 { 00:04:25.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.541 "dma_device_type": 2 00:04:25.541 } 00:04:25.541 ], 00:04:25.541 "driver_specific": { 00:04:25.541 "passthru": { 00:04:25.541 "name": "Passthru0", 00:04:25.541 "base_bdev_name": "Malloc2" 00:04:25.541 } 00:04:25.541 } 00:04:25.541 } 00:04:25.541 ]' 00:04:25.541 13:08:39 -- rpc/rpc.sh@21 -- # jq length 00:04:25.541 13:08:39 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:25.541 13:08:39 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:25.541 13:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.541 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.541 13:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.541 13:08:39 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:25.541 13:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.541 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.541 13:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.541 13:08:39 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:25.541 13:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.541 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.541 13:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.541 13:08:39 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:25.541 13:08:39 -- rpc/rpc.sh@26 -- # jq length 00:04:25.541 13:08:39 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:25.541 00:04:25.541 real 0m0.229s 00:04:25.541 user 0m0.124s 00:04:25.541 sys 0m0.036s 00:04:25.541 13:08:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:25.541 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.541 ************************************ 00:04:25.541 END TEST 
rpc_daemon_integrity 00:04:25.541 ************************************ 00:04:25.541 13:08:39 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:25.541 13:08:39 -- rpc/rpc.sh@84 -- # killprocess 56179 00:04:25.541 13:08:39 -- common/autotest_common.sh@936 -- # '[' -z 56179 ']' 00:04:25.541 13:08:39 -- common/autotest_common.sh@940 -- # kill -0 56179 00:04:25.541 13:08:39 -- common/autotest_common.sh@941 -- # uname 00:04:25.541 13:08:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:25.541 13:08:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56179 00:04:25.541 13:08:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:25.541 killing process with pid 56179 00:04:25.541 13:08:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:25.541 13:08:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56179' 00:04:25.541 13:08:40 -- common/autotest_common.sh@955 -- # kill 56179 00:04:25.541 13:08:40 -- common/autotest_common.sh@960 -- # wait 56179 00:04:26.916 00:04:26.916 real 0m3.090s 00:04:26.916 user 0m3.497s 00:04:26.916 sys 0m0.558s 00:04:26.916 13:08:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:26.916 13:08:41 -- common/autotest_common.sh@10 -- # set +x 00:04:26.916 ************************************ 00:04:26.916 END TEST rpc 00:04:26.916 ************************************ 00:04:26.916 13:08:41 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:26.916 13:08:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.916 13:08:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.916 13:08:41 -- common/autotest_common.sh@10 -- # set +x 00:04:26.916 ************************************ 00:04:26.916 START TEST rpc_client 00:04:26.916 ************************************ 00:04:26.916 13:08:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:26.916 * Looking for test storage... 00:04:26.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:26.916 13:08:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:26.916 13:08:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:26.916 13:08:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:26.916 13:08:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:26.916 13:08:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:26.916 13:08:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:26.916 13:08:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:26.916 13:08:41 -- scripts/common.sh@335 -- # IFS=.-: 00:04:26.916 13:08:41 -- scripts/common.sh@335 -- # read -ra ver1 00:04:26.916 13:08:41 -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.916 13:08:41 -- scripts/common.sh@336 -- # read -ra ver2 00:04:26.916 13:08:41 -- scripts/common.sh@337 -- # local 'op=<' 00:04:26.916 13:08:41 -- scripts/common.sh@339 -- # ver1_l=2 00:04:26.916 13:08:41 -- scripts/common.sh@340 -- # ver2_l=1 00:04:26.916 13:08:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:26.916 13:08:41 -- scripts/common.sh@343 -- # case "$op" in 00:04:26.916 13:08:41 -- scripts/common.sh@344 -- # : 1 00:04:26.916 13:08:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:26.916 13:08:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.916 13:08:41 -- scripts/common.sh@364 -- # decimal 1 00:04:26.916 13:08:41 -- scripts/common.sh@352 -- # local d=1 00:04:26.916 13:08:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.916 13:08:41 -- scripts/common.sh@354 -- # echo 1 00:04:26.916 13:08:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:26.916 13:08:41 -- scripts/common.sh@365 -- # decimal 2 00:04:26.916 13:08:41 -- scripts/common.sh@352 -- # local d=2 00:04:26.916 13:08:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.916 13:08:41 -- scripts/common.sh@354 -- # echo 2 00:04:26.916 13:08:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:26.916 13:08:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:26.916 13:08:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:26.916 13:08:41 -- scripts/common.sh@367 -- # return 0 00:04:26.916 13:08:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.916 13:08:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:26.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.916 --rc genhtml_branch_coverage=1 00:04:26.916 --rc genhtml_function_coverage=1 00:04:26.916 --rc genhtml_legend=1 00:04:26.916 --rc geninfo_all_blocks=1 00:04:26.916 --rc geninfo_unexecuted_blocks=1 00:04:26.916 00:04:26.916 ' 00:04:26.916 13:08:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:26.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.916 --rc genhtml_branch_coverage=1 00:04:26.916 --rc genhtml_function_coverage=1 00:04:26.916 --rc genhtml_legend=1 00:04:26.916 --rc geninfo_all_blocks=1 00:04:26.916 --rc geninfo_unexecuted_blocks=1 00:04:26.916 00:04:26.916 ' 00:04:26.916 13:08:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:26.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.916 --rc genhtml_branch_coverage=1 00:04:26.916 --rc genhtml_function_coverage=1 00:04:26.916 --rc genhtml_legend=1 00:04:26.916 --rc geninfo_all_blocks=1 00:04:26.916 --rc geninfo_unexecuted_blocks=1 00:04:26.916 00:04:26.916 ' 00:04:26.916 13:08:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:26.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.916 --rc genhtml_branch_coverage=1 00:04:26.916 --rc genhtml_function_coverage=1 00:04:26.916 --rc genhtml_legend=1 00:04:26.916 --rc geninfo_all_blocks=1 00:04:26.916 --rc geninfo_unexecuted_blocks=1 00:04:26.916 00:04:26.916 ' 00:04:26.916 13:08:41 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:26.916 OK 00:04:26.916 13:08:41 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:26.916 00:04:26.916 real 0m0.173s 00:04:26.916 user 0m0.108s 00:04:26.916 sys 0m0.073s 00:04:26.916 13:08:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:26.916 13:08:41 -- common/autotest_common.sh@10 -- # set +x 00:04:26.916 ************************************ 00:04:26.916 END TEST rpc_client 00:04:26.916 ************************************ 00:04:26.916 13:08:41 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:26.916 13:08:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.916 13:08:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.917 13:08:41 -- common/autotest_common.sh@10 -- # set +x 00:04:26.917 ************************************ 00:04:26.917 START TEST 
json_config 00:04:26.917 ************************************ 00:04:26.917 13:08:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:26.917 13:08:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:26.917 13:08:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:26.917 13:08:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:27.176 13:08:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:27.176 13:08:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:27.176 13:08:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:27.176 13:08:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:27.176 13:08:41 -- scripts/common.sh@335 -- # IFS=.-: 00:04:27.176 13:08:41 -- scripts/common.sh@335 -- # read -ra ver1 00:04:27.176 13:08:41 -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.176 13:08:41 -- scripts/common.sh@336 -- # read -ra ver2 00:04:27.176 13:08:41 -- scripts/common.sh@337 -- # local 'op=<' 00:04:27.176 13:08:41 -- scripts/common.sh@339 -- # ver1_l=2 00:04:27.176 13:08:41 -- scripts/common.sh@340 -- # ver2_l=1 00:04:27.176 13:08:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:27.176 13:08:41 -- scripts/common.sh@343 -- # case "$op" in 00:04:27.176 13:08:41 -- scripts/common.sh@344 -- # : 1 00:04:27.176 13:08:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:27.176 13:08:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:27.176 13:08:41 -- scripts/common.sh@364 -- # decimal 1 00:04:27.176 13:08:41 -- scripts/common.sh@352 -- # local d=1 00:04:27.176 13:08:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.176 13:08:41 -- scripts/common.sh@354 -- # echo 1 00:04:27.176 13:08:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:27.176 13:08:41 -- scripts/common.sh@365 -- # decimal 2 00:04:27.176 13:08:41 -- scripts/common.sh@352 -- # local d=2 00:04:27.176 13:08:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.176 13:08:41 -- scripts/common.sh@354 -- # echo 2 00:04:27.176 13:08:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:27.176 13:08:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:27.176 13:08:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:27.176 13:08:41 -- scripts/common.sh@367 -- # return 0 00:04:27.176 13:08:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.176 13:08:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:27.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.176 --rc genhtml_branch_coverage=1 00:04:27.176 --rc genhtml_function_coverage=1 00:04:27.176 --rc genhtml_legend=1 00:04:27.176 --rc geninfo_all_blocks=1 00:04:27.176 --rc geninfo_unexecuted_blocks=1 00:04:27.176 00:04:27.176 ' 00:04:27.176 13:08:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:27.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.176 --rc genhtml_branch_coverage=1 00:04:27.176 --rc genhtml_function_coverage=1 00:04:27.176 --rc genhtml_legend=1 00:04:27.176 --rc geninfo_all_blocks=1 00:04:27.176 --rc geninfo_unexecuted_blocks=1 00:04:27.176 00:04:27.176 ' 00:04:27.176 13:08:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:27.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.176 --rc genhtml_branch_coverage=1 00:04:27.176 --rc genhtml_function_coverage=1 00:04:27.176 --rc genhtml_legend=1 00:04:27.176 --rc 
geninfo_all_blocks=1 00:04:27.176 --rc geninfo_unexecuted_blocks=1 00:04:27.176 00:04:27.176 ' 00:04:27.176 13:08:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:27.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.176 --rc genhtml_branch_coverage=1 00:04:27.176 --rc genhtml_function_coverage=1 00:04:27.176 --rc genhtml_legend=1 00:04:27.176 --rc geninfo_all_blocks=1 00:04:27.176 --rc geninfo_unexecuted_blocks=1 00:04:27.176 00:04:27.176 ' 00:04:27.176 13:08:41 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:27.176 13:08:41 -- nvmf/common.sh@7 -- # uname -s 00:04:27.176 13:08:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:27.176 13:08:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:27.176 13:08:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:27.176 13:08:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:27.176 13:08:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:27.176 13:08:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:27.176 13:08:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:27.176 13:08:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:27.176 13:08:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:27.176 13:08:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:27.176 13:08:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee652fb3-397f-4785-b30f-e769daa7efa1 00:04:27.176 13:08:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=ee652fb3-397f-4785-b30f-e769daa7efa1 00:04:27.176 13:08:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:27.176 13:08:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:27.176 13:08:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:27.176 13:08:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:27.176 13:08:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:27.176 13:08:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:27.176 13:08:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:27.176 13:08:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.176 13:08:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.176 13:08:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.176 
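The paths/export.sh trace above prepends the Go, protoc, and golangci directories every time the file is sourced, which is why the resulting PATH carries three or four copies of each tool directory. A minimal sketch of an idempotent prepend, for illustration only (path_prepend is a hypothetical helper, not part of the SPDK scripts):

path_prepend() {
  # Prepend $1 to PATH only when it is not already present.
  case ":$PATH:" in
    *":$1:"*) ;;                 # already there, nothing to do
    *) PATH="$1:$PATH" ;;
  esac
}

path_prepend /opt/go/1.21.1/bin
path_prepend /opt/protoc/21.7/bin
path_prepend /opt/golangci/1.54.2/bin
export PATH

Guarding with the ":$PATH:" pattern keeps repeated sourcing cheap and the environment readable, at the cost of not reordering entries that already exist.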
13:08:41 -- paths/export.sh@5 -- # export PATH 00:04:27.176 13:08:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.176 13:08:41 -- nvmf/common.sh@46 -- # : 0 00:04:27.176 13:08:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:27.176 13:08:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:27.176 13:08:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:27.176 13:08:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:27.176 13:08:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:27.176 13:08:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:27.176 13:08:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:27.176 13:08:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:27.176 13:08:41 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:27.176 13:08:41 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:27.176 13:08:41 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:27.176 13:08:41 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:27.176 WARNING: No tests are enabled so not running JSON configuration tests 00:04:27.176 13:08:41 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:27.177 13:08:41 -- json_config/json_config.sh@27 -- # exit 0 00:04:27.177 00:04:27.177 real 0m0.130s 00:04:27.177 user 0m0.092s 00:04:27.177 sys 0m0.043s 00:04:27.177 13:08:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:27.177 13:08:41 -- common/autotest_common.sh@10 -- # set +x 00:04:27.177 ************************************ 00:04:27.177 END TEST json_config 00:04:27.177 ************************************ 00:04:27.177 13:08:41 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:27.177 13:08:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:27.177 13:08:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:27.177 13:08:41 -- common/autotest_common.sh@10 -- # set +x 00:04:27.177 ************************************ 00:04:27.177 START TEST json_config_extra_key 00:04:27.177 ************************************ 00:04:27.177 13:08:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:27.177 13:08:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:27.177 13:08:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:27.177 13:08:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:27.177 13:08:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:27.177 13:08:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:27.177 13:08:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:27.177 13:08:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:27.177 13:08:41 -- scripts/common.sh@335 -- # IFS=.-: 00:04:27.177 13:08:41 -- scripts/common.sh@335 -- # read -ra ver1 00:04:27.177 13:08:41 -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.177 13:08:41 
-- scripts/common.sh@336 -- # read -ra ver2 00:04:27.177 13:08:41 -- scripts/common.sh@337 -- # local 'op=<' 00:04:27.177 13:08:41 -- scripts/common.sh@339 -- # ver1_l=2 00:04:27.177 13:08:41 -- scripts/common.sh@340 -- # ver2_l=1 00:04:27.177 13:08:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:27.177 13:08:41 -- scripts/common.sh@343 -- # case "$op" in 00:04:27.177 13:08:41 -- scripts/common.sh@344 -- # : 1 00:04:27.177 13:08:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:27.177 13:08:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:27.177 13:08:41 -- scripts/common.sh@364 -- # decimal 1 00:04:27.177 13:08:41 -- scripts/common.sh@352 -- # local d=1 00:04:27.177 13:08:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.177 13:08:41 -- scripts/common.sh@354 -- # echo 1 00:04:27.177 13:08:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:27.177 13:08:41 -- scripts/common.sh@365 -- # decimal 2 00:04:27.177 13:08:41 -- scripts/common.sh@352 -- # local d=2 00:04:27.177 13:08:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.177 13:08:41 -- scripts/common.sh@354 -- # echo 2 00:04:27.177 13:08:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:27.177 13:08:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:27.177 13:08:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:27.177 13:08:41 -- scripts/common.sh@367 -- # return 0 00:04:27.177 13:08:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.177 13:08:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:27.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.177 --rc genhtml_branch_coverage=1 00:04:27.177 --rc genhtml_function_coverage=1 00:04:27.177 --rc genhtml_legend=1 00:04:27.177 --rc geninfo_all_blocks=1 00:04:27.177 --rc geninfo_unexecuted_blocks=1 00:04:27.177 00:04:27.177 ' 00:04:27.177 13:08:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:27.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.177 --rc genhtml_branch_coverage=1 00:04:27.177 --rc genhtml_function_coverage=1 00:04:27.177 --rc genhtml_legend=1 00:04:27.177 --rc geninfo_all_blocks=1 00:04:27.177 --rc geninfo_unexecuted_blocks=1 00:04:27.177 00:04:27.177 ' 00:04:27.177 13:08:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:27.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.177 --rc genhtml_branch_coverage=1 00:04:27.177 --rc genhtml_function_coverage=1 00:04:27.177 --rc genhtml_legend=1 00:04:27.177 --rc geninfo_all_blocks=1 00:04:27.177 --rc geninfo_unexecuted_blocks=1 00:04:27.177 00:04:27.177 ' 00:04:27.177 13:08:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:27.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.177 --rc genhtml_branch_coverage=1 00:04:27.177 --rc genhtml_function_coverage=1 00:04:27.177 --rc genhtml_legend=1 00:04:27.177 --rc geninfo_all_blocks=1 00:04:27.177 --rc geninfo_unexecuted_blocks=1 00:04:27.177 00:04:27.177 ' 00:04:27.177 13:08:41 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:27.177 13:08:41 -- nvmf/common.sh@7 -- # uname -s 00:04:27.177 13:08:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:27.177 13:08:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:27.177 13:08:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:27.177 13:08:41 -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:04:27.177 13:08:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:27.177 13:08:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:27.177 13:08:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:27.177 13:08:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:27.177 13:08:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:27.177 13:08:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:27.177 13:08:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee652fb3-397f-4785-b30f-e769daa7efa1 00:04:27.177 13:08:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=ee652fb3-397f-4785-b30f-e769daa7efa1 00:04:27.177 13:08:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:27.177 13:08:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:27.177 13:08:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:27.177 13:08:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:27.177 13:08:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:27.177 13:08:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:27.177 13:08:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:27.177 13:08:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.177 13:08:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.177 13:08:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.177 13:08:41 -- paths/export.sh@5 -- # export PATH 00:04:27.177 13:08:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.177 13:08:41 -- nvmf/common.sh@46 -- # : 0 00:04:27.177 13:08:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:27.177 13:08:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:27.177 13:08:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:27.177 13:08:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:27.177 13:08:41 -- nvmf/common.sh@30 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:04:27.177 13:08:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:27.177 13:08:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:27.177 13:08:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:27.177 13:08:41 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:27.177 13:08:41 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:27.177 13:08:41 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:27.177 13:08:41 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:27.177 13:08:41 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:27.177 13:08:41 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:27.177 13:08:41 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:27.177 13:08:41 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:27.177 13:08:41 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:27.177 INFO: launching applications... 00:04:27.177 13:08:41 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:27.177 13:08:41 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:27.177 13:08:41 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:27.177 13:08:41 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:27.177 13:08:41 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:27.177 13:08:41 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:27.177 13:08:41 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56473 00:04:27.177 13:08:41 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:27.177 13:08:41 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:27.177 Waiting for target to run... 00:04:27.177 13:08:41 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56473 /var/tmp/spdk_tgt.sock 00:04:27.177 13:08:41 -- common/autotest_common.sh@829 -- # '[' -z 56473 ']' 00:04:27.177 13:08:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:27.177 13:08:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:27.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:27.177 13:08:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:27.177 13:08:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:27.177 13:08:41 -- common/autotest_common.sh@10 -- # set +x 00:04:27.436 [2024-12-16 13:08:41.812460] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:27.436 [2024-12-16 13:08:41.812573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56473 ] 00:04:27.694 [2024-12-16 13:08:42.107840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.952 [2024-12-16 13:08:42.268859] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:27.952 [2024-12-16 13:08:42.269059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.887 13:08:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:28.887 13:08:43 -- common/autotest_common.sh@862 -- # return 0 00:04:28.887 00:04:28.887 13:08:43 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:28.887 13:08:43 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:28.887 INFO: shutting down applications... 00:04:28.887 13:08:43 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:28.887 13:08:43 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:28.887 13:08:43 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:28.887 13:08:43 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56473 ]] 00:04:28.887 13:08:43 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56473 00:04:28.887 13:08:43 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:28.887 13:08:43 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:28.887 13:08:43 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56473 00:04:28.887 13:08:43 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:29.455 13:08:43 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:29.455 13:08:43 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:29.455 13:08:43 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56473 00:04:29.455 13:08:43 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:29.714 13:08:44 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:29.714 13:08:44 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:29.714 13:08:44 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56473 00:04:29.714 13:08:44 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:30.281 13:08:44 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:30.281 13:08:44 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:30.281 13:08:44 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56473 00:04:30.281 13:08:44 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:30.921 13:08:45 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:30.921 13:08:45 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:30.921 13:08:45 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56473 00:04:30.921 13:08:45 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:30.921 13:08:45 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:30.921 13:08:45 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:30.921 SPDK target shutdown done 00:04:30.921 13:08:45 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:30.921 Success 00:04:30.921 13:08:45 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:30.921 
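The shutdown sequence above sends SIGINT to pid 56473 and then probes it with kill -0 every 0.5 s for at most 30 iterations; in this run the target exited within four polls. A condensed sketch of the same loop (the SIGKILL fallback at the end is an assumption for completeness, not something the traced run needed):

shutdown_app() {
  local pid=$1 i
  kill -SIGINT "$pid" 2> /dev/null || return 0   # nothing left to stop
  for ((i = 0; i < 30; i++)); do
    # kill -0 sends no signal; it only tests whether the pid still exists
    if ! kill -0 "$pid" 2> /dev/null; then
      echo 'SPDK target shutdown done'
      return 0
    fi
    sleep 0.5
  done
  echo "pid $pid ignored SIGINT, sending SIGKILL" >&2
  kill -9 "$pid"
}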
00:04:30.921 real 0m3.643s 00:04:30.921 user 0m3.217s 00:04:30.921 sys 0m0.379s 00:04:30.921 13:08:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:30.921 ************************************ 00:04:30.921 END TEST json_config_extra_key 00:04:30.921 ************************************ 00:04:30.921 13:08:45 -- common/autotest_common.sh@10 -- # set +x 00:04:30.921 13:08:45 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:30.921 13:08:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:30.921 13:08:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:30.921 13:08:45 -- common/autotest_common.sh@10 -- # set +x 00:04:30.921 ************************************ 00:04:30.921 START TEST alias_rpc 00:04:30.921 ************************************ 00:04:30.921 13:08:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:30.921 * Looking for test storage... 00:04:30.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:30.921 13:08:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:30.921 13:08:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:30.921 13:08:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:30.921 13:08:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:30.921 13:08:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:30.921 13:08:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:30.921 13:08:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:30.921 13:08:45 -- scripts/common.sh@335 -- # IFS=.-: 00:04:30.921 13:08:45 -- scripts/common.sh@335 -- # read -ra ver1 00:04:30.921 13:08:45 -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.921 13:08:45 -- scripts/common.sh@336 -- # read -ra ver2 00:04:30.921 13:08:45 -- scripts/common.sh@337 -- # local 'op=<' 00:04:30.921 13:08:45 -- scripts/common.sh@339 -- # ver1_l=2 00:04:30.921 13:08:45 -- scripts/common.sh@340 -- # ver2_l=1 00:04:30.921 13:08:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:30.921 13:08:45 -- scripts/common.sh@343 -- # case "$op" in 00:04:30.921 13:08:45 -- scripts/common.sh@344 -- # : 1 00:04:30.921 13:08:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:30.921 13:08:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.921 13:08:45 -- scripts/common.sh@364 -- # decimal 1 00:04:30.921 13:08:45 -- scripts/common.sh@352 -- # local d=1 00:04:30.921 13:08:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.921 13:08:45 -- scripts/common.sh@354 -- # echo 1 00:04:30.921 13:08:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:30.921 13:08:45 -- scripts/common.sh@365 -- # decimal 2 00:04:30.921 13:08:45 -- scripts/common.sh@352 -- # local d=2 00:04:30.921 13:08:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.921 13:08:45 -- scripts/common.sh@354 -- # echo 2 00:04:30.921 13:08:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:30.921 13:08:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:30.921 13:08:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:30.921 13:08:45 -- scripts/common.sh@367 -- # return 0 00:04:30.921 13:08:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.921 13:08:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:30.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.921 --rc genhtml_branch_coverage=1 00:04:30.921 --rc genhtml_function_coverage=1 00:04:30.921 --rc genhtml_legend=1 00:04:30.921 --rc geninfo_all_blocks=1 00:04:30.921 --rc geninfo_unexecuted_blocks=1 00:04:30.921 00:04:30.921 ' 00:04:30.921 13:08:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:30.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.921 --rc genhtml_branch_coverage=1 00:04:30.921 --rc genhtml_function_coverage=1 00:04:30.921 --rc genhtml_legend=1 00:04:30.921 --rc geninfo_all_blocks=1 00:04:30.921 --rc geninfo_unexecuted_blocks=1 00:04:30.921 00:04:30.921 ' 00:04:30.921 13:08:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:30.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.921 --rc genhtml_branch_coverage=1 00:04:30.921 --rc genhtml_function_coverage=1 00:04:30.921 --rc genhtml_legend=1 00:04:30.921 --rc geninfo_all_blocks=1 00:04:30.921 --rc geninfo_unexecuted_blocks=1 00:04:30.921 00:04:30.921 ' 00:04:30.921 13:08:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:30.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.921 --rc genhtml_branch_coverage=1 00:04:30.921 --rc genhtml_function_coverage=1 00:04:30.921 --rc genhtml_legend=1 00:04:30.921 --rc geninfo_all_blocks=1 00:04:30.921 --rc geninfo_unexecuted_blocks=1 00:04:30.921 00:04:30.921 ' 00:04:30.921 13:08:45 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:30.921 13:08:45 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56578 00:04:30.921 13:08:45 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56578 00:04:30.921 13:08:45 -- common/autotest_common.sh@829 -- # '[' -z 56578 ']' 00:04:30.921 13:08:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.921 13:08:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:30.921 13:08:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
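The lt 1.15 2 trace that precedes every test splits both version strings on ., - and : and compares the fields numerically, so the detected lcov version decides which coverage flags get exported. A standalone sketch of that comparison for plain decimal fields (ver_lt is a hypothetical stand-in for the real cmp_versions in scripts/common.sh):

ver_lt() {
  # True when $1 sorts before $2; fields are split on ., - and :
  # and must be plain decimal numbers.
  local IFS=.-:
  local -a a=($1) b=($2)
  local i x y n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for ((i = 0; i < n; i++)); do
    x=${a[i]:-0} y=${b[i]:-0}        # missing fields compare as 0
    (( x > y )) && return 1
    (( x < y )) && return 0
  done
  return 1   # equal versions are not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2: enable the newer lcov flag set"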
00:04:30.921 13:08:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:30.921 13:08:45 -- common/autotest_common.sh@10 -- # set +x 00:04:30.921 13:08:45 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.921 [2024-12-16 13:08:45.472984] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:30.921 [2024-12-16 13:08:45.473102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56578 ] 00:04:31.179 [2024-12-16 13:08:45.621724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.437 [2024-12-16 13:08:45.764255] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:31.437 [2024-12-16 13:08:45.764416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.002 13:08:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:32.002 13:08:46 -- common/autotest_common.sh@862 -- # return 0 00:04:32.002 13:08:46 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:32.002 13:08:46 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56578 00:04:32.002 13:08:46 -- common/autotest_common.sh@936 -- # '[' -z 56578 ']' 00:04:32.002 13:08:46 -- common/autotest_common.sh@940 -- # kill -0 56578 00:04:32.002 13:08:46 -- common/autotest_common.sh@941 -- # uname 00:04:32.002 13:08:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:32.002 13:08:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56578 00:04:32.002 13:08:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:32.002 killing process with pid 56578 00:04:32.002 13:08:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:32.002 13:08:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56578' 00:04:32.002 13:08:46 -- common/autotest_common.sh@955 -- # kill 56578 00:04:32.002 13:08:46 -- common/autotest_common.sh@960 -- # wait 56578 00:04:33.379 00:04:33.379 real 0m2.415s 00:04:33.379 user 0m2.516s 00:04:33.379 sys 0m0.359s 00:04:33.379 ************************************ 00:04:33.379 END TEST alias_rpc 00:04:33.379 ************************************ 00:04:33.379 13:08:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:33.379 13:08:47 -- common/autotest_common.sh@10 -- # set +x 00:04:33.379 13:08:47 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:04:33.379 13:08:47 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:33.379 13:08:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:33.379 13:08:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:33.379 13:08:47 -- common/autotest_common.sh@10 -- # set +x 00:04:33.379 ************************************ 00:04:33.379 START TEST spdkcli_tcp 00:04:33.379 ************************************ 00:04:33.379 13:08:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:33.379 * Looking for test storage... 
00:04:33.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:33.379 13:08:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:33.379 13:08:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:33.379 13:08:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:33.379 13:08:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:33.379 13:08:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:33.379 13:08:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:33.379 13:08:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:33.379 13:08:47 -- scripts/common.sh@335 -- # IFS=.-: 00:04:33.379 13:08:47 -- scripts/common.sh@335 -- # read -ra ver1 00:04:33.379 13:08:47 -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.379 13:08:47 -- scripts/common.sh@336 -- # read -ra ver2 00:04:33.379 13:08:47 -- scripts/common.sh@337 -- # local 'op=<' 00:04:33.379 13:08:47 -- scripts/common.sh@339 -- # ver1_l=2 00:04:33.379 13:08:47 -- scripts/common.sh@340 -- # ver2_l=1 00:04:33.379 13:08:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:33.379 13:08:47 -- scripts/common.sh@343 -- # case "$op" in 00:04:33.379 13:08:47 -- scripts/common.sh@344 -- # : 1 00:04:33.379 13:08:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:33.379 13:08:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:33.379 13:08:47 -- scripts/common.sh@364 -- # decimal 1 00:04:33.379 13:08:47 -- scripts/common.sh@352 -- # local d=1 00:04:33.379 13:08:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.379 13:08:47 -- scripts/common.sh@354 -- # echo 1 00:04:33.379 13:08:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:33.379 13:08:47 -- scripts/common.sh@365 -- # decimal 2 00:04:33.379 13:08:47 -- scripts/common.sh@352 -- # local d=2 00:04:33.379 13:08:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.379 13:08:47 -- scripts/common.sh@354 -- # echo 2 00:04:33.379 13:08:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:33.379 13:08:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:33.379 13:08:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:33.379 13:08:47 -- scripts/common.sh@367 -- # return 0 00:04:33.379 13:08:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.379 13:08:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:33.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.379 --rc genhtml_branch_coverage=1 00:04:33.379 --rc genhtml_function_coverage=1 00:04:33.379 --rc genhtml_legend=1 00:04:33.379 --rc geninfo_all_blocks=1 00:04:33.379 --rc geninfo_unexecuted_blocks=1 00:04:33.379 00:04:33.379 ' 00:04:33.379 13:08:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:33.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.379 --rc genhtml_branch_coverage=1 00:04:33.379 --rc genhtml_function_coverage=1 00:04:33.379 --rc genhtml_legend=1 00:04:33.379 --rc geninfo_all_blocks=1 00:04:33.379 --rc geninfo_unexecuted_blocks=1 00:04:33.379 00:04:33.379 ' 00:04:33.379 13:08:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:33.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.379 --rc genhtml_branch_coverage=1 00:04:33.379 --rc genhtml_function_coverage=1 00:04:33.379 --rc genhtml_legend=1 00:04:33.379 --rc geninfo_all_blocks=1 00:04:33.379 --rc geninfo_unexecuted_blocks=1 00:04:33.379 00:04:33.379 ' 00:04:33.379 13:08:47 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:33.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.379 --rc genhtml_branch_coverage=1 00:04:33.379 --rc genhtml_function_coverage=1 00:04:33.379 --rc genhtml_legend=1 00:04:33.379 --rc geninfo_all_blocks=1 00:04:33.379 --rc geninfo_unexecuted_blocks=1 00:04:33.379 00:04:33.379 ' 00:04:33.379 13:08:47 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:33.379 13:08:47 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:33.379 13:08:47 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:33.379 13:08:47 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:33.379 13:08:47 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:33.379 13:08:47 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:33.379 13:08:47 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:33.379 13:08:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:33.379 13:08:47 -- common/autotest_common.sh@10 -- # set +x 00:04:33.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.379 13:08:47 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=56662 00:04:33.379 13:08:47 -- spdkcli/tcp.sh@27 -- # waitforlisten 56662 00:04:33.379 13:08:47 -- common/autotest_common.sh@829 -- # '[' -z 56662 ']' 00:04:33.379 13:08:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.379 13:08:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:33.379 13:08:47 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:33.379 13:08:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.380 13:08:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:33.380 13:08:47 -- common/autotest_common.sh@10 -- # set +x 00:04:33.638 [2024-12-16 13:08:47.969145] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:33.638 [2024-12-16 13:08:47.969249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56662 ] 00:04:33.638 [2024-12-16 13:08:48.117848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:33.896 [2024-12-16 13:08:48.259358] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:33.896 [2024-12-16 13:08:48.259725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.896 [2024-12-16 13:08:48.259743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.461 13:08:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.461 13:08:48 -- common/autotest_common.sh@862 -- # return 0 00:04:34.461 13:08:48 -- spdkcli/tcp.sh@31 -- # socat_pid=56679 00:04:34.461 13:08:48 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:34.461 13:08:48 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:34.461 [ 00:04:34.461 "bdev_malloc_delete", 00:04:34.461 "bdev_malloc_create", 00:04:34.461 "bdev_null_resize", 00:04:34.461 "bdev_null_delete", 00:04:34.461 "bdev_null_create", 00:04:34.461 "bdev_nvme_cuse_unregister", 00:04:34.461 "bdev_nvme_cuse_register", 00:04:34.461 "bdev_opal_new_user", 00:04:34.461 "bdev_opal_set_lock_state", 00:04:34.461 "bdev_opal_delete", 00:04:34.461 "bdev_opal_get_info", 00:04:34.461 "bdev_opal_create", 00:04:34.461 "bdev_nvme_opal_revert", 00:04:34.461 "bdev_nvme_opal_init", 00:04:34.461 "bdev_nvme_send_cmd", 00:04:34.461 "bdev_nvme_get_path_iostat", 00:04:34.461 "bdev_nvme_get_mdns_discovery_info", 00:04:34.461 "bdev_nvme_stop_mdns_discovery", 00:04:34.462 "bdev_nvme_start_mdns_discovery", 00:04:34.462 "bdev_nvme_set_multipath_policy", 00:04:34.462 "bdev_nvme_set_preferred_path", 00:04:34.462 "bdev_nvme_get_io_paths", 00:04:34.462 "bdev_nvme_remove_error_injection", 00:04:34.462 "bdev_nvme_add_error_injection", 00:04:34.462 "bdev_nvme_get_discovery_info", 00:04:34.462 "bdev_nvme_stop_discovery", 00:04:34.462 "bdev_nvme_start_discovery", 00:04:34.462 "bdev_nvme_get_controller_health_info", 00:04:34.462 "bdev_nvme_disable_controller", 00:04:34.462 "bdev_nvme_enable_controller", 00:04:34.462 "bdev_nvme_reset_controller", 00:04:34.462 "bdev_nvme_get_transport_statistics", 00:04:34.462 "bdev_nvme_apply_firmware", 00:04:34.462 "bdev_nvme_detach_controller", 00:04:34.462 "bdev_nvme_get_controllers", 00:04:34.462 "bdev_nvme_attach_controller", 00:04:34.462 "bdev_nvme_set_hotplug", 00:04:34.462 "bdev_nvme_set_options", 00:04:34.462 "bdev_passthru_delete", 00:04:34.462 "bdev_passthru_create", 00:04:34.462 "bdev_lvol_grow_lvstore", 00:04:34.462 "bdev_lvol_get_lvols", 00:04:34.462 "bdev_lvol_get_lvstores", 00:04:34.462 "bdev_lvol_delete", 00:04:34.462 "bdev_lvol_set_read_only", 00:04:34.462 "bdev_lvol_resize", 00:04:34.462 "bdev_lvol_decouple_parent", 00:04:34.462 "bdev_lvol_inflate", 00:04:34.462 "bdev_lvol_rename", 00:04:34.462 "bdev_lvol_clone_bdev", 00:04:34.462 "bdev_lvol_clone", 00:04:34.462 "bdev_lvol_snapshot", 00:04:34.462 "bdev_lvol_create", 00:04:34.462 "bdev_lvol_delete_lvstore", 00:04:34.462 "bdev_lvol_rename_lvstore", 00:04:34.462 "bdev_lvol_create_lvstore", 00:04:34.462 "bdev_raid_set_options", 00:04:34.462 "bdev_raid_remove_base_bdev", 00:04:34.462 "bdev_raid_add_base_bdev", 
00:04:34.462 "bdev_raid_delete", 00:04:34.462 "bdev_raid_create", 00:04:34.462 "bdev_raid_get_bdevs", 00:04:34.462 "bdev_error_inject_error", 00:04:34.462 "bdev_error_delete", 00:04:34.462 "bdev_error_create", 00:04:34.462 "bdev_split_delete", 00:04:34.462 "bdev_split_create", 00:04:34.462 "bdev_delay_delete", 00:04:34.462 "bdev_delay_create", 00:04:34.462 "bdev_delay_update_latency", 00:04:34.462 "bdev_zone_block_delete", 00:04:34.462 "bdev_zone_block_create", 00:04:34.462 "blobfs_create", 00:04:34.462 "blobfs_detect", 00:04:34.462 "blobfs_set_cache_size", 00:04:34.462 "bdev_xnvme_delete", 00:04:34.462 "bdev_xnvme_create", 00:04:34.462 "bdev_aio_delete", 00:04:34.462 "bdev_aio_rescan", 00:04:34.462 "bdev_aio_create", 00:04:34.462 "bdev_ftl_set_property", 00:04:34.462 "bdev_ftl_get_properties", 00:04:34.462 "bdev_ftl_get_stats", 00:04:34.462 "bdev_ftl_unmap", 00:04:34.462 "bdev_ftl_unload", 00:04:34.462 "bdev_ftl_delete", 00:04:34.462 "bdev_ftl_load", 00:04:34.462 "bdev_ftl_create", 00:04:34.462 "bdev_virtio_attach_controller", 00:04:34.462 "bdev_virtio_scsi_get_devices", 00:04:34.462 "bdev_virtio_detach_controller", 00:04:34.462 "bdev_virtio_blk_set_hotplug", 00:04:34.462 "bdev_iscsi_delete", 00:04:34.462 "bdev_iscsi_create", 00:04:34.462 "bdev_iscsi_set_options", 00:04:34.462 "accel_error_inject_error", 00:04:34.462 "ioat_scan_accel_module", 00:04:34.462 "dsa_scan_accel_module", 00:04:34.462 "iaa_scan_accel_module", 00:04:34.462 "iscsi_set_options", 00:04:34.462 "iscsi_get_auth_groups", 00:04:34.462 "iscsi_auth_group_remove_secret", 00:04:34.462 "iscsi_auth_group_add_secret", 00:04:34.462 "iscsi_delete_auth_group", 00:04:34.462 "iscsi_create_auth_group", 00:04:34.462 "iscsi_set_discovery_auth", 00:04:34.462 "iscsi_get_options", 00:04:34.462 "iscsi_target_node_request_logout", 00:04:34.462 "iscsi_target_node_set_redirect", 00:04:34.462 "iscsi_target_node_set_auth", 00:04:34.462 "iscsi_target_node_add_lun", 00:04:34.462 "iscsi_get_connections", 00:04:34.462 "iscsi_portal_group_set_auth", 00:04:34.462 "iscsi_start_portal_group", 00:04:34.462 "iscsi_delete_portal_group", 00:04:34.462 "iscsi_create_portal_group", 00:04:34.462 "iscsi_get_portal_groups", 00:04:34.462 "iscsi_delete_target_node", 00:04:34.462 "iscsi_target_node_remove_pg_ig_maps", 00:04:34.462 "iscsi_target_node_add_pg_ig_maps", 00:04:34.462 "iscsi_create_target_node", 00:04:34.462 "iscsi_get_target_nodes", 00:04:34.462 "iscsi_delete_initiator_group", 00:04:34.462 "iscsi_initiator_group_remove_initiators", 00:04:34.462 "iscsi_initiator_group_add_initiators", 00:04:34.462 "iscsi_create_initiator_group", 00:04:34.462 "iscsi_get_initiator_groups", 00:04:34.462 "nvmf_set_crdt", 00:04:34.462 "nvmf_set_config", 00:04:34.462 "nvmf_set_max_subsystems", 00:04:34.462 "nvmf_subsystem_get_listeners", 00:04:34.462 "nvmf_subsystem_get_qpairs", 00:04:34.462 "nvmf_subsystem_get_controllers", 00:04:34.462 "nvmf_get_stats", 00:04:34.462 "nvmf_get_transports", 00:04:34.462 "nvmf_create_transport", 00:04:34.462 "nvmf_get_targets", 00:04:34.462 "nvmf_delete_target", 00:04:34.462 "nvmf_create_target", 00:04:34.462 "nvmf_subsystem_allow_any_host", 00:04:34.462 "nvmf_subsystem_remove_host", 00:04:34.462 "nvmf_subsystem_add_host", 00:04:34.462 "nvmf_subsystem_remove_ns", 00:04:34.462 "nvmf_subsystem_add_ns", 00:04:34.462 "nvmf_subsystem_listener_set_ana_state", 00:04:34.462 "nvmf_discovery_get_referrals", 00:04:34.462 "nvmf_discovery_remove_referral", 00:04:34.462 "nvmf_discovery_add_referral", 00:04:34.462 "nvmf_subsystem_remove_listener", 00:04:34.462 
"nvmf_subsystem_add_listener", 00:04:34.462 "nvmf_delete_subsystem", 00:04:34.462 "nvmf_create_subsystem", 00:04:34.462 "nvmf_get_subsystems", 00:04:34.462 "env_dpdk_get_mem_stats", 00:04:34.462 "nbd_get_disks", 00:04:34.462 "nbd_stop_disk", 00:04:34.462 "nbd_start_disk", 00:04:34.462 "ublk_recover_disk", 00:04:34.462 "ublk_get_disks", 00:04:34.462 "ublk_stop_disk", 00:04:34.462 "ublk_start_disk", 00:04:34.462 "ublk_destroy_target", 00:04:34.462 "ublk_create_target", 00:04:34.462 "virtio_blk_create_transport", 00:04:34.462 "virtio_blk_get_transports", 00:04:34.462 "vhost_controller_set_coalescing", 00:04:34.462 "vhost_get_controllers", 00:04:34.462 "vhost_delete_controller", 00:04:34.462 "vhost_create_blk_controller", 00:04:34.462 "vhost_scsi_controller_remove_target", 00:04:34.462 "vhost_scsi_controller_add_target", 00:04:34.462 "vhost_start_scsi_controller", 00:04:34.462 "vhost_create_scsi_controller", 00:04:34.462 "thread_set_cpumask", 00:04:34.462 "framework_get_scheduler", 00:04:34.462 "framework_set_scheduler", 00:04:34.462 "framework_get_reactors", 00:04:34.462 "thread_get_io_channels", 00:04:34.462 "thread_get_pollers", 00:04:34.462 "thread_get_stats", 00:04:34.462 "framework_monitor_context_switch", 00:04:34.462 "spdk_kill_instance", 00:04:34.462 "log_enable_timestamps", 00:04:34.462 "log_get_flags", 00:04:34.462 "log_clear_flag", 00:04:34.462 "log_set_flag", 00:04:34.462 "log_get_level", 00:04:34.462 "log_set_level", 00:04:34.462 "log_get_print_level", 00:04:34.462 "log_set_print_level", 00:04:34.462 "framework_enable_cpumask_locks", 00:04:34.462 "framework_disable_cpumask_locks", 00:04:34.462 "framework_wait_init", 00:04:34.462 "framework_start_init", 00:04:34.462 "scsi_get_devices", 00:04:34.462 "bdev_get_histogram", 00:04:34.462 "bdev_enable_histogram", 00:04:34.462 "bdev_set_qos_limit", 00:04:34.462 "bdev_set_qd_sampling_period", 00:04:34.462 "bdev_get_bdevs", 00:04:34.462 "bdev_reset_iostat", 00:04:34.462 "bdev_get_iostat", 00:04:34.462 "bdev_examine", 00:04:34.462 "bdev_wait_for_examine", 00:04:34.462 "bdev_set_options", 00:04:34.462 "notify_get_notifications", 00:04:34.462 "notify_get_types", 00:04:34.462 "accel_get_stats", 00:04:34.462 "accel_set_options", 00:04:34.462 "accel_set_driver", 00:04:34.462 "accel_crypto_key_destroy", 00:04:34.462 "accel_crypto_keys_get", 00:04:34.462 "accel_crypto_key_create", 00:04:34.462 "accel_assign_opc", 00:04:34.462 "accel_get_module_info", 00:04:34.462 "accel_get_opc_assignments", 00:04:34.462 "vmd_rescan", 00:04:34.462 "vmd_remove_device", 00:04:34.462 "vmd_enable", 00:04:34.462 "sock_set_default_impl", 00:04:34.462 "sock_impl_set_options", 00:04:34.462 "sock_impl_get_options", 00:04:34.462 "iobuf_get_stats", 00:04:34.462 "iobuf_set_options", 00:04:34.462 "framework_get_pci_devices", 00:04:34.462 "framework_get_config", 00:04:34.462 "framework_get_subsystems", 00:04:34.462 "trace_get_info", 00:04:34.462 "trace_get_tpoint_group_mask", 00:04:34.462 "trace_disable_tpoint_group", 00:04:34.462 "trace_enable_tpoint_group", 00:04:34.462 "trace_clear_tpoint_mask", 00:04:34.462 "trace_set_tpoint_mask", 00:04:34.462 "spdk_get_version", 00:04:34.462 "rpc_get_methods" 00:04:34.462 ] 00:04:34.462 13:08:48 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:34.462 13:08:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:34.462 13:08:48 -- common/autotest_common.sh@10 -- # set +x 00:04:34.462 13:08:48 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:34.462 13:08:48 -- spdkcli/tcp.sh@38 -- # killprocess 56662 00:04:34.462 
13:08:48 -- common/autotest_common.sh@936 -- # '[' -z 56662 ']' 00:04:34.462 13:08:48 -- common/autotest_common.sh@940 -- # kill -0 56662 00:04:34.462 13:08:48 -- common/autotest_common.sh@941 -- # uname 00:04:34.462 13:08:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:34.462 13:08:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56662 00:04:34.462 13:08:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:34.462 13:08:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:34.462 killing process with pid 56662 00:04:34.462 13:08:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56662' 00:04:34.462 13:08:49 -- common/autotest_common.sh@955 -- # kill 56662 00:04:34.462 13:08:49 -- common/autotest_common.sh@960 -- # wait 56662 00:04:35.837 00:04:35.837 real 0m2.422s 00:04:35.837 user 0m4.200s 00:04:35.837 sys 0m0.391s 00:04:35.837 13:08:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:35.837 ************************************ 00:04:35.837 END TEST spdkcli_tcp 00:04:35.837 ************************************ 00:04:35.837 13:08:50 -- common/autotest_common.sh@10 -- # set +x 00:04:35.837 13:08:50 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:35.837 13:08:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:35.837 13:08:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:35.837 13:08:50 -- common/autotest_common.sh@10 -- # set +x 00:04:35.837 ************************************ 00:04:35.837 START TEST dpdk_mem_utility 00:04:35.837 ************************************ 00:04:35.837 13:08:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:35.837 * Looking for test storage... 00:04:35.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:35.837 13:08:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:35.837 13:08:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:35.837 13:08:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:35.837 13:08:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:35.837 13:08:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:35.837 13:08:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:35.837 13:08:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:35.837 13:08:50 -- scripts/common.sh@335 -- # IFS=.-: 00:04:35.837 13:08:50 -- scripts/common.sh@335 -- # read -ra ver1 00:04:35.837 13:08:50 -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.837 13:08:50 -- scripts/common.sh@336 -- # read -ra ver2 00:04:35.837 13:08:50 -- scripts/common.sh@337 -- # local 'op=<' 00:04:35.837 13:08:50 -- scripts/common.sh@339 -- # ver1_l=2 00:04:35.837 13:08:50 -- scripts/common.sh@340 -- # ver2_l=1 00:04:35.837 13:08:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:35.837 13:08:50 -- scripts/common.sh@343 -- # case "$op" in 00:04:35.837 13:08:50 -- scripts/common.sh@344 -- # : 1 00:04:35.837 13:08:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:35.837 13:08:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.837 13:08:50 -- scripts/common.sh@364 -- # decimal 1 00:04:35.837 13:08:50 -- scripts/common.sh@352 -- # local d=1 00:04:35.837 13:08:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.837 13:08:50 -- scripts/common.sh@354 -- # echo 1 00:04:35.837 13:08:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:35.837 13:08:50 -- scripts/common.sh@365 -- # decimal 2 00:04:35.837 13:08:50 -- scripts/common.sh@352 -- # local d=2 00:04:35.837 13:08:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.837 13:08:50 -- scripts/common.sh@354 -- # echo 2 00:04:35.837 13:08:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:35.837 13:08:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:35.837 13:08:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:35.837 13:08:50 -- scripts/common.sh@367 -- # return 0 00:04:35.837 13:08:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.837 13:08:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:35.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.837 --rc genhtml_branch_coverage=1 00:04:35.837 --rc genhtml_function_coverage=1 00:04:35.837 --rc genhtml_legend=1 00:04:35.837 --rc geninfo_all_blocks=1 00:04:35.837 --rc geninfo_unexecuted_blocks=1 00:04:35.837 00:04:35.837 ' 00:04:35.837 13:08:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:35.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.837 --rc genhtml_branch_coverage=1 00:04:35.837 --rc genhtml_function_coverage=1 00:04:35.837 --rc genhtml_legend=1 00:04:35.837 --rc geninfo_all_blocks=1 00:04:35.837 --rc geninfo_unexecuted_blocks=1 00:04:35.837 00:04:35.837 ' 00:04:35.837 13:08:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:35.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.837 --rc genhtml_branch_coverage=1 00:04:35.837 --rc genhtml_function_coverage=1 00:04:35.837 --rc genhtml_legend=1 00:04:35.837 --rc geninfo_all_blocks=1 00:04:35.837 --rc geninfo_unexecuted_blocks=1 00:04:35.837 00:04:35.837 ' 00:04:35.837 13:08:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:35.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.837 --rc genhtml_branch_coverage=1 00:04:35.837 --rc genhtml_function_coverage=1 00:04:35.837 --rc genhtml_legend=1 00:04:35.837 --rc geninfo_all_blocks=1 00:04:35.837 --rc geninfo_unexecuted_blocks=1 00:04:35.837 00:04:35.837 ' 00:04:35.837 13:08:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:35.837 13:08:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56771 00:04:35.837 13:08:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56771 00:04:35.837 13:08:50 -- common/autotest_common.sh@829 -- # '[' -z 56771 ']' 00:04:35.837 13:08:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.837 13:08:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:35.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.837 13:08:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
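The alias_rpc and spdkcli_tcp teardowns earlier both go through killprocess: confirm the pid is alive, inspect its command name with ps (reactor_0 for an SPDK app), refuse to signal anything running as sudo, then kill and wait. A compact sketch of that guard (safe_kill is a hypothetical name):

safe_kill() {
  local pid=$1 name
  [[ -n $pid ]] || return 1
  kill -0 "$pid" 2> /dev/null || return 1      # process must still exist
  name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_0 for spdk_tgt
  [[ $name == sudo ]] && { echo "refusing to kill sudo" >&2; return 1; }
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2> /dev/null                     # reaps it when it is our child
}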
00:04:35.838 13:08:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.838 13:08:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:35.838 13:08:50 -- common/autotest_common.sh@10 -- # set +x 00:04:36.096 [2024-12-16 13:08:50.429913] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:36.096 [2024-12-16 13:08:50.430022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56771 ] 00:04:36.096 [2024-12-16 13:08:50.570691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.354 [2024-12-16 13:08:50.707647] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:36.354 [2024-12-16 13:08:50.707798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.921 13:08:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:36.921 13:08:51 -- common/autotest_common.sh@862 -- # return 0 00:04:36.921 13:08:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:36.921 13:08:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:36.921 13:08:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.921 13:08:51 -- common/autotest_common.sh@10 -- # set +x 00:04:36.921 { 00:04:36.921 "filename": "/tmp/spdk_mem_dump.txt" 00:04:36.921 } 00:04:36.921 13:08:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.921 13:08:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:36.921 DPDK memory size 820.000000 MiB in 1 heap(s) 00:04:36.921 1 heaps totaling size 820.000000 MiB 00:04:36.921 size: 820.000000 MiB heap id: 0 00:04:36.921 end heaps---------- 00:04:36.921 8 mempools totaling size 598.116089 MiB 00:04:36.921 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:36.921 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:36.921 size: 84.521057 MiB name: bdev_io_56771 00:04:36.921 size: 51.011292 MiB name: evtpool_56771 00:04:36.921 size: 50.003479 MiB name: msgpool_56771 00:04:36.921 size: 21.763794 MiB name: PDU_Pool 00:04:36.921 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:36.921 size: 0.026123 MiB name: Session_Pool 00:04:36.921 end mempools------- 00:04:36.921 6 memzones totaling size 4.142822 MiB 00:04:36.921 size: 1.000366 MiB name: RG_ring_0_56771 00:04:36.921 size: 1.000366 MiB name: RG_ring_1_56771 00:04:36.921 size: 1.000366 MiB name: RG_ring_4_56771 00:04:36.921 size: 1.000366 MiB name: RG_ring_5_56771 00:04:36.921 size: 0.125366 MiB name: RG_ring_2_56771 00:04:36.921 size: 0.015991 MiB name: RG_ring_3_56771 00:04:36.921 end memzones------- 00:04:36.921 13:08:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:36.921 heap id: 0 total size: 820.000000 MiB number of busy elements: 309 number of free elements: 18 00:04:36.921 list of free elements. 
size: 18.449341 MiB 00:04:36.921 element at address: 0x200000400000 with size: 1.999451 MiB 00:04:36.921 element at address: 0x200000800000 with size: 1.996887 MiB 00:04:36.921 element at address: 0x200007000000 with size: 1.995972 MiB 00:04:36.921 element at address: 0x20000b200000 with size: 1.995972 MiB 00:04:36.921 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:36.921 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:36.921 element at address: 0x200019600000 with size: 0.999084 MiB 00:04:36.921 element at address: 0x200003e00000 with size: 0.996094 MiB 00:04:36.921 element at address: 0x200032200000 with size: 0.994324 MiB 00:04:36.921 element at address: 0x200018e00000 with size: 0.959656 MiB 00:04:36.921 element at address: 0x200019900040 with size: 0.936401 MiB 00:04:36.921 element at address: 0x200000200000 with size: 0.829224 MiB 00:04:36.921 element at address: 0x20001b000000 with size: 0.562927 MiB 00:04:36.921 element at address: 0x200019200000 with size: 0.487976 MiB 00:04:36.921 element at address: 0x200019a00000 with size: 0.485413 MiB 00:04:36.921 element at address: 0x200013800000 with size: 0.467651 MiB 00:04:36.921 element at address: 0x200028400000 with size: 0.390442 MiB 00:04:36.921 element at address: 0x200003a00000 with size: 0.351990 MiB 00:04:36.921 list of standard malloc elements. size: 199.286255 MiB 00:04:36.921 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:04:36.921 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:04:36.921 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:36.921 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:36.921 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:36.921 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:36.921 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:04:36.921 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:36.921 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:04:36.921 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:04:36.921 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:04:36.921 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:04:36.921 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:04:36.921 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:04:36.922 element at 
address: 0x200003a5aec0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200003aff980 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200003affa80 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200003eff000 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200013877b80 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200013877c80 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200013877d80 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200013877e80 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200013877f80 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200013878080 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200013878180 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200013878280 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200013878380 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200013878480 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200013878580 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001927d0c0 
with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x200019abc680 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0923c0 with size: 0.000244 MiB 
00:04:36.922 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:04:36.922 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:04:36.923 element at address: 0x200028463f40 with size: 0.000244 MiB 00:04:36.923 element at 
address: 0x200028464040 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846af80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846b080 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846b180 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846b280 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846b380 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846b480 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846b580 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846b680 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846b780 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846b880 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846b980 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846be80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846c080 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846c180 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846c280 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846c380 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846c480 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846c580 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846c680 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846c780 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846c880 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846c980 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846d080 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846d180 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846d280 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846d380 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846d480 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846d580 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846d680 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846d780 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846d880 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846d980 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846da80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846db80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846de80 
with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846df80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846e080 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846e180 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846e280 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846e380 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846e480 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846e580 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846e680 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846e780 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846e880 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846e980 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846f080 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846f180 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846f280 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846f380 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846f480 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846f580 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846f680 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846f780 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846f880 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846f980 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:04:36.923 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:04:36.923 list of memzone associated elements. 
size: 602.264404 MiB 00:04:36.923 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:04:36.923 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:36.923 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:04:36.923 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:36.923 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:04:36.923 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_56771_0 00:04:36.923 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:04:36.923 associated memzone info: size: 48.002930 MiB name: MP_evtpool_56771_0 00:04:36.923 element at address: 0x200003fff340 with size: 48.003113 MiB 00:04:36.923 associated memzone info: size: 48.002930 MiB name: MP_msgpool_56771_0 00:04:36.923 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:04:36.923 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:36.923 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:04:36.923 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:36.923 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:04:36.923 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_56771 00:04:36.923 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:04:36.923 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_56771 00:04:36.923 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:36.923 associated memzone info: size: 1.007996 MiB name: MP_evtpool_56771 00:04:36.923 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:36.923 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:36.923 element at address: 0x200019abc780 with size: 1.008179 MiB 00:04:36.923 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:36.923 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:36.923 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:36.923 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:04:36.923 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:36.923 element at address: 0x200003eff100 with size: 1.000549 MiB 00:04:36.923 associated memzone info: size: 1.000366 MiB name: RG_ring_0_56771 00:04:36.923 element at address: 0x200003affb80 with size: 1.000549 MiB 00:04:36.923 associated memzone info: size: 1.000366 MiB name: RG_ring_1_56771 00:04:36.923 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:04:36.923 associated memzone info: size: 1.000366 MiB name: RG_ring_4_56771 00:04:36.923 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:04:36.923 associated memzone info: size: 1.000366 MiB name: RG_ring_5_56771 00:04:36.923 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:04:36.923 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_56771 00:04:36.923 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:04:36.923 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:36.923 element at address: 0x200013878680 with size: 0.500549 MiB 00:04:36.924 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:36.924 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:04:36.924 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:36.924 element at address: 0x200003adf740 with size: 0.125549 MiB 00:04:36.924 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_56771 00:04:36.924 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:04:36.924 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:36.924 element at address: 0x200028464140 with size: 0.023804 MiB 00:04:36.924 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:36.924 element at address: 0x200003adb500 with size: 0.016174 MiB 00:04:36.924 associated memzone info: size: 0.015991 MiB name: RG_ring_3_56771 00:04:36.924 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:04:36.924 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:36.924 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:04:36.924 associated memzone info: size: 0.000183 MiB name: MP_msgpool_56771 00:04:36.924 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:04:36.924 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_56771 00:04:36.924 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:04:36.924 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:36.924 13:08:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:36.924 13:08:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56771 00:04:36.924 13:08:51 -- common/autotest_common.sh@936 -- # '[' -z 56771 ']' 00:04:36.924 13:08:51 -- common/autotest_common.sh@940 -- # kill -0 56771 00:04:36.924 13:08:51 -- common/autotest_common.sh@941 -- # uname 00:04:36.924 13:08:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:36.924 13:08:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56771 00:04:36.924 13:08:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:36.924 13:08:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:36.924 killing process with pid 56771 00:04:36.924 13:08:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56771' 00:04:36.924 13:08:51 -- common/autotest_common.sh@955 -- # kill 56771 00:04:36.924 13:08:51 -- common/autotest_common.sh@960 -- # wait 56771 00:04:38.336 00:04:38.336 real 0m2.269s 00:04:38.336 user 0m2.238s 00:04:38.336 sys 0m0.368s 00:04:38.336 13:08:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:38.336 ************************************ 00:04:38.336 END TEST dpdk_mem_utility 00:04:38.336 ************************************ 00:04:38.336 13:08:52 -- common/autotest_common.sh@10 -- # set +x 00:04:38.336 13:08:52 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:38.336 13:08:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.336 13:08:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.336 13:08:52 -- common/autotest_common.sh@10 -- # set +x 00:04:38.336 ************************************ 00:04:38.336 START TEST event 00:04:38.336 ************************************ 00:04:38.336 13:08:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:38.336 * Looking for test storage... 
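[Note: stripped of the xtrace noise, the dpdk_mem_utility test that just finished above is a short round-trip: launch spdk_tgt, ask it over the RPC socket to dump its DPDK memory statistics, then post-process the dump with scripts/dpdk_mem_info.py — no flags for the heap/mempool/memzone summary, -m 0 for the element-level map of heap 0. A condensed sketch using the same binaries and paths as this run; the waitforlisten gating is replaced by a sleep for brevity:]

    SPDK=/home/vagrant/spdk_repo/spdk

    $SPDK/build/bin/spdk_tgt &             # target process that owns the DPDK env
    spdkpid=$!
    trap "kill $spdkpid" EXIT
    sleep 1                                # stand-in for waitforlisten on /var/tmp/spdk.sock

    # Writes the raw stats to /tmp/spdk_mem_dump.txt, as the RPC reply above shows.
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats

    $SPDK/scripts/dpdk_mem_info.py         # heaps / mempools / memzones summary
    $SPDK/scripts/dpdk_mem_info.py -m 0    # per-element breakdown of heap id 0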
00:04:38.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:38.336 13:08:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:38.336 13:08:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:38.336 13:08:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:38.336 13:08:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:38.336 13:08:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:38.336 13:08:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:38.336 13:08:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:38.336 13:08:52 -- scripts/common.sh@335 -- # IFS=.-: 00:04:38.336 13:08:52 -- scripts/common.sh@335 -- # read -ra ver1 00:04:38.336 13:08:52 -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.336 13:08:52 -- scripts/common.sh@336 -- # read -ra ver2 00:04:38.336 13:08:52 -- scripts/common.sh@337 -- # local 'op=<' 00:04:38.336 13:08:52 -- scripts/common.sh@339 -- # ver1_l=2 00:04:38.336 13:08:52 -- scripts/common.sh@340 -- # ver2_l=1 00:04:38.336 13:08:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:38.336 13:08:52 -- scripts/common.sh@343 -- # case "$op" in 00:04:38.336 13:08:52 -- scripts/common.sh@344 -- # : 1 00:04:38.336 13:08:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:38.336 13:08:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:38.336 13:08:52 -- scripts/common.sh@364 -- # decimal 1 00:04:38.336 13:08:52 -- scripts/common.sh@352 -- # local d=1 00:04:38.336 13:08:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.336 13:08:52 -- scripts/common.sh@354 -- # echo 1 00:04:38.337 13:08:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:38.337 13:08:52 -- scripts/common.sh@365 -- # decimal 2 00:04:38.337 13:08:52 -- scripts/common.sh@352 -- # local d=2 00:04:38.337 13:08:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.337 13:08:52 -- scripts/common.sh@354 -- # echo 2 00:04:38.337 13:08:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:38.337 13:08:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:38.337 13:08:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:38.337 13:08:52 -- scripts/common.sh@367 -- # return 0 00:04:38.337 13:08:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.337 13:08:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:38.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.337 --rc genhtml_branch_coverage=1 00:04:38.337 --rc genhtml_function_coverage=1 00:04:38.337 --rc genhtml_legend=1 00:04:38.337 --rc geninfo_all_blocks=1 00:04:38.337 --rc geninfo_unexecuted_blocks=1 00:04:38.337 00:04:38.337 ' 00:04:38.337 13:08:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:38.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.337 --rc genhtml_branch_coverage=1 00:04:38.337 --rc genhtml_function_coverage=1 00:04:38.337 --rc genhtml_legend=1 00:04:38.337 --rc geninfo_all_blocks=1 00:04:38.337 --rc geninfo_unexecuted_blocks=1 00:04:38.337 00:04:38.337 ' 00:04:38.337 13:08:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:38.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.337 --rc genhtml_branch_coverage=1 00:04:38.337 --rc genhtml_function_coverage=1 00:04:38.337 --rc genhtml_legend=1 00:04:38.337 --rc geninfo_all_blocks=1 00:04:38.337 --rc geninfo_unexecuted_blocks=1 00:04:38.337 00:04:38.337 ' 00:04:38.337 13:08:52 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:38.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.337 --rc genhtml_branch_coverage=1 00:04:38.337 --rc genhtml_function_coverage=1 00:04:38.337 --rc genhtml_legend=1 00:04:38.337 --rc geninfo_all_blocks=1 00:04:38.337 --rc geninfo_unexecuted_blocks=1 00:04:38.337 00:04:38.337 ' 00:04:38.337 13:08:52 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:38.337 13:08:52 -- bdev/nbd_common.sh@6 -- # set -e 00:04:38.337 13:08:52 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:38.337 13:08:52 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:04:38.337 13:08:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.337 13:08:52 -- common/autotest_common.sh@10 -- # set +x 00:04:38.337 ************************************ 00:04:38.337 START TEST event_perf 00:04:38.337 ************************************ 00:04:38.337 13:08:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:38.337 Running I/O for 1 seconds...[2024-12-16 13:08:52.741290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:38.337 [2024-12-16 13:08:52.741467] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56857 ] 00:04:38.337 [2024-12-16 13:08:52.889882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:38.596 [2024-12-16 13:08:53.030696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.596 [2024-12-16 13:08:53.031131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.596 [2024-12-16 13:08:53.030798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:38.596 [2024-12-16 13:08:53.031157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:39.970 Running I/O for 1 seconds... 00:04:39.970 lcore 0: 203778 00:04:39.970 lcore 1: 203780 00:04:39.970 lcore 2: 203778 00:04:39.970 lcore 3: 203778 00:04:39.970 done. 00:04:39.970 00:04:39.970 real 0m1.529s 00:04:39.970 user 0m4.334s 00:04:39.970 sys 0m0.077s 00:04:39.970 13:08:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:39.970 13:08:54 -- common/autotest_common.sh@10 -- # set +x 00:04:39.970 ************************************ 00:04:39.970 END TEST event_perf 00:04:39.970 ************************************ 00:04:39.970 13:08:54 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:39.970 13:08:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:39.971 13:08:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:39.971 13:08:54 -- common/autotest_common.sh@10 -- # set +x 00:04:39.971 ************************************ 00:04:39.971 START TEST event_reactor 00:04:39.971 ************************************ 00:04:39.971 13:08:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:39.971 [2024-12-16 13:08:54.327819] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
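[Note: the event_perf run above spins up four reactors (-m 0xF) for one second (-t 1) and reports roughly 203,778 processed events per lcore. Re-run by hand with the same flags; the awk tail that totals the per-lcore counters is post-processing added here for illustration, not part of the test:]

    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 \
        | awk '/^lcore/ { total += $3 } END { print total, "events/sec across all reactors" }'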
00:04:39.971 [2024-12-16 13:08:54.328029] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56891 ] 00:04:39.971 [2024-12-16 13:08:54.473799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.229 [2024-12-16 13:08:54.611111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.603 test_start 00:04:41.603 oneshot 00:04:41.603 tick 100 00:04:41.603 tick 100 00:04:41.603 tick 250 00:04:41.603 tick 100 00:04:41.603 tick 100 00:04:41.603 tick 100 00:04:41.603 tick 250 00:04:41.603 tick 500 00:04:41.603 tick 100 00:04:41.603 tick 100 00:04:41.603 tick 250 00:04:41.603 tick 100 00:04:41.603 tick 100 00:04:41.603 test_end 00:04:41.603 00:04:41.603 real 0m1.517s 00:04:41.603 user 0m1.344s 00:04:41.603 sys 0m0.065s 00:04:41.603 13:08:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:41.603 ************************************ 00:04:41.603 END TEST event_reactor 00:04:41.603 ************************************ 00:04:41.603 13:08:55 -- common/autotest_common.sh@10 -- # set +x 00:04:41.603 13:08:55 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:41.603 13:08:55 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:41.603 13:08:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.603 13:08:55 -- common/autotest_common.sh@10 -- # set +x 00:04:41.603 ************************************ 00:04:41.603 START TEST event_reactor_perf 00:04:41.603 ************************************ 00:04:41.603 13:08:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:41.603 [2024-12-16 13:08:55.901249] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
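[Note: the event_reactor output above comes from a single reactor running timed pollers; each "tick N" line is one expiration of the poller registered with period N, which is why the 100-period poller fires most often and the 500 one only once in the one-second window. A sketch that re-runs the demo and counts expirations per period — the awk summary is illustrative only:]

    /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 \
        | awk '/^tick/ { fires[$2]++ } END { for (p in fires) print "period", p ":", fires[p], "expirations" }'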
00:04:41.603 [2024-12-16 13:08:55.901330] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56933 ] 00:04:41.603 [2024-12-16 13:08:56.041987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.862 [2024-12-16 13:08:56.188364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.869 test_start 00:04:42.869 test_end 00:04:42.869 Performance: 408639 events per second 00:04:42.869 00:04:42.869 real 0m1.522s 00:04:42.869 user 0m1.351s 00:04:42.869 sys 0m0.063s 00:04:42.869 13:08:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:42.869 ************************************ 00:04:42.869 END TEST event_reactor_perf 00:04:42.869 ************************************ 00:04:42.869 13:08:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.129 13:08:57 -- event/event.sh@49 -- # uname -s 00:04:43.129 13:08:57 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:43.129 13:08:57 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:43.129 13:08:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.129 13:08:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.129 13:08:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.129 ************************************ 00:04:43.129 START TEST event_scheduler 00:04:43.129 ************************************ 00:04:43.129 13:08:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:43.129 * Looking for test storage... 00:04:43.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:43.129 13:08:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:43.129 13:08:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:43.129 13:08:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:43.129 13:08:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:43.129 13:08:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:43.129 13:08:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:43.129 13:08:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:43.129 13:08:57 -- scripts/common.sh@335 -- # IFS=.-: 00:04:43.129 13:08:57 -- scripts/common.sh@335 -- # read -ra ver1 00:04:43.129 13:08:57 -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.129 13:08:57 -- scripts/common.sh@336 -- # read -ra ver2 00:04:43.129 13:08:57 -- scripts/common.sh@337 -- # local 'op=<' 00:04:43.129 13:08:57 -- scripts/common.sh@339 -- # ver1_l=2 00:04:43.129 13:08:57 -- scripts/common.sh@340 -- # ver2_l=1 00:04:43.129 13:08:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:43.129 13:08:57 -- scripts/common.sh@343 -- # case "$op" in 00:04:43.129 13:08:57 -- scripts/common.sh@344 -- # : 1 00:04:43.129 13:08:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:43.129 13:08:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.129 13:08:57 -- scripts/common.sh@364 -- # decimal 1 00:04:43.129 13:08:57 -- scripts/common.sh@352 -- # local d=1 00:04:43.129 13:08:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.129 13:08:57 -- scripts/common.sh@354 -- # echo 1 00:04:43.129 13:08:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:43.129 13:08:57 -- scripts/common.sh@365 -- # decimal 2 00:04:43.129 13:08:57 -- scripts/common.sh@352 -- # local d=2 00:04:43.129 13:08:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.129 13:08:57 -- scripts/common.sh@354 -- # echo 2 00:04:43.129 13:08:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:43.129 13:08:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:43.129 13:08:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:43.129 13:08:57 -- scripts/common.sh@367 -- # return 0 00:04:43.129 13:08:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.129 13:08:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:43.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.129 --rc genhtml_branch_coverage=1 00:04:43.129 --rc genhtml_function_coverage=1 00:04:43.129 --rc genhtml_legend=1 00:04:43.129 --rc geninfo_all_blocks=1 00:04:43.129 --rc geninfo_unexecuted_blocks=1 00:04:43.129 00:04:43.129 ' 00:04:43.129 13:08:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:43.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.129 --rc genhtml_branch_coverage=1 00:04:43.129 --rc genhtml_function_coverage=1 00:04:43.129 --rc genhtml_legend=1 00:04:43.129 --rc geninfo_all_blocks=1 00:04:43.129 --rc geninfo_unexecuted_blocks=1 00:04:43.129 00:04:43.129 ' 00:04:43.129 13:08:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:43.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.129 --rc genhtml_branch_coverage=1 00:04:43.129 --rc genhtml_function_coverage=1 00:04:43.129 --rc genhtml_legend=1 00:04:43.129 --rc geninfo_all_blocks=1 00:04:43.129 --rc geninfo_unexecuted_blocks=1 00:04:43.129 00:04:43.129 ' 00:04:43.129 13:08:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:43.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.129 --rc genhtml_branch_coverage=1 00:04:43.129 --rc genhtml_function_coverage=1 00:04:43.129 --rc genhtml_legend=1 00:04:43.129 --rc geninfo_all_blocks=1 00:04:43.129 --rc geninfo_unexecuted_blocks=1 00:04:43.129 00:04:43.129 ' 00:04:43.129 13:08:57 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:43.129 13:08:57 -- scheduler/scheduler.sh@35 -- # scheduler_pid=57008 00:04:43.129 13:08:57 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.129 13:08:57 -- scheduler/scheduler.sh@37 -- # waitforlisten 57008 00:04:43.129 13:08:57 -- common/autotest_common.sh@829 -- # '[' -z 57008 ']' 00:04:43.129 13:08:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.129 13:08:57 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:43.129 13:08:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.129 13:08:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
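[Note: the scheduler test being set up here starts the scheduler app with --wait-for-rpc, switches it to the dynamic scheduler, and then drives thread creation through a test-only RPC plugin, as the traces that follow show. The POWER errors below are expected on a VM with no writable cpufreq governor; the dynamic scheduler falls back and the test proceeds. The RPC sequence, reproduced by hand with the same script paths — this assumes the scheduler_plugin module is importable by rpc.py, and the thread ids 11/12 are the ones returned in this particular run:]

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }

    rpc framework_set_scheduler dynamic       # falls back if no usable governor
    rpc framework_start_init                  # leave the --wait-for-rpc pause

    # scheduler_thread_create / _set_active / _delete are not core RPCs; they
    # come from the test's scheduler_plugin, hence --plugin on each call:
    rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc --plugin scheduler_plugin scheduler_thread_set_active 11 50
    rpc --plugin scheduler_plugin scheduler_thread_delete 12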
00:04:43.129 13:08:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.129 13:08:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.129 [2024-12-16 13:08:57.647883] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:43.129 [2024-12-16 13:08:57.648153] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57008 ] 00:04:43.389 [2024-12-16 13:08:57.796699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:43.646 [2024-12-16 13:08:57.975218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.647 [2024-12-16 13:08:57.975519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.647 [2024-12-16 13:08:57.975766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:43.647 [2024-12-16 13:08:57.975948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.927 13:08:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.927 13:08:58 -- common/autotest_common.sh@862 -- # return 0 00:04:43.927 13:08:58 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:43.927 13:08:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.927 13:08:58 -- common/autotest_common.sh@10 -- # set +x 00:04:43.927 POWER: Env isn't set yet! 00:04:43.927 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:43.927 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:43.927 POWER: Cannot set governor of lcore 0 to userspace 00:04:43.927 POWER: Attempting to initialise PSTAT power management... 00:04:43.927 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:43.927 POWER: Cannot set governor of lcore 0 to performance 00:04:43.927 POWER: Attempting to initialise AMD PSTATE power management... 00:04:43.927 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:43.927 POWER: Cannot set governor of lcore 0 to userspace 00:04:43.927 POWER: Attempting to initialise CPPC power management... 00:04:43.927 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:43.927 POWER: Cannot set governor of lcore 0 to userspace 00:04:43.927 POWER: Attempting to initialise VM power management... 
00:04:43.927 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:43.927 POWER: Unable to set Power Management Environment for lcore 0 00:04:43.927 [2024-12-16 13:08:58.437008] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:04:43.927 [2024-12-16 13:08:58.437022] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:04:43.927 [2024-12-16 13:08:58.437032] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:04:43.927 [2024-12-16 13:08:58.437046] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:43.927 [2024-12-16 13:08:58.437057] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:43.927 [2024-12-16 13:08:58.437064] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:43.927 13:08:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.927 13:08:58 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:43.927 13:08:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.927 13:08:58 -- common/autotest_common.sh@10 -- # set +x 00:04:44.196 [2024-12-16 13:08:58.658215] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:44.196 13:08:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.196 13:08:58 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:44.196 13:08:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:44.196 13:08:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:44.196 13:08:58 -- common/autotest_common.sh@10 -- # set +x 00:04:44.196 ************************************ 00:04:44.196 START TEST scheduler_create_thread 00:04:44.196 ************************************ 00:04:44.196 13:08:58 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:04:44.196 13:08:58 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:44.196 13:08:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.196 13:08:58 -- common/autotest_common.sh@10 -- # set +x 00:04:44.196 2 00:04:44.196 13:08:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.196 13:08:58 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:44.196 13:08:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.196 13:08:58 -- common/autotest_common.sh@10 -- # set +x 00:04:44.196 3 00:04:44.196 13:08:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.196 13:08:58 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:44.196 13:08:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.196 13:08:58 -- common/autotest_common.sh@10 -- # set +x 00:04:44.196 4 00:04:44.196 13:08:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.196 13:08:58 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:44.196 13:08:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.196 13:08:58 -- common/autotest_common.sh@10 -- # set +x 00:04:44.196 5 00:04:44.196 13:08:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.196 13:08:58 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:44.196 13:08:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.196 13:08:58 -- common/autotest_common.sh@10 -- # set +x 00:04:44.196 6 00:04:44.196 13:08:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.196 13:08:58 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:44.196 13:08:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.196 13:08:58 -- common/autotest_common.sh@10 -- # set +x 00:04:44.196 7 00:04:44.196 13:08:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.196 13:08:58 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:44.196 13:08:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.196 13:08:58 -- common/autotest_common.sh@10 -- # set +x 00:04:44.196 8 00:04:44.196 13:08:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.196 13:08:58 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:44.196 13:08:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.196 13:08:58 -- common/autotest_common.sh@10 -- # set +x 00:04:44.196 9 00:04:44.196 13:08:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.196 13:08:58 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:44.196 13:08:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.196 13:08:58 -- common/autotest_common.sh@10 -- # set +x 00:04:44.196 10 00:04:44.196 13:08:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.196 13:08:58 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:44.196 13:08:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.196 13:08:58 -- common/autotest_common.sh@10 -- # set +x 00:04:44.196 13:08:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.196 13:08:58 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:44.196 13:08:58 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:44.196 13:08:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.196 13:08:58 -- common/autotest_common.sh@10 -- # set +x 00:04:44.196 13:08:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.196 13:08:58 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:44.196 13:08:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.197 13:08:58 -- common/autotest_common.sh@10 -- # set +x 00:04:44.457 13:08:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.457 13:08:58 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:44.457 13:08:58 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:44.457 13:08:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.457 13:08:58 -- common/autotest_common.sh@10 -- # set +x 00:04:45.395 ************************************ 00:04:45.395 END TEST scheduler_create_thread 00:04:45.395 ************************************ 00:04:45.395 13:08:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.395 00:04:45.395 real 0m1.174s 00:04:45.395 user 0m0.014s 00:04:45.395 sys 0m0.005s 00:04:45.395 13:08:59 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:04:45.395 13:08:59 -- common/autotest_common.sh@10 -- # set +x 00:04:45.395 13:08:59 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:45.395 13:08:59 -- scheduler/scheduler.sh@46 -- # killprocess 57008 00:04:45.395 13:08:59 -- common/autotest_common.sh@936 -- # '[' -z 57008 ']' 00:04:45.395 13:08:59 -- common/autotest_common.sh@940 -- # kill -0 57008 00:04:45.395 13:08:59 -- common/autotest_common.sh@941 -- # uname 00:04:45.395 13:08:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:45.395 13:08:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57008 00:04:45.395 killing process with pid 57008 00:04:45.395 13:08:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:45.395 13:08:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:45.395 13:08:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57008' 00:04:45.395 13:08:59 -- common/autotest_common.sh@955 -- # kill 57008 00:04:45.395 13:08:59 -- common/autotest_common.sh@960 -- # wait 57008 00:04:45.962 [2024-12-16 13:09:00.323264] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:46.528 00:04:46.528 real 0m3.506s 00:04:46.528 user 0m5.298s 00:04:46.528 sys 0m0.346s 00:04:46.528 13:09:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:46.528 13:09:00 -- common/autotest_common.sh@10 -- # set +x 00:04:46.528 ************************************ 00:04:46.528 END TEST event_scheduler 00:04:46.528 ************************************ 00:04:46.528 13:09:01 -- event/event.sh@51 -- # modprobe -n nbd 00:04:46.528 13:09:01 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:46.528 13:09:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.528 13:09:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.528 13:09:01 -- common/autotest_common.sh@10 -- # set +x 00:04:46.528 ************************************ 00:04:46.528 START TEST app_repeat 00:04:46.528 ************************************ 00:04:46.528 13:09:01 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:04:46.528 13:09:01 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.528 13:09:01 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.528 13:09:01 -- event/event.sh@13 -- # local nbd_list 00:04:46.528 13:09:01 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.528 13:09:01 -- event/event.sh@14 -- # local bdev_list 00:04:46.528 13:09:01 -- event/event.sh@15 -- # local repeat_times=4 00:04:46.528 13:09:01 -- event/event.sh@17 -- # modprobe nbd 00:04:46.528 Process app_repeat pid: 57092 00:04:46.528 spdk_app_start Round 0 00:04:46.528 13:09:01 -- event/event.sh@19 -- # repeat_pid=57092 00:04:46.528 13:09:01 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.528 13:09:01 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57092' 00:04:46.528 13:09:01 -- event/event.sh@23 -- # for i in {0..2} 00:04:46.528 13:09:01 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:46.528 13:09:01 -- event/event.sh@25 -- # waitforlisten 57092 /var/tmp/spdk-nbd.sock 00:04:46.528 13:09:01 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:46.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
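[Note: the app_repeat test starting here exports two Malloc bdevs through the kernel NBD driver and sanity-checks each device node with a single-block direct-I/O dd, as the "1+0 records in/out" traces below show. The verification idiom in isolation, using the same device and scratch-file paths as this run:]

    # One-block read through the NBD device, O_DIRECT, into a scratch file;
    # a non-empty result proves the bdev is actually serving I/O.
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest \
        bs=4096 count=1 iflag=direct

    size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest)
    rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
    [ "$size" != 0 ] && echo "nbd0 OK: read $size bytes"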
00:04:46.528 13:09:01 -- common/autotest_common.sh@829 -- # '[' -z 57092 ']' 00:04:46.528 13:09:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:46.528 13:09:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.528 13:09:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:46.528 13:09:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.528 13:09:01 -- common/autotest_common.sh@10 -- # set +x 00:04:46.528 [2024-12-16 13:09:01.063464] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:46.528 [2024-12-16 13:09:01.063568] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57092 ] 00:04:46.786 [2024-12-16 13:09:01.210151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.786 [2024-12-16 13:09:01.349601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.786 [2024-12-16 13:09:01.349641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.349 13:09:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:47.349 13:09:01 -- common/autotest_common.sh@862 -- # return 0 00:04:47.349 13:09:01 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.606 Malloc0 00:04:47.606 13:09:02 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.862 Malloc1 00:04:47.862 13:09:02 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:47.862 13:09:02 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.862 13:09:02 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.862 13:09:02 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:47.862 13:09:02 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.862 13:09:02 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:47.862 13:09:02 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:47.862 13:09:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.862 13:09:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.862 13:09:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:47.862 13:09:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.862 13:09:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:47.862 13:09:02 -- bdev/nbd_common.sh@12 -- # local i 00:04:47.862 13:09:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:47.862 13:09:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.862 13:09:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:48.120 /dev/nbd0 00:04:48.120 13:09:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:48.120 13:09:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:48.120 13:09:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:48.120 13:09:02 -- common/autotest_common.sh@867 -- # local i 00:04:48.120 13:09:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:48.120 
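The per-round pattern that starts here is worth spelling out: create a malloc bdev over the app's private RPC socket, expose it as a kernel /dev/nbdX device, then push a random pattern through and compare it back. A minimal sketch of that flow (the temp-file path is illustrative; the trace uses test/event/nbdtest and nbdrandtest):

    sock=/var/tmp/spdk-nbd.sock
    # 64 MB malloc bdev with 4096-byte blocks; the RPC prints the bdev name (e.g. Malloc0)
    name=$(scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096)
    scripts/rpc.py -s "$sock" nbd_start_disk "$name" /dev/nbd0

    # Data-path check, as in nbd_dd_data_verify: write 1 MiB of random data, read it back.
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0    # byte-for-byte compare of the first 1 MiB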
13:09:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:48.120 13:09:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:48.120 13:09:02 -- common/autotest_common.sh@871 -- # break 00:04:48.120 13:09:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:48.120 13:09:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:48.120 13:09:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:48.120 1+0 records in 00:04:48.120 1+0 records out 00:04:48.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311353 s, 13.2 MB/s 00:04:48.120 13:09:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.120 13:09:02 -- common/autotest_common.sh@884 -- # size=4096 00:04:48.120 13:09:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.120 13:09:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:48.120 13:09:02 -- common/autotest_common.sh@887 -- # return 0 00:04:48.120 13:09:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.120 13:09:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.120 13:09:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:48.377 /dev/nbd1 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:48.377 13:09:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:48.377 13:09:02 -- common/autotest_common.sh@867 -- # local i 00:04:48.377 13:09:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:48.377 13:09:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:48.377 13:09:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:48.377 13:09:02 -- common/autotest_common.sh@871 -- # break 00:04:48.377 13:09:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:48.377 13:09:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:48.377 13:09:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:48.377 1+0 records in 00:04:48.377 1+0 records out 00:04:48.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000127251 s, 32.2 MB/s 00:04:48.377 13:09:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.377 13:09:02 -- common/autotest_common.sh@884 -- # size=4096 00:04:48.377 13:09:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.377 13:09:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:48.377 13:09:02 -- common/autotest_common.sh@887 -- # return 0 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:48.377 { 00:04:48.377 "nbd_device": "/dev/nbd0", 00:04:48.377 "bdev_name": "Malloc0" 00:04:48.377 }, 00:04:48.377 { 00:04:48.377 "nbd_device": 
"/dev/nbd1", 00:04:48.377 "bdev_name": "Malloc1" 00:04:48.377 } 00:04:48.377 ]' 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:48.377 { 00:04:48.377 "nbd_device": "/dev/nbd0", 00:04:48.377 "bdev_name": "Malloc0" 00:04:48.377 }, 00:04:48.377 { 00:04:48.377 "nbd_device": "/dev/nbd1", 00:04:48.377 "bdev_name": "Malloc1" 00:04:48.377 } 00:04:48.377 ]' 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:48.377 /dev/nbd1' 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:48.377 /dev/nbd1' 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@65 -- # count=2 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@95 -- # count=2 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:48.377 256+0 records in 00:04:48.377 256+0 records out 00:04:48.377 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00775251 s, 135 MB/s 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:48.377 13:09:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:48.634 256+0 records in 00:04:48.634 256+0 records out 00:04:48.634 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175409 s, 59.8 MB/s 00:04:48.634 13:09:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:48.634 13:09:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:48.634 256+0 records in 00:04:48.634 256+0 records out 00:04:48.634 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162455 s, 64.5 MB/s 00:04:48.634 13:09:02 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:48.634 13:09:02 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.634 13:09:02 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:48.634 13:09:02 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:48.634 13:09:02 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:48.634 13:09:02 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:48.634 13:09:02 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:48.634 13:09:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:48.634 13:09:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:48.634 13:09:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:48.634 13:09:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:48.634 13:09:02 -- bdev/nbd_common.sh@85 
-- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:48.634 13:09:02 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:48.634 13:09:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.634 13:09:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.634 13:09:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:48.634 13:09:02 -- bdev/nbd_common.sh@51 -- # local i 00:04:48.634 13:09:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:48.634 13:09:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:48.634 13:09:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:48.634 13:09:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:48.635 13:09:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:48.635 13:09:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:48.635 13:09:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:48.635 13:09:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:48.635 13:09:03 -- bdev/nbd_common.sh@41 -- # break 00:04:48.635 13:09:03 -- bdev/nbd_common.sh@45 -- # return 0 00:04:48.635 13:09:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:48.635 13:09:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:48.892 13:09:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:48.892 13:09:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:48.892 13:09:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:48.892 13:09:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:48.892 13:09:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:48.892 13:09:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:48.892 13:09:03 -- bdev/nbd_common.sh@41 -- # break 00:04:48.892 13:09:03 -- bdev/nbd_common.sh@45 -- # return 0 00:04:48.892 13:09:03 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.892 13:09:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.892 13:09:03 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.149 13:09:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:49.150 13:09:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:49.150 13:09:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:49.150 13:09:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:49.150 13:09:03 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:49.150 13:09:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.150 13:09:03 -- bdev/nbd_common.sh@65 -- # true 00:04:49.150 13:09:03 -- bdev/nbd_common.sh@65 -- # count=0 00:04:49.150 13:09:03 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:49.150 13:09:03 -- bdev/nbd_common.sh@104 -- # count=0 00:04:49.150 13:09:03 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:49.150 13:09:03 -- bdev/nbd_common.sh@109 -- # return 0 00:04:49.150 13:09:03 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:49.407 13:09:03 -- event/event.sh@35 -- # sleep 3 00:04:49.972 [2024-12-16 13:09:04.517663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.229 [2024-12-16 13:09:04.646414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.229 
started on core 1 00:04:50.229
[2024-12-16 13:09:04.646585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.229 [2024-12-16 13:09:04.750920] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:50.229 [2024-12-16 13:09:04.750961] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:52.770 spdk_app_start Round 1 00:04:52.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:52.770 13:09:06 -- event/event.sh@23 -- # for i in {0..2} 00:04:52.770 13:09:06 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:52.770 13:09:06 -- event/event.sh@25 -- # waitforlisten 57092 /var/tmp/spdk-nbd.sock 00:04:52.770 13:09:06 -- common/autotest_common.sh@829 -- # '[' -z 57092 ']' 00:04:52.770 13:09:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:52.770 13:09:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.770 13:09:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:52.770 13:09:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.770 13:09:06 -- common/autotest_common.sh@10 -- # set +x 00:04:52.770 13:09:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.770 13:09:07 -- common/autotest_common.sh@862 -- # return 0 00:04:52.770 13:09:07 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.770 Malloc0 00:04:52.770 13:09:07 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.028 Malloc1 00:04:53.028 13:09:07 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.028 13:09:07 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.028 13:09:07 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.028 13:09:07 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:53.028 13:09:07 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.028 13:09:07 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:53.028 13:09:07 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.028 13:09:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.028 13:09:07 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.028 13:09:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:53.028 13:09:07 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.028 13:09:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:53.028 13:09:07 -- bdev/nbd_common.sh@12 -- # local i 00:04:53.028 13:09:07 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:53.028 13:09:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.028 13:09:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:53.285 /dev/nbd0 00:04:53.285 13:09:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:53.285 13:09:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:53.285 13:09:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:53.285 13:09:07 -- common/autotest_common.sh@867 -- # local i 00:04:53.285 13:09:07 -- common/autotest_common.sh@869 
-- # (( i = 1 )) 00:04:53.285 13:09:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:53.285 13:09:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:53.285 13:09:07 -- common/autotest_common.sh@871 -- # break 00:04:53.285 13:09:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:53.285 13:09:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:53.285 13:09:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.285 1+0 records in 00:04:53.285 1+0 records out 00:04:53.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286469 s, 14.3 MB/s 00:04:53.285 13:09:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.285 13:09:07 -- common/autotest_common.sh@884 -- # size=4096 00:04:53.285 13:09:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.285 13:09:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:53.285 13:09:07 -- common/autotest_common.sh@887 -- # return 0 00:04:53.285 13:09:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.285 13:09:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.285 13:09:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:53.543 /dev/nbd1 00:04:53.543 13:09:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:53.543 13:09:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:53.543 13:09:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:53.543 13:09:07 -- common/autotest_common.sh@867 -- # local i 00:04:53.543 13:09:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:53.543 13:09:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:53.543 13:09:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:53.543 13:09:07 -- common/autotest_common.sh@871 -- # break 00:04:53.543 13:09:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:53.543 13:09:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:53.543 13:09:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.543 1+0 records in 00:04:53.543 1+0 records out 00:04:53.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278364 s, 14.7 MB/s 00:04:53.543 13:09:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.543 13:09:07 -- common/autotest_common.sh@884 -- # size=4096 00:04:53.543 13:09:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.543 13:09:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:53.543 13:09:07 -- common/autotest_common.sh@887 -- # return 0 00:04:53.543 13:09:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.543 13:09:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.543 13:09:07 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.543 13:09:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.543 13:09:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:53.801 { 00:04:53.801 "nbd_device": "/dev/nbd0", 00:04:53.801 "bdev_name": "Malloc0" 00:04:53.801 }, 00:04:53.801 { 
00:04:53.801 "nbd_device": "/dev/nbd1", 00:04:53.801 "bdev_name": "Malloc1" 00:04:53.801 } 00:04:53.801 ]' 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:53.801 { 00:04:53.801 "nbd_device": "/dev/nbd0", 00:04:53.801 "bdev_name": "Malloc0" 00:04:53.801 }, 00:04:53.801 { 00:04:53.801 "nbd_device": "/dev/nbd1", 00:04:53.801 "bdev_name": "Malloc1" 00:04:53.801 } 00:04:53.801 ]' 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:53.801 /dev/nbd1' 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:53.801 /dev/nbd1' 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@65 -- # count=2 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@95 -- # count=2 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:53.801 256+0 records in 00:04:53.801 256+0 records out 00:04:53.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00697559 s, 150 MB/s 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:53.801 256+0 records in 00:04:53.801 256+0 records out 00:04:53.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164629 s, 63.7 MB/s 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:53.801 256+0 records in 00:04:53.801 256+0 records out 00:04:53.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200351 s, 52.3 MB/s 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:53.801 
13:09:08 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@51 -- # local i 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.801 13:09:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:54.059 13:09:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:54.059 13:09:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:54.059 13:09:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:54.059 13:09:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.059 13:09:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.059 13:09:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:54.059 13:09:08 -- bdev/nbd_common.sh@41 -- # break 00:04:54.059 13:09:08 -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.059 13:09:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.059 13:09:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:54.059 13:09:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:54.317 13:09:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:54.317 13:09:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:54.318 13:09:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.318 13:09:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.318 13:09:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:54.318 13:09:08 -- bdev/nbd_common.sh@41 -- # break 00:04:54.318 13:09:08 -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.318 13:09:08 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.318 13:09:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.318 13:09:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.318 13:09:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:54.318 13:09:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.318 13:09:08 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:54.318 13:09:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:54.318 13:09:08 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:54.318 13:09:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.318 13:09:08 -- bdev/nbd_common.sh@65 -- # true 00:04:54.318 13:09:08 -- bdev/nbd_common.sh@65 -- # count=0 00:04:54.318 13:09:08 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:54.318 13:09:08 -- bdev/nbd_common.sh@104 -- # count=0 00:04:54.318 13:09:08 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:54.318 13:09:08 -- bdev/nbd_common.sh@109 -- # return 0 00:04:54.318 13:09:08 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:54.576 13:09:09 -- event/event.sh@35 -- # sleep 3 00:04:55.509 [2024-12-16 13:09:09.758545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.509 [2024-12-16 13:09:09.890009] reactor.c: 937:reactor_run: *NOTICE*: Reactor 
started on core 1 00:04:55.509 [2024-12-16 13:09:09.890165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.509 [2024-12-16 13:09:09.994000] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:55.509 [2024-12-16 13:09:09.994055] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:58.036 spdk_app_start Round 2 00:04:58.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:58.036 13:09:12 -- event/event.sh@23 -- # for i in {0..2} 00:04:58.036 13:09:12 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:58.036 13:09:12 -- event/event.sh@25 -- # waitforlisten 57092 /var/tmp/spdk-nbd.sock 00:04:58.036 13:09:12 -- common/autotest_common.sh@829 -- # '[' -z 57092 ']' 00:04:58.036 13:09:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:58.036 13:09:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.036 13:09:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:58.036 13:09:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.036 13:09:12 -- common/autotest_common.sh@10 -- # set +x 00:04:58.036 13:09:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.036 13:09:12 -- common/autotest_common.sh@862 -- # return 0 00:04:58.036 13:09:12 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.036 Malloc0 00:04:58.036 13:09:12 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.293 Malloc1 00:04:58.293 13:09:12 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.293 13:09:12 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.293 13:09:12 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.293 13:09:12 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:58.293 13:09:12 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.293 13:09:12 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:58.293 13:09:12 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.293 13:09:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.293 13:09:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.293 13:09:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:58.293 13:09:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.293 13:09:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:58.293 13:09:12 -- bdev/nbd_common.sh@12 -- # local i 00:04:58.293 13:09:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:58.293 13:09:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.294 13:09:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:58.551 /dev/nbd0 00:04:58.551 13:09:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:58.551 13:09:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:58.551 13:09:12 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:58.551 13:09:12 -- common/autotest_common.sh@867 -- # local i 00:04:58.551 13:09:12 
-- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:58.551 13:09:12 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:58.552 13:09:12 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:58.552 13:09:12 -- common/autotest_common.sh@871 -- # break 00:04:58.552 13:09:12 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:58.552 13:09:12 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:58.552 13:09:12 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.552 1+0 records in 00:04:58.552 1+0 records out 00:04:58.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018116 s, 22.6 MB/s 00:04:58.552 13:09:12 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:58.552 13:09:12 -- common/autotest_common.sh@884 -- # size=4096 00:04:58.552 13:09:12 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:58.552 13:09:12 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:58.552 13:09:12 -- common/autotest_common.sh@887 -- # return 0 00:04:58.552 13:09:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.552 13:09:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.552 13:09:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:58.809 /dev/nbd1 00:04:58.809 13:09:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:58.809 13:09:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:58.809 13:09:13 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:58.809 13:09:13 -- common/autotest_common.sh@867 -- # local i 00:04:58.809 13:09:13 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:58.809 13:09:13 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:58.809 13:09:13 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:58.809 13:09:13 -- common/autotest_common.sh@871 -- # break 00:04:58.809 13:09:13 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:58.810 13:09:13 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:58.810 13:09:13 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.810 1+0 records in 00:04:58.810 1+0 records out 00:04:58.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191651 s, 21.4 MB/s 00:04:58.810 13:09:13 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:58.810 13:09:13 -- common/autotest_common.sh@884 -- # size=4096 00:04:58.810 13:09:13 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:58.810 13:09:13 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:58.810 13:09:13 -- common/autotest_common.sh@887 -- # return 0 00:04:58.810 13:09:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.810 13:09:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.810 13:09:13 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.810 13:09:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.810 13:09:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.810 13:09:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:58.810 { 00:04:58.810 "nbd_device": "/dev/nbd0", 00:04:58.810 "bdev_name": "Malloc0" 
00:04:58.810 }, 00:04:58.810 { 00:04:58.810 "nbd_device": "/dev/nbd1", 00:04:58.810 "bdev_name": "Malloc1" 00:04:58.810 } 00:04:58.810 ]' 00:04:58.810 13:09:13 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:58.810 { 00:04:58.810 "nbd_device": "/dev/nbd0", 00:04:58.810 "bdev_name": "Malloc0" 00:04:58.810 }, 00:04:58.810 { 00:04:58.810 "nbd_device": "/dev/nbd1", 00:04:58.810 "bdev_name": "Malloc1" 00:04:58.810 } 00:04:58.810 ]' 00:04:58.810 13:09:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.067 13:09:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:59.067 /dev/nbd1' 00:04:59.067 13:09:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.067 13:09:13 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:59.067 /dev/nbd1' 00:04:59.067 13:09:13 -- bdev/nbd_common.sh@65 -- # count=2 00:04:59.067 13:09:13 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:59.067 13:09:13 -- bdev/nbd_common.sh@95 -- # count=2 00:04:59.067 13:09:13 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:59.067 13:09:13 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:59.067 13:09:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.067 13:09:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.067 13:09:13 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:59.067 13:09:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.067 13:09:13 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:59.067 13:09:13 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:59.067 256+0 records in 00:04:59.067 256+0 records out 00:04:59.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00726489 s, 144 MB/s 00:04:59.067 13:09:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.067 13:09:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:59.067 256+0 records in 00:04:59.068 256+0 records out 00:04:59.068 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149169 s, 70.3 MB/s 00:04:59.068 13:09:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.068 13:09:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:59.068 256+0 records in 00:04:59.068 256+0 records out 00:04:59.068 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161515 s, 64.9 MB/s 00:04:59.068 13:09:13 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:59.068 13:09:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.068 13:09:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.068 13:09:13 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:59.068 13:09:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.068 13:09:13 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:59.068 13:09:13 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:59.068 13:09:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.068 13:09:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:59.068 13:09:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.068 13:09:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:04:59.068 13:09:13 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.068 13:09:13 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:59.068 13:09:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.068 13:09:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.068 13:09:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:59.068 13:09:13 -- bdev/nbd_common.sh@51 -- # local i 00:04:59.068 13:09:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.068 13:09:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:59.326 13:09:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:59.326 13:09:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:59.326 13:09:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:59.326 13:09:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.326 13:09:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.326 13:09:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:59.326 13:09:13 -- bdev/nbd_common.sh@41 -- # break 00:04:59.326 13:09:13 -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.326 13:09:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.326 13:09:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:59.326 13:09:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:59.326 13:09:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:59.326 13:09:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:59.326 13:09:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.326 13:09:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.326 13:09:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:59.326 13:09:13 -- bdev/nbd_common.sh@41 -- # break 00:04:59.326 13:09:13 -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.326 13:09:13 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.326 13:09:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.326 13:09:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.584 13:09:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:59.584 13:09:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.584 13:09:14 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:59.584 13:09:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:59.584 13:09:14 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:59.584 13:09:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.584 13:09:14 -- bdev/nbd_common.sh@65 -- # true 00:04:59.584 13:09:14 -- bdev/nbd_common.sh@65 -- # count=0 00:04:59.584 13:09:14 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:59.584 13:09:14 -- bdev/nbd_common.sh@104 -- # count=0 00:04:59.584 13:09:14 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:59.584 13:09:14 -- bdev/nbd_common.sh@109 -- # return 0 00:04:59.584 13:09:14 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:59.841 13:09:14 -- event/event.sh@35 -- # sleep 3 00:05:00.407 [2024-12-16 13:09:14.961521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.673 [2024-12-16 13:09:15.091574] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 0 00:05:00.673 [2024-12-16 13:09:15.091576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.673 [2024-12-16 13:09:15.195545] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:00.673 [2024-12-16 13:09:15.195730] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:03.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:03.202 13:09:17 -- event/event.sh@38 -- # waitforlisten 57092 /var/tmp/spdk-nbd.sock 00:05:03.202 13:09:17 -- common/autotest_common.sh@829 -- # '[' -z 57092 ']' 00:05:03.202 13:09:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.202 13:09:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.202 13:09:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:03.202 13:09:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.202 13:09:17 -- common/autotest_common.sh@10 -- # set +x 00:05:03.202 13:09:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.202 13:09:17 -- common/autotest_common.sh@862 -- # return 0 00:05:03.202 13:09:17 -- event/event.sh@39 -- # killprocess 57092 00:05:03.202 13:09:17 -- common/autotest_common.sh@936 -- # '[' -z 57092 ']' 00:05:03.202 13:09:17 -- common/autotest_common.sh@940 -- # kill -0 57092 00:05:03.202 13:09:17 -- common/autotest_common.sh@941 -- # uname 00:05:03.202 13:09:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:03.202 13:09:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57092 00:05:03.202 killing process with pid 57092 00:05:03.202 13:09:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:03.202 13:09:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:03.202 13:09:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57092' 00:05:03.202 13:09:17 -- common/autotest_common.sh@955 -- # kill 57092 00:05:03.202 13:09:17 -- common/autotest_common.sh@960 -- # wait 57092 00:05:03.769 spdk_app_start is called in Round 0. 00:05:03.769 Shutdown signal received, stop current app iteration 00:05:03.769 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:03.769 spdk_app_start is called in Round 1. 00:05:03.769 Shutdown signal received, stop current app iteration 00:05:03.769 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:03.769 spdk_app_start is called in Round 2. 00:05:03.769 Shutdown signal received, stop current app iteration 00:05:03.769 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:03.769 spdk_app_start is called in Round 3. 
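killprocess, used just above to reap pid 57092 and again after every *_locks test below, always has the same shape: confirm the pid is alive and still ours, signal it, then wait for it. A sketch of the logic visible in the trace (the real helper in test/common/autotest_common.sh additionally special-cases targets launched through sudo):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0    # nothing to do, already gone
        # the real helper inspects `ps --no-headers -o comm= $pid` here
        # (comm is e.g. reactor_0) to special-case sudo; elided in this sketch
        echo "killing process with pid $pid"
        kill "$pid"                   # SIGTERM by default
        wait "$pid"                   # reap; the target is a child of the test shell
    }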
00:05:03.769 Shutdown signal received, stop current app iteration 00:05:03.769 13:09:18 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:03.769 13:09:18 -- event/event.sh@42 -- # return 0 00:05:03.769 00:05:03.769 real 0m17.123s 00:05:03.769 user 0m36.714s 00:05:03.769 sys 0m1.938s 00:05:03.769 13:09:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:03.769 13:09:18 -- common/autotest_common.sh@10 -- # set +x 00:05:03.769 ************************************ 00:05:03.769 END TEST app_repeat 00:05:03.769 ************************************ 00:05:03.769 13:09:18 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:03.769 13:09:18 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:03.769 13:09:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.769 13:09:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.769 13:09:18 -- common/autotest_common.sh@10 -- # set +x 00:05:03.769 ************************************ 00:05:03.769 START TEST cpu_locks 00:05:03.769 ************************************ 00:05:03.769 13:09:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:03.769 * Looking for test storage... 00:05:03.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:03.769 13:09:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:03.769 13:09:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:03.769 13:09:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:03.769 13:09:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:03.769 13:09:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:03.769 13:09:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:03.769 13:09:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:03.769 13:09:18 -- scripts/common.sh@335 -- # IFS=.-: 00:05:03.769 13:09:18 -- scripts/common.sh@335 -- # read -ra ver1 00:05:03.769 13:09:18 -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.769 13:09:18 -- scripts/common.sh@336 -- # read -ra ver2 00:05:03.769 13:09:18 -- scripts/common.sh@337 -- # local 'op=<' 00:05:03.769 13:09:18 -- scripts/common.sh@339 -- # ver1_l=2 00:05:03.769 13:09:18 -- scripts/common.sh@340 -- # ver2_l=1 00:05:03.769 13:09:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:03.769 13:09:18 -- scripts/common.sh@343 -- # case "$op" in 00:05:03.769 13:09:18 -- scripts/common.sh@344 -- # : 1 00:05:03.769 13:09:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:03.769 13:09:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.769 13:09:18 -- scripts/common.sh@364 -- # decimal 1 00:05:03.769 13:09:18 -- scripts/common.sh@352 -- # local d=1 00:05:03.769 13:09:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.769 13:09:18 -- scripts/common.sh@354 -- # echo 1 00:05:03.769 13:09:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:03.769 13:09:18 -- scripts/common.sh@365 -- # decimal 2 00:05:03.769 13:09:18 -- scripts/common.sh@352 -- # local d=2 00:05:03.769 13:09:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.769 13:09:18 -- scripts/common.sh@354 -- # echo 2 00:05:03.769 13:09:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:03.769 13:09:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:03.769 13:09:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:03.769 13:09:18 -- scripts/common.sh@367 -- # return 0 00:05:03.769 13:09:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.769 13:09:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:03.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.769 --rc genhtml_branch_coverage=1 00:05:03.769 --rc genhtml_function_coverage=1 00:05:03.769 --rc genhtml_legend=1 00:05:03.769 --rc geninfo_all_blocks=1 00:05:03.769 --rc geninfo_unexecuted_blocks=1 00:05:03.769 00:05:03.769 ' 00:05:03.769 13:09:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:03.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.769 --rc genhtml_branch_coverage=1 00:05:03.769 --rc genhtml_function_coverage=1 00:05:03.769 --rc genhtml_legend=1 00:05:03.769 --rc geninfo_all_blocks=1 00:05:03.769 --rc geninfo_unexecuted_blocks=1 00:05:03.769 00:05:03.769 ' 00:05:03.769 13:09:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:03.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.769 --rc genhtml_branch_coverage=1 00:05:03.769 --rc genhtml_function_coverage=1 00:05:03.769 --rc genhtml_legend=1 00:05:03.769 --rc geninfo_all_blocks=1 00:05:03.769 --rc geninfo_unexecuted_blocks=1 00:05:03.769 00:05:03.769 ' 00:05:03.769 13:09:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:03.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.769 --rc genhtml_branch_coverage=1 00:05:03.769 --rc genhtml_function_coverage=1 00:05:03.769 --rc genhtml_legend=1 00:05:03.769 --rc geninfo_all_blocks=1 00:05:03.769 --rc geninfo_unexecuted_blocks=1 00:05:03.769 00:05:03.769 ' 00:05:04.082 13:09:18 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:04.082 13:09:18 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:04.082 13:09:18 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:04.082 13:09:18 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:04.082 13:09:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.082 13:09:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.082 13:09:18 -- common/autotest_common.sh@10 -- # set +x 00:05:04.082 ************************************ 00:05:04.082 START TEST default_locks 00:05:04.082 ************************************ 00:05:04.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
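The lcov probe that opened cpu_locks.sh above runs the field-wise version comparison from scripts/common.sh: `lt 1.15 2` is cmp_versions with op '<', splitting each version on '.', '-' and ':' and comparing field by field. Condensed to the less-than case only (a sketch; fields are assumed numeric, and the real helper also handles '>', '==' and the rest through its case on $op):

    version_lt() {    # true when $1 < $2
        local -a a b; local i
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        for (( i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1      # equal versions are not less-than
    }

    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'use legacy lcov flags'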
00:05:04.082 13:09:18 -- common/autotest_common.sh@1114 -- # default_locks 00:05:04.082 13:09:18 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57516 00:05:04.082 13:09:18 -- event/cpu_locks.sh@47 -- # waitforlisten 57516 00:05:04.082 13:09:18 -- common/autotest_common.sh@829 -- # '[' -z 57516 ']' 00:05:04.082 13:09:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.082 13:09:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.082 13:09:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.082 13:09:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.082 13:09:18 -- common/autotest_common.sh@10 -- # set +x 00:05:04.082 13:09:18 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:04.082 [2024-12-16 13:09:18.422684] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:04.082 [2024-12-16 13:09:18.422945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57516 ] 00:05:04.082 [2024-12-16 13:09:18.571266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.343 [2024-12-16 13:09:18.759546] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:04.343 [2024-12-16 13:09:18.759785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.723 13:09:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.723 13:09:19 -- common/autotest_common.sh@862 -- # return 0 00:05:05.723 13:09:19 -- event/cpu_locks.sh@49 -- # locks_exist 57516 00:05:05.723 13:09:19 -- event/cpu_locks.sh@22 -- # lslocks -p 57516 00:05:05.723 13:09:19 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:05.723 13:09:20 -- event/cpu_locks.sh@50 -- # killprocess 57516 00:05:05.723 13:09:20 -- common/autotest_common.sh@936 -- # '[' -z 57516 ']' 00:05:05.723 13:09:20 -- common/autotest_common.sh@940 -- # kill -0 57516 00:05:05.723 13:09:20 -- common/autotest_common.sh@941 -- # uname 00:05:05.723 13:09:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:05.723 13:09:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57516 00:05:05.723 killing process with pid 57516 00:05:05.723 13:09:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:05.723 13:09:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:05.723 13:09:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57516' 00:05:05.723 13:09:20 -- common/autotest_common.sh@955 -- # kill 57516 00:05:05.723 13:09:20 -- common/autotest_common.sh@960 -- # wait 57516 00:05:07.098 13:09:21 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57516 00:05:07.098 13:09:21 -- common/autotest_common.sh@650 -- # local es=0 00:05:07.098 13:09:21 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57516 00:05:07.098 13:09:21 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:07.098 13:09:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:07.098 13:09:21 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:07.098 13:09:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:07.098 13:09:21 -- common/autotest_common.sh@653 -- # 
waitforlisten 57516 00:05:07.098 13:09:21 -- common/autotest_common.sh@829 -- # '[' -z 57516 ']' 00:05:07.098 13:09:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.098 13:09:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.098 13:09:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.098 13:09:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.098 ERROR: process (pid: 57516) is no longer running 00:05:07.098 13:09:21 -- common/autotest_common.sh@10 -- # set +x 00:05:07.098 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57516) - No such process 00:05:07.098 13:09:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.098 13:09:21 -- common/autotest_common.sh@862 -- # return 1 00:05:07.098 13:09:21 -- common/autotest_common.sh@653 -- # es=1 00:05:07.098 13:09:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:07.098 13:09:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:07.098 13:09:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:07.098 13:09:21 -- event/cpu_locks.sh@54 -- # no_locks 00:05:07.098 13:09:21 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:07.098 13:09:21 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:07.098 ************************************ 00:05:07.098 END TEST default_locks 00:05:07.098 ************************************ 00:05:07.098 13:09:21 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:07.098 00:05:07.098 real 0m3.095s 00:05:07.098 user 0m3.237s 00:05:07.098 sys 0m0.500s 00:05:07.098 13:09:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.098 13:09:21 -- common/autotest_common.sh@10 -- # set +x 00:05:07.098 13:09:21 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:07.098 13:09:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.098 13:09:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.098 13:09:21 -- common/autotest_common.sh@10 -- # set +x 00:05:07.098 ************************************ 00:05:07.098 START TEST default_locks_via_rpc 00:05:07.098 ************************************ 00:05:07.098 13:09:21 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:07.098 13:09:21 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57582 00:05:07.098 13:09:21 -- event/cpu_locks.sh@63 -- # waitforlisten 57582 00:05:07.098 13:09:21 -- common/autotest_common.sh@829 -- # '[' -z 57582 ']' 00:05:07.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.098 13:09:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.098 13:09:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.098 13:09:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.098 13:09:21 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.098 13:09:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.098 13:09:21 -- common/autotest_common.sh@10 -- # set +x 00:05:07.098 [2024-12-16 13:09:21.574185] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
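The negative test that just ran hinges on the NOT wrapper: execute a command that is expected to fail, swallow its non-zero status, and fail only if it unexpectedly succeeded, while statuses above 128 (death by signal) are passed through rather than inverted. A sketch of that pattern from the trace (the real helper also validates that its argument is executable first):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"    # crashed rather than failed: propagate
        (( es != 0 ))                     # plain failure passes, success fails
    }

    NOT waitforlisten 57516    # passes precisely because pid 57516 is gone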
00:05:07.098 [2024-12-16 13:09:21.574294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57582 ] 00:05:07.357 [2024-12-16 13:09:21.721417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.357 [2024-12-16 13:09:21.857780] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:07.357 [2024-12-16 13:09:21.857935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.923 13:09:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.923 13:09:22 -- common/autotest_common.sh@862 -- # return 0 00:05:07.923 13:09:22 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:07.923 13:09:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.923 13:09:22 -- common/autotest_common.sh@10 -- # set +x 00:05:07.923 13:09:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.923 13:09:22 -- event/cpu_locks.sh@67 -- # no_locks 00:05:07.923 13:09:22 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:07.923 13:09:22 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:07.923 13:09:22 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:07.923 13:09:22 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:07.923 13:09:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.923 13:09:22 -- common/autotest_common.sh@10 -- # set +x 00:05:07.924 13:09:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.924 13:09:22 -- event/cpu_locks.sh@71 -- # locks_exist 57582 00:05:07.924 13:09:22 -- event/cpu_locks.sh@22 -- # lslocks -p 57582 00:05:07.924 13:09:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:08.182 13:09:22 -- event/cpu_locks.sh@73 -- # killprocess 57582 00:05:08.182 13:09:22 -- common/autotest_common.sh@936 -- # '[' -z 57582 ']' 00:05:08.182 13:09:22 -- common/autotest_common.sh@940 -- # kill -0 57582 00:05:08.182 13:09:22 -- common/autotest_common.sh@941 -- # uname 00:05:08.182 13:09:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:08.182 13:09:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57582 00:05:08.182 killing process with pid 57582 00:05:08.182 13:09:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:08.182 13:09:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:08.182 13:09:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57582' 00:05:08.182 13:09:22 -- common/autotest_common.sh@955 -- # kill 57582 00:05:08.182 13:09:22 -- common/autotest_common.sh@960 -- # wait 57582 00:05:09.557 00:05:09.557 real 0m2.269s 00:05:09.557 user 0m2.255s 00:05:09.557 sys 0m0.423s 00:05:09.557 13:09:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.557 ************************************ 00:05:09.557 END TEST default_locks_via_rpc 00:05:09.557 ************************************ 00:05:09.557 13:09:23 -- common/autotest_common.sh@10 -- # set +x 00:05:09.557 13:09:23 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:09.557 13:09:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.557 13:09:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.557 13:09:23 -- common/autotest_common.sh@10 -- # set +x 00:05:09.557 
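[Note] The default_locks and default_locks_via_rpc runs above both verify the core lock the same way: the target is started with -m 0x1, and the test asserts that the process holds a file lock named spdk_cpu_lock_*. A minimal sketch of that check, using only the commands visible in the xtrace (the helper name locks_exist comes from cpu_locks.sh; its exact internals may differ):

    # Assert that PID holds an SPDK CPU-core file lock (/var/tmp/spdk_cpu_lock_*).
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist 57582 && echo "pid 57582 holds a core lock"

The via_rpc variant additionally toggles the same state at runtime with rpc_cmd framework_disable_cpumask_locks / framework_enable_cpumask_locks, as traced above.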
************************************ 00:05:09.557 START TEST non_locking_app_on_locked_coremask 00:05:09.557 ************************************ 00:05:09.557 13:09:23 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:09.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.557 13:09:23 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57634 00:05:09.557 13:09:23 -- event/cpu_locks.sh@81 -- # waitforlisten 57634 /var/tmp/spdk.sock 00:05:09.557 13:09:23 -- common/autotest_common.sh@829 -- # '[' -z 57634 ']' 00:05:09.557 13:09:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.557 13:09:23 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.557 13:09:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.557 13:09:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.557 13:09:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.557 13:09:23 -- common/autotest_common.sh@10 -- # set +x 00:05:09.557 [2024-12-16 13:09:23.887303] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:09.557 [2024-12-16 13:09:23.887505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57634 ] 00:05:09.557 [2024-12-16 13:09:24.026544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.815 [2024-12-16 13:09:24.165358] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:09.815 [2024-12-16 13:09:24.165521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:10.382 13:09:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.382 13:09:24 -- common/autotest_common.sh@862 -- # return 0 00:05:10.382 13:09:24 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57650 00:05:10.382 13:09:24 -- event/cpu_locks.sh@85 -- # waitforlisten 57650 /var/tmp/spdk2.sock 00:05:10.382 13:09:24 -- common/autotest_common.sh@829 -- # '[' -z 57650 ']' 00:05:10.382 13:09:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.382 13:09:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.382 13:09:24 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:10.382 13:09:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:10.382 13:09:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.382 13:09:24 -- common/autotest_common.sh@10 -- # set +x 00:05:10.382 [2024-12-16 13:09:24.806723] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:10.382 [2024-12-16 13:09:24.807077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57650 ] 00:05:10.641 [2024-12-16 13:09:24.968061] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:10.641 [2024-12-16 13:09:24.968095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.908 [2024-12-16 13:09:25.247915] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:10.908 [2024-12-16 13:09:25.248066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.857 13:09:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.857 13:09:26 -- common/autotest_common.sh@862 -- # return 0 00:05:11.857 13:09:26 -- event/cpu_locks.sh@87 -- # locks_exist 57634 00:05:11.857 13:09:26 -- event/cpu_locks.sh@22 -- # lslocks -p 57634 00:05:11.857 13:09:26 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.115 13:09:26 -- event/cpu_locks.sh@89 -- # killprocess 57634 00:05:12.115 13:09:26 -- common/autotest_common.sh@936 -- # '[' -z 57634 ']' 00:05:12.115 13:09:26 -- common/autotest_common.sh@940 -- # kill -0 57634 00:05:12.115 13:09:26 -- common/autotest_common.sh@941 -- # uname 00:05:12.115 13:09:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:12.115 13:09:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57634 00:05:12.115 killing process with pid 57634 00:05:12.115 13:09:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:12.115 13:09:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:12.115 13:09:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57634' 00:05:12.115 13:09:26 -- common/autotest_common.sh@955 -- # kill 57634 00:05:12.115 13:09:26 -- common/autotest_common.sh@960 -- # wait 57634 00:05:14.644 13:09:28 -- event/cpu_locks.sh@90 -- # killprocess 57650 00:05:14.644 13:09:29 -- common/autotest_common.sh@936 -- # '[' -z 57650 ']' 00:05:14.644 13:09:29 -- common/autotest_common.sh@940 -- # kill -0 57650 00:05:14.644 13:09:29 -- common/autotest_common.sh@941 -- # uname 00:05:14.644 13:09:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:14.644 13:09:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57650 00:05:14.644 killing process with pid 57650 00:05:14.644 13:09:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:14.644 13:09:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:14.644 13:09:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57650' 00:05:14.644 13:09:29 -- common/autotest_common.sh@955 -- # kill 57650 00:05:14.644 13:09:29 -- common/autotest_common.sh@960 -- # wait 57650 00:05:16.018 00:05:16.018 real 0m6.383s 00:05:16.018 user 0m6.843s 00:05:16.018 sys 0m0.822s 00:05:16.018 ************************************ 00:05:16.018 END TEST non_locking_app_on_locked_coremask 00:05:16.018 ************************************ 00:05:16.018 13:09:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.018 13:09:30 -- common/autotest_common.sh@10 -- # set +x 00:05:16.018 13:09:30 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:16.018 13:09:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.018 13:09:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.018 13:09:30 -- common/autotest_common.sh@10 -- # set +x 00:05:16.018 ************************************ 00:05:16.018 START TEST locking_app_on_unlocked_coremask 00:05:16.018 ************************************ 00:05:16.018 13:09:30 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:05:16.018 13:09:30 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=57749 00:05:16.018 13:09:30 -- event/cpu_locks.sh@99 -- # waitforlisten 57749 /var/tmp/spdk.sock 00:05:16.018 13:09:30 -- common/autotest_common.sh@829 -- # '[' -z 57749 ']' 00:05:16.018 13:09:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.018 13:09:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.018 13:09:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.018 13:09:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.018 13:09:30 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:16.018 13:09:30 -- common/autotest_common.sh@10 -- # set +x 00:05:16.018 [2024-12-16 13:09:30.332404] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:16.018 [2024-12-16 13:09:30.332490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57749 ] 00:05:16.018 [2024-12-16 13:09:30.473751] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:16.018 [2024-12-16 13:09:30.473783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.276 [2024-12-16 13:09:30.613280] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:16.276 [2024-12-16 13:09:30.613430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.841 13:09:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.841 13:09:31 -- common/autotest_common.sh@862 -- # return 0 00:05:16.842 13:09:31 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=57765 00:05:16.842 13:09:31 -- event/cpu_locks.sh@103 -- # waitforlisten 57765 /var/tmp/spdk2.sock 00:05:16.842 13:09:31 -- common/autotest_common.sh@829 -- # '[' -z 57765 ']' 00:05:16.842 13:09:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:16.842 13:09:31 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:16.842 13:09:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:16.842 13:09:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:16.842 13:09:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.842 13:09:31 -- common/autotest_common.sh@10 -- # set +x 00:05:16.842 [2024-12-16 13:09:31.216639] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:16.842 [2024-12-16 13:09:31.216749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57765 ] 00:05:16.842 [2024-12-16 13:09:31.364670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.100 [2024-12-16 13:09:31.645951] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:17.100 [2024-12-16 13:09:31.646098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.485 13:09:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.485 13:09:32 -- common/autotest_common.sh@862 -- # return 0 00:05:18.485 13:09:32 -- event/cpu_locks.sh@105 -- # locks_exist 57765 00:05:18.485 13:09:32 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:18.485 13:09:32 -- event/cpu_locks.sh@22 -- # lslocks -p 57765 00:05:18.485 13:09:33 -- event/cpu_locks.sh@107 -- # killprocess 57749 00:05:18.485 13:09:33 -- common/autotest_common.sh@936 -- # '[' -z 57749 ']' 00:05:18.485 13:09:33 -- common/autotest_common.sh@940 -- # kill -0 57749 00:05:18.485 13:09:33 -- common/autotest_common.sh@941 -- # uname 00:05:18.485 13:09:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:18.485 13:09:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57749 00:05:18.485 13:09:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:18.485 killing process with pid 57749 00:05:18.485 13:09:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:18.485 13:09:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57749' 00:05:18.485 13:09:33 -- common/autotest_common.sh@955 -- # kill 57749 00:05:18.485 13:09:33 -- common/autotest_common.sh@960 -- # wait 57749 00:05:21.013 13:09:35 -- event/cpu_locks.sh@108 -- # killprocess 57765 00:05:21.013 13:09:35 -- common/autotest_common.sh@936 -- # '[' -z 57765 ']' 00:05:21.013 13:09:35 -- common/autotest_common.sh@940 -- # kill -0 57765 00:05:21.013 13:09:35 -- common/autotest_common.sh@941 -- # uname 00:05:21.013 13:09:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:21.013 13:09:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57765 00:05:21.013 13:09:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:21.013 killing process with pid 57765 00:05:21.013 13:09:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:21.013 13:09:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57765' 00:05:21.013 13:09:35 -- common/autotest_common.sh@955 -- # kill 57765 00:05:21.013 13:09:35 -- common/autotest_common.sh@960 -- # wait 57765 00:05:22.387 00:05:22.387 real 0m6.355s 00:05:22.387 user 0m6.759s 00:05:22.387 sys 0m0.804s 00:05:22.387 ************************************ 00:05:22.387 END TEST locking_app_on_unlocked_coremask 00:05:22.387 ************************************ 00:05:22.387 13:09:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.387 13:09:36 -- common/autotest_common.sh@10 -- # set +x 00:05:22.387 13:09:36 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:22.387 13:09:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.387 13:09:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.387 13:09:36 -- common/autotest_common.sh@10 -- # set +x 
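[Note] In the run that just completed, the first target was launched with --disable-cpumask-locks (so it takes no lock) and the second without the flag, so it is the second instance that claims /var/tmp/spdk_cpu_lock_000. A hedged sketch of the two launches, with the flags taken from the xtrace above (binary path as shown in the log):

    # First app opts out of core locking; second app on the same mask claims the lock.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # pid 57749 in the log
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # pid 57765: takes spdk_cpu_lock_000

The next test inverts this: the first instance holds the lock, and a second instance on the same core must fail to start.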
00:05:22.387 ************************************ 00:05:22.387 START TEST locking_app_on_locked_coremask 00:05:22.387 ************************************ 00:05:22.387 13:09:36 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:05:22.387 13:09:36 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=57862 00:05:22.387 13:09:36 -- event/cpu_locks.sh@116 -- # waitforlisten 57862 /var/tmp/spdk.sock 00:05:22.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.387 13:09:36 -- common/autotest_common.sh@829 -- # '[' -z 57862 ']' 00:05:22.387 13:09:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.387 13:09:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.387 13:09:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.387 13:09:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.387 13:09:36 -- common/autotest_common.sh@10 -- # set +x 00:05:22.387 13:09:36 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.387 [2024-12-16 13:09:36.748513] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:22.387 [2024-12-16 13:09:36.748620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57862 ] 00:05:22.387 [2024-12-16 13:09:36.893514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.645 [2024-12-16 13:09:37.030402] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:22.645 [2024-12-16 13:09:37.030552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.211 13:09:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.211 13:09:37 -- common/autotest_common.sh@862 -- # return 0 00:05:23.211 13:09:37 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=57874 00:05:23.211 13:09:37 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 57874 /var/tmp/spdk2.sock 00:05:23.211 13:09:37 -- common/autotest_common.sh@650 -- # local es=0 00:05:23.211 13:09:37 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57874 /var/tmp/spdk2.sock 00:05:23.211 13:09:37 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:23.211 13:09:37 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:23.211 13:09:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:23.211 13:09:37 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:23.211 13:09:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:23.211 13:09:37 -- common/autotest_common.sh@653 -- # waitforlisten 57874 /var/tmp/spdk2.sock 00:05:23.211 13:09:37 -- common/autotest_common.sh@829 -- # '[' -z 57874 ']' 00:05:23.211 13:09:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.211 13:09:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.211 13:09:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:23.211 13:09:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.211 13:09:37 -- common/autotest_common.sh@10 -- # set +x 00:05:23.211 [2024-12-16 13:09:37.587713] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:23.211 [2024-12-16 13:09:37.587819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57874 ] 00:05:23.211 [2024-12-16 13:09:37.734708] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 57862 has claimed it. 00:05:23.211 [2024-12-16 13:09:37.734749] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:23.776 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57874) - No such process 00:05:23.776 ERROR: process (pid: 57874) is no longer running 00:05:23.776 13:09:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.776 13:09:38 -- common/autotest_common.sh@862 -- # return 1 00:05:23.776 13:09:38 -- common/autotest_common.sh@653 -- # es=1 00:05:23.776 13:09:38 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:23.776 13:09:38 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:23.776 13:09:38 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:23.776 13:09:38 -- event/cpu_locks.sh@122 -- # locks_exist 57862 00:05:23.776 13:09:38 -- event/cpu_locks.sh@22 -- # lslocks -p 57862 00:05:23.776 13:09:38 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.034 13:09:38 -- event/cpu_locks.sh@124 -- # killprocess 57862 00:05:24.035 13:09:38 -- common/autotest_common.sh@936 -- # '[' -z 57862 ']' 00:05:24.035 13:09:38 -- common/autotest_common.sh@940 -- # kill -0 57862 00:05:24.035 13:09:38 -- common/autotest_common.sh@941 -- # uname 00:05:24.035 13:09:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:24.035 13:09:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57862 00:05:24.035 13:09:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:24.035 killing process with pid 57862 00:05:24.035 13:09:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:24.035 13:09:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57862' 00:05:24.035 13:09:38 -- common/autotest_common.sh@955 -- # kill 57862 00:05:24.035 13:09:38 -- common/autotest_common.sh@960 -- # wait 57862 00:05:25.447 00:05:25.447 real 0m2.967s 00:05:25.447 user 0m3.139s 00:05:25.447 sys 0m0.504s 00:05:25.447 13:09:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:25.447 13:09:39 -- common/autotest_common.sh@10 -- # set +x 00:05:25.447 ************************************ 00:05:25.447 END TEST locking_app_on_locked_coremask 00:05:25.447 ************************************ 00:05:25.447 13:09:39 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:25.447 13:09:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:25.447 13:09:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.447 13:09:39 -- common/autotest_common.sh@10 -- # set +x 00:05:25.447 ************************************ 00:05:25.447 START TEST locking_overlapped_coremask 00:05:25.447 ************************************ 00:05:25.447 13:09:39 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:05:25.447 13:09:39 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=57927 00:05:25.447 13:09:39 -- event/cpu_locks.sh@133 -- # waitforlisten 57927 /var/tmp/spdk.sock 00:05:25.447 13:09:39 -- common/autotest_common.sh@829 -- # '[' -z 57927 ']' 00:05:25.447 13:09:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.447 13:09:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.447 13:09:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.447 13:09:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.447 13:09:39 -- common/autotest_common.sh@10 -- # set +x 00:05:25.447 13:09:39 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:25.447 [2024-12-16 13:09:39.779904] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:25.447 [2024-12-16 13:09:39.780006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57927 ] 00:05:25.447 [2024-12-16 13:09:39.920295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:25.705 [2024-12-16 13:09:40.066136] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:25.705 [2024-12-16 13:09:40.066415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.705 [2024-12-16 13:09:40.066654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.705 [2024-12-16 13:09:40.066700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.271 13:09:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.271 13:09:40 -- common/autotest_common.sh@862 -- # return 0 00:05:26.271 13:09:40 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:26.271 13:09:40 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=57945 00:05:26.271 13:09:40 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 57945 /var/tmp/spdk2.sock 00:05:26.271 13:09:40 -- common/autotest_common.sh@650 -- # local es=0 00:05:26.271 13:09:40 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57945 /var/tmp/spdk2.sock 00:05:26.271 13:09:40 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:26.271 13:09:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.271 13:09:40 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:26.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.271 13:09:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.271 13:09:40 -- common/autotest_common.sh@653 -- # waitforlisten 57945 /var/tmp/spdk2.sock 00:05:26.271 13:09:40 -- common/autotest_common.sh@829 -- # '[' -z 57945 ']' 00:05:26.271 13:09:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.271 13:09:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.271 13:09:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:26.271 13:09:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.271 13:09:40 -- common/autotest_common.sh@10 -- # set +x 00:05:26.271 [2024-12-16 13:09:40.603886] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:26.271 [2024-12-16 13:09:40.603981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57945 ] 00:05:26.271 [2024-12-16 13:09:40.757685] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57927 has claimed it. 00:05:26.271 [2024-12-16 13:09:40.757738] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:26.838 ERROR: process (pid: 57945) is no longer running 00:05:26.838 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57945) - No such process 00:05:26.838 13:09:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.838 13:09:41 -- common/autotest_common.sh@862 -- # return 1 00:05:26.838 13:09:41 -- common/autotest_common.sh@653 -- # es=1 00:05:26.838 13:09:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:26.838 13:09:41 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:26.838 13:09:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:26.838 13:09:41 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:26.838 13:09:41 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:26.838 13:09:41 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:26.838 13:09:41 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:26.838 13:09:41 -- event/cpu_locks.sh@141 -- # killprocess 57927 00:05:26.838 13:09:41 -- common/autotest_common.sh@936 -- # '[' -z 57927 ']' 00:05:26.838 13:09:41 -- common/autotest_common.sh@940 -- # kill -0 57927 00:05:26.838 13:09:41 -- common/autotest_common.sh@941 -- # uname 00:05:26.838 13:09:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:26.838 13:09:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57927 00:05:26.838 13:09:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:26.838 13:09:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:26.838 13:09:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57927' 00:05:26.838 killing process with pid 57927 00:05:26.838 13:09:41 -- common/autotest_common.sh@955 -- # kill 57927 00:05:26.838 13:09:41 -- common/autotest_common.sh@960 -- # wait 57927 00:05:28.213 00:05:28.213 real 0m2.744s 00:05:28.213 user 0m7.162s 00:05:28.213 sys 0m0.391s 00:05:28.213 13:09:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:28.213 13:09:42 -- common/autotest_common.sh@10 -- # set +x 00:05:28.213 ************************************ 00:05:28.213 END TEST locking_overlapped_coremask 00:05:28.213 ************************************ 00:05:28.213 13:09:42 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:28.213 13:09:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.213 13:09:42 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.213 13:09:42 -- common/autotest_common.sh@10 -- # set +x 00:05:28.213 ************************************ 00:05:28.213 START TEST locking_overlapped_coremask_via_rpc 00:05:28.213 ************************************ 00:05:28.213 13:09:42 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:05:28.213 13:09:42 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=57998 00:05:28.213 13:09:42 -- event/cpu_locks.sh@149 -- # waitforlisten 57998 /var/tmp/spdk.sock 00:05:28.213 13:09:42 -- common/autotest_common.sh@829 -- # '[' -z 57998 ']' 00:05:28.213 13:09:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.213 13:09:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.213 13:09:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.213 13:09:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.213 13:09:42 -- common/autotest_common.sh@10 -- # set +x 00:05:28.213 13:09:42 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:28.213 [2024-12-16 13:09:42.573401] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:28.213 [2024-12-16 13:09:42.573510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57998 ] 00:05:28.213 [2024-12-16 13:09:42.715804] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:28.213 [2024-12-16 13:09:42.715837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.471 [2024-12-16 13:09:42.854902] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:28.471 [2024-12-16 13:09:42.855206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.471 [2024-12-16 13:09:42.855507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.471 [2024-12-16 13:09:42.855427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.038 13:09:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.038 13:09:43 -- common/autotest_common.sh@862 -- # return 0 00:05:29.038 13:09:43 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58016 00:05:29.038 13:09:43 -- event/cpu_locks.sh@153 -- # waitforlisten 58016 /var/tmp/spdk2.sock 00:05:29.038 13:09:43 -- common/autotest_common.sh@829 -- # '[' -z 58016 ']' 00:05:29.038 13:09:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.038 13:09:43 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:29.038 13:09:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.038 13:09:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:29.038 13:09:43 -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:29.038 13:09:43 -- common/autotest_common.sh@10 -- # set +x
00:05:29.038 [2024-12-16 13:09:43.419117] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:29.038 [2024-12-16 13:09:43.419415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58016 ]
00:05:29.038 [2024-12-16 13:09:43.572487] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:29.038 [2024-12-16 13:09:43.572528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:29.604 [2024-12-16 13:09:43.918207] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:05:29.604 [2024-12-16 13:09:43.918668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:05:29.604 [2024-12-16 13:09:43.921700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:29.604 [2024-12-16 13:09:43.921719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:05:30.538 13:09:44 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:30.538 13:09:44 -- common/autotest_common.sh@862 -- # return 0
00:05:30.538 13:09:44 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:05:30.538 13:09:44 -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:30.538 13:09:44 -- common/autotest_common.sh@10 -- # set +x
00:05:30.538 13:09:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:30.538 13:09:44 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:30.538 13:09:44 -- common/autotest_common.sh@650 -- # local es=0
00:05:30.538 13:09:44 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:30.538 13:09:44 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:05:30.538 13:09:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:30.538 13:09:44 -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:05:30.538 13:09:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:30.538 13:09:44 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:30.538 13:09:44 -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:30.538 13:09:44 -- common/autotest_common.sh@10 -- # set +x
00:05:30.538 [2024-12-16 13:09:44.926751] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57998 has claimed it.
00:05:30.538 request:
00:05:30.538 {
00:05:30.538 "method": "framework_enable_cpumask_locks",
00:05:30.538 "req_id": 1
00:05:30.538 }
00:05:30.538 Got JSON-RPC error response
00:05:30.538 response:
00:05:30.538 {
00:05:30.538 "code": -32603,
00:05:30.538 "message": "Failed to claim CPU core: 2"
00:05:30.538 }
00:05:30.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
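[Note] The failure above is expected: the first target was started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so when the second tries to re-enable core locks over RPC it collides on the shared core. The overlap can be checked with plain shell arithmetic:

    # Cores shared by the two masks; bit 2 set means core 2 is contested.
    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # -> overlap: 0x4

Hence the JSON-RPC error -32603 "Failed to claim CPU core: 2" while the other reactors stay up.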
00:05:30.538 13:09:44 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:30.538 13:09:44 -- common/autotest_common.sh@653 -- # es=1 00:05:30.538 13:09:44 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:30.538 13:09:44 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:30.538 13:09:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:30.538 13:09:44 -- event/cpu_locks.sh@158 -- # waitforlisten 57998 /var/tmp/spdk.sock 00:05:30.538 13:09:44 -- common/autotest_common.sh@829 -- # '[' -z 57998 ']' 00:05:30.538 13:09:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.538 13:09:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.538 13:09:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.538 13:09:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.538 13:09:44 -- common/autotest_common.sh@10 -- # set +x 00:05:30.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.795 13:09:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.795 13:09:45 -- common/autotest_common.sh@862 -- # return 0 00:05:30.795 13:09:45 -- event/cpu_locks.sh@159 -- # waitforlisten 58016 /var/tmp/spdk2.sock 00:05:30.795 13:09:45 -- common/autotest_common.sh@829 -- # '[' -z 58016 ']' 00:05:30.795 13:09:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.795 13:09:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.795 13:09:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.795 13:09:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.795 13:09:45 -- common/autotest_common.sh@10 -- # set +x 00:05:30.795 ************************************ 00:05:30.795 END TEST locking_overlapped_coremask_via_rpc 00:05:30.795 ************************************ 00:05:30.795 13:09:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.795 13:09:45 -- common/autotest_common.sh@862 -- # return 0 00:05:30.795 13:09:45 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:30.795 13:09:45 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:30.795 13:09:45 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:30.795 13:09:45 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:30.795 00:05:30.795 real 0m2.812s 00:05:30.795 user 0m1.101s 00:05:30.795 sys 0m0.141s 00:05:30.795 13:09:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:30.795 13:09:45 -- common/autotest_common.sh@10 -- # set +x 00:05:30.795 13:09:45 -- event/cpu_locks.sh@174 -- # cleanup 00:05:30.795 13:09:45 -- event/cpu_locks.sh@15 -- # [[ -z 57998 ]] 00:05:30.795 13:09:45 -- event/cpu_locks.sh@15 -- # killprocess 57998 00:05:30.795 13:09:45 -- common/autotest_common.sh@936 -- # '[' -z 57998 ']' 00:05:30.795 13:09:45 -- common/autotest_common.sh@940 -- # kill -0 57998 00:05:30.795 13:09:45 -- common/autotest_common.sh@941 -- # uname 00:05:30.795 13:09:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:30.795 13:09:45 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 57998 00:05:30.795 killing process with pid 57998 00:05:30.795 13:09:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:30.795 13:09:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:30.795 13:09:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57998' 00:05:30.796 13:09:45 -- common/autotest_common.sh@955 -- # kill 57998 00:05:30.796 13:09:45 -- common/autotest_common.sh@960 -- # wait 57998 00:05:32.170 13:09:46 -- event/cpu_locks.sh@16 -- # [[ -z 58016 ]] 00:05:32.170 13:09:46 -- event/cpu_locks.sh@16 -- # killprocess 58016 00:05:32.170 13:09:46 -- common/autotest_common.sh@936 -- # '[' -z 58016 ']' 00:05:32.170 13:09:46 -- common/autotest_common.sh@940 -- # kill -0 58016 00:05:32.170 13:09:46 -- common/autotest_common.sh@941 -- # uname 00:05:32.170 13:09:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:32.170 13:09:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58016 00:05:32.170 killing process with pid 58016 00:05:32.170 13:09:46 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:32.170 13:09:46 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:32.170 13:09:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58016' 00:05:32.170 13:09:46 -- common/autotest_common.sh@955 -- # kill 58016 00:05:32.170 13:09:46 -- common/autotest_common.sh@960 -- # wait 58016 00:05:33.543 13:09:47 -- event/cpu_locks.sh@18 -- # rm -f 00:05:33.543 13:09:47 -- event/cpu_locks.sh@1 -- # cleanup 00:05:33.543 13:09:47 -- event/cpu_locks.sh@15 -- # [[ -z 57998 ]] 00:05:33.543 13:09:47 -- event/cpu_locks.sh@15 -- # killprocess 57998 00:05:33.543 13:09:47 -- common/autotest_common.sh@936 -- # '[' -z 57998 ']' 00:05:33.543 13:09:47 -- common/autotest_common.sh@940 -- # kill -0 57998 00:05:33.543 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (57998) - No such process 00:05:33.543 Process with pid 57998 is not found 00:05:33.543 Process with pid 58016 is not found 00:05:33.543 13:09:47 -- common/autotest_common.sh@963 -- # echo 'Process with pid 57998 is not found' 00:05:33.543 13:09:47 -- event/cpu_locks.sh@16 -- # [[ -z 58016 ]] 00:05:33.543 13:09:47 -- event/cpu_locks.sh@16 -- # killprocess 58016 00:05:33.543 13:09:47 -- common/autotest_common.sh@936 -- # '[' -z 58016 ']' 00:05:33.543 13:09:47 -- common/autotest_common.sh@940 -- # kill -0 58016 00:05:33.543 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58016) - No such process 00:05:33.543 13:09:47 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58016 is not found' 00:05:33.543 13:09:47 -- event/cpu_locks.sh@18 -- # rm -f 00:05:33.543 ************************************ 00:05:33.543 END TEST cpu_locks 00:05:33.543 ************************************ 00:05:33.543 00:05:33.543 real 0m29.530s 00:05:33.543 user 0m49.664s 00:05:33.543 sys 0m4.320s 00:05:33.543 13:09:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.543 13:09:47 -- common/autotest_common.sh@10 -- # set +x 00:05:33.543 ************************************ 00:05:33.543 END TEST event 00:05:33.543 ************************************ 00:05:33.543 00:05:33.543 real 0m55.211s 00:05:33.543 user 1m38.893s 00:05:33.543 sys 0m7.016s 00:05:33.543 13:09:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.543 13:09:47 -- common/autotest_common.sh@10 -- # set +x 00:05:33.543 13:09:47 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:33.543 13:09:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.543 13:09:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.543 13:09:47 -- common/autotest_common.sh@10 -- # set +x 00:05:33.543 ************************************ 00:05:33.543 START TEST thread 00:05:33.543 ************************************ 00:05:33.543 13:09:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:33.543 * Looking for test storage... 00:05:33.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:33.543 13:09:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:33.543 13:09:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:33.543 13:09:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:33.543 13:09:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:33.543 13:09:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:33.543 13:09:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:33.543 13:09:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:33.543 13:09:47 -- scripts/common.sh@335 -- # IFS=.-: 00:05:33.543 13:09:47 -- scripts/common.sh@335 -- # read -ra ver1 00:05:33.543 13:09:47 -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.543 13:09:47 -- scripts/common.sh@336 -- # read -ra ver2 00:05:33.543 13:09:47 -- scripts/common.sh@337 -- # local 'op=<' 00:05:33.543 13:09:47 -- scripts/common.sh@339 -- # ver1_l=2 00:05:33.543 13:09:47 -- scripts/common.sh@340 -- # ver2_l=1 00:05:33.543 13:09:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:33.543 13:09:47 -- scripts/common.sh@343 -- # case "$op" in 00:05:33.543 13:09:47 -- scripts/common.sh@344 -- # : 1 00:05:33.543 13:09:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:33.543 13:09:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.543 13:09:47 -- scripts/common.sh@364 -- # decimal 1 00:05:33.543 13:09:47 -- scripts/common.sh@352 -- # local d=1 00:05:33.543 13:09:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.543 13:09:47 -- scripts/common.sh@354 -- # echo 1 00:05:33.543 13:09:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:33.543 13:09:47 -- scripts/common.sh@365 -- # decimal 2 00:05:33.543 13:09:47 -- scripts/common.sh@352 -- # local d=2 00:05:33.543 13:09:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.543 13:09:47 -- scripts/common.sh@354 -- # echo 2 00:05:33.543 13:09:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:33.543 13:09:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:33.543 13:09:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:33.543 13:09:47 -- scripts/common.sh@367 -- # return 0 00:05:33.543 13:09:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.543 13:09:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:33.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.543 --rc genhtml_branch_coverage=1 00:05:33.543 --rc genhtml_function_coverage=1 00:05:33.543 --rc genhtml_legend=1 00:05:33.543 --rc geninfo_all_blocks=1 00:05:33.543 --rc geninfo_unexecuted_blocks=1 00:05:33.543 00:05:33.543 ' 00:05:33.543 13:09:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:33.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.543 --rc genhtml_branch_coverage=1 00:05:33.543 --rc genhtml_function_coverage=1 00:05:33.543 --rc genhtml_legend=1 00:05:33.543 --rc geninfo_all_blocks=1 00:05:33.543 --rc geninfo_unexecuted_blocks=1 00:05:33.543 00:05:33.543 ' 00:05:33.543 13:09:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:33.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.543 --rc genhtml_branch_coverage=1 00:05:33.543 --rc genhtml_function_coverage=1 00:05:33.543 --rc genhtml_legend=1 00:05:33.543 --rc geninfo_all_blocks=1 00:05:33.543 --rc geninfo_unexecuted_blocks=1 00:05:33.543 00:05:33.543 ' 00:05:33.543 13:09:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:33.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.543 --rc genhtml_branch_coverage=1 00:05:33.543 --rc genhtml_function_coverage=1 00:05:33.543 --rc genhtml_legend=1 00:05:33.543 --rc geninfo_all_blocks=1 00:05:33.543 --rc geninfo_unexecuted_blocks=1 00:05:33.543 00:05:33.543 ' 00:05:33.543 13:09:47 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:33.543 13:09:47 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:33.543 13:09:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.543 13:09:47 -- common/autotest_common.sh@10 -- # set +x 00:05:33.543 ************************************ 00:05:33.543 START TEST thread_poller_perf 00:05:33.543 ************************************ 00:05:33.543 13:09:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:33.543 [2024-12-16 13:09:47.973093] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:33.543 [2024-12-16 13:09:47.973246] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58166 ]
00:05:33.543 [2024-12-16 13:09:48.114112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:33.801 [2024-12-16 13:09:48.252750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:33.801 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:05:35.175 [2024-12-16T13:09:49.749Z] ======================================
00:05:35.175 [2024-12-16T13:09:49.749Z] busy:2607883352 (cyc)
00:05:35.175 [2024-12-16T13:09:49.749Z] total_run_count: 387000
00:05:35.175 [2024-12-16T13:09:49.749Z] tsc_hz: 2600000000 (cyc)
00:05:35.175 [2024-12-16T13:09:49.749Z] ======================================
00:05:35.175 [2024-12-16T13:09:49.749Z] poller_cost: 6738 (cyc), 2591 (nsec)
00:05:35.175
00:05:35.175 ************************************
00:05:35.175 END TEST thread_poller_perf
00:05:35.175 ************************************
00:05:35.175 real 0m1.517s
00:05:35.175 user 0m1.340s
00:05:35.175 sys 0m0.069s
00:05:35.175 13:09:49 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:35.175 13:09:49 -- common/autotest_common.sh@10 -- # set +x
00:05:35.175 13:09:49 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:35.175 13:09:49 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:05:35.175 13:09:49 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:35.175 13:09:49 -- common/autotest_common.sh@10 -- # set +x
00:05:35.175 ************************************
00:05:35.175 START TEST thread_poller_perf
00:05:35.175 ************************************
00:05:35.175 13:09:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:35.175 [2024-12-16 13:09:49.538078] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:35.175 [2024-12-16 13:09:49.538289] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58208 ]
00:05:35.175 [2024-12-16 13:09:49.685431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:35.434 Running 1000 pollers for 1 seconds with 0 microseconds period.
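[Note] The first summary block above is straight arithmetic over the reported counters: poller_cost is the busy cycle count divided by total_run_count, converted to nanoseconds via tsc_hz. A quick check with the numbers from that run:

    # poller_cost (cyc) = busy / total_run_count; (nsec) = cyc * 1e9 / tsc_hz
    echo $(( 2607883352 / 387000 ))                            # -> 6738 (cyc)
    awk 'BEGIN { printf "%d\n", 6738 * 1e9 / 2600000000 }'     # -> 2591 (nsec)

The second run, now starting, uses -l 0 (a 0 microseconds poller period), which is why its per-call cost comes out far lower below.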
00:05:35.434 [2024-12-16 13:09:49.824159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.809 [2024-12-16T13:09:51.383Z] ====================================== 00:05:36.809 [2024-12-16T13:09:51.383Z] busy:2603378438 (cyc) 00:05:36.809 [2024-12-16T13:09:51.383Z] total_run_count: 5321000 00:05:36.809 [2024-12-16T13:09:51.383Z] tsc_hz: 2600000000 (cyc) 00:05:36.809 [2024-12-16T13:09:51.383Z] ====================================== 00:05:36.809 [2024-12-16T13:09:51.383Z] poller_cost: 489 (cyc), 188 (nsec) 00:05:36.809 ************************************ 00:05:36.809 END TEST thread_poller_perf 00:05:36.809 ************************************ 00:05:36.809 00:05:36.809 real 0m1.518s 00:05:36.809 user 0m1.343s 00:05:36.809 sys 0m0.068s 00:05:36.809 13:09:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.809 13:09:51 -- common/autotest_common.sh@10 -- # set +x 00:05:36.809 13:09:51 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:36.809 00:05:36.809 real 0m3.264s 00:05:36.809 user 0m2.780s 00:05:36.809 sys 0m0.251s 00:05:36.809 13:09:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.809 13:09:51 -- common/autotest_common.sh@10 -- # set +x 00:05:36.809 ************************************ 00:05:36.809 END TEST thread 00:05:36.809 ************************************ 00:05:36.809 13:09:51 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:36.809 13:09:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.809 13:09:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.809 13:09:51 -- common/autotest_common.sh@10 -- # set +x 00:05:36.809 ************************************ 00:05:36.809 START TEST accel 00:05:36.809 ************************************ 00:05:36.809 13:09:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:36.809 * Looking for test storage... 00:05:36.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:36.809 13:09:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:36.809 13:09:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:36.809 13:09:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:36.809 13:09:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:36.809 13:09:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:36.809 13:09:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:36.809 13:09:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:36.809 13:09:51 -- scripts/common.sh@335 -- # IFS=.-: 00:05:36.809 13:09:51 -- scripts/common.sh@335 -- # read -ra ver1 00:05:36.809 13:09:51 -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.809 13:09:51 -- scripts/common.sh@336 -- # read -ra ver2 00:05:36.809 13:09:51 -- scripts/common.sh@337 -- # local 'op=<' 00:05:36.809 13:09:51 -- scripts/common.sh@339 -- # ver1_l=2 00:05:36.809 13:09:51 -- scripts/common.sh@340 -- # ver2_l=1 00:05:36.809 13:09:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:36.809 13:09:51 -- scripts/common.sh@343 -- # case "$op" in 00:05:36.809 13:09:51 -- scripts/common.sh@344 -- # : 1 00:05:36.809 13:09:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:36.809 13:09:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.809 13:09:51 -- scripts/common.sh@364 -- # decimal 1 00:05:36.809 13:09:51 -- scripts/common.sh@352 -- # local d=1 00:05:36.809 13:09:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.809 13:09:51 -- scripts/common.sh@354 -- # echo 1 00:05:36.809 13:09:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:36.809 13:09:51 -- scripts/common.sh@365 -- # decimal 2 00:05:36.809 13:09:51 -- scripts/common.sh@352 -- # local d=2 00:05:36.809 13:09:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.809 13:09:51 -- scripts/common.sh@354 -- # echo 2 00:05:36.809 13:09:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:36.809 13:09:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:36.809 13:09:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:36.809 13:09:51 -- scripts/common.sh@367 -- # return 0 00:05:36.809 13:09:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.809 13:09:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:36.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.809 --rc genhtml_branch_coverage=1 00:05:36.809 --rc genhtml_function_coverage=1 00:05:36.809 --rc genhtml_legend=1 00:05:36.809 --rc geninfo_all_blocks=1 00:05:36.809 --rc geninfo_unexecuted_blocks=1 00:05:36.809 00:05:36.809 ' 00:05:36.809 13:09:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:36.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.809 --rc genhtml_branch_coverage=1 00:05:36.809 --rc genhtml_function_coverage=1 00:05:36.809 --rc genhtml_legend=1 00:05:36.809 --rc geninfo_all_blocks=1 00:05:36.809 --rc geninfo_unexecuted_blocks=1 00:05:36.809 00:05:36.809 ' 00:05:36.809 13:09:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:36.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.809 --rc genhtml_branch_coverage=1 00:05:36.809 --rc genhtml_function_coverage=1 00:05:36.809 --rc genhtml_legend=1 00:05:36.809 --rc geninfo_all_blocks=1 00:05:36.809 --rc geninfo_unexecuted_blocks=1 00:05:36.809 00:05:36.809 ' 00:05:36.809 13:09:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:36.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.809 --rc genhtml_branch_coverage=1 00:05:36.809 --rc genhtml_function_coverage=1 00:05:36.809 --rc genhtml_legend=1 00:05:36.809 --rc geninfo_all_blocks=1 00:05:36.809 --rc geninfo_unexecuted_blocks=1 00:05:36.809 00:05:36.809 ' 00:05:36.809 13:09:51 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:36.809 13:09:51 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:36.809 13:09:51 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:36.810 13:09:51 -- accel/accel.sh@59 -- # spdk_tgt_pid=58289 00:05:36.810 13:09:51 -- accel/accel.sh@60 -- # waitforlisten 58289 00:05:36.810 13:09:51 -- common/autotest_common.sh@829 -- # '[' -z 58289 ']' 00:05:36.810 13:09:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.810 13:09:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.810 13:09:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
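[Note] Once the accel target is up, the script snapshots the opcode-to-module assignments over RPC and flattens the JSON map into opc=module pairs with the jq filter traced below; each pair is then split on '=' (the repeated IFS== / read -r opc module steps) and every opcode's expected module is initialized to software. A rough sketch, assuming rpc_cmd is the suite's wrapper around SPDK's rpc.py:

    # Flatten {"copy": "software", ...} into "copy=software" lines, then parse them.
    declare -A expected_opcs
    while IFS='=' read -r opc module; do
        expected_opcs["$opc"]=software   # default expectation per the xtrace
    done <<< "$(rpc.py accel_get_opc_assignments \
                | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]')"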
00:05:36.810 13:09:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.810 13:09:51 -- common/autotest_common.sh@10 -- # set +x 00:05:36.810 13:09:51 -- accel/accel.sh@58 -- # build_accel_config 00:05:36.810 13:09:51 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:36.810 13:09:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:36.810 13:09:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.810 13:09:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.810 13:09:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:36.810 13:09:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:36.810 13:09:51 -- accel/accel.sh@41 -- # local IFS=, 00:05:36.810 13:09:51 -- accel/accel.sh@42 -- # jq -r . 00:05:36.810 [2024-12-16 13:09:51.296909] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:36.810 [2024-12-16 13:09:51.296998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58289 ] 00:05:37.069 [2024-12-16 13:09:51.437999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.069 [2024-12-16 13:09:51.575250] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:37.069 [2024-12-16 13:09:51.575543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.637 13:09:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.637 13:09:52 -- common/autotest_common.sh@862 -- # return 0 00:05:37.637 13:09:52 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:37.637 13:09:52 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:37.637 13:09:52 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:37.637 13:09:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.637 13:09:52 -- common/autotest_common.sh@10 -- # set +x 00:05:37.637 13:09:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.637 13:09:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.637 13:09:52 -- accel/accel.sh@64 -- # IFS== 00:05:37.637 13:09:52 -- accel/accel.sh@64 -- # read -r opc module 00:05:37.637 13:09:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:37.637 13:09:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.637 13:09:52 -- accel/accel.sh@64 -- # IFS== 00:05:37.637 13:09:52 -- accel/accel.sh@64 -- # read -r opc module 00:05:37.637 13:09:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:37.637 13:09:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.637 13:09:52 -- accel/accel.sh@64 -- # IFS== 00:05:37.637 13:09:52 -- accel/accel.sh@64 -- # read -r opc module 00:05:37.637 13:09:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:37.637 13:09:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.637 13:09:52 -- accel/accel.sh@64 -- # IFS== 00:05:37.637 13:09:52 -- accel/accel.sh@64 -- # read -r opc module 00:05:37.637 13:09:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:37.637 13:09:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.637 13:09:52 -- accel/accel.sh@64 -- # IFS== 00:05:37.637 13:09:52 -- accel/accel.sh@64 -- # read -r opc module 00:05:37.637 13:09:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:37.638 13:09:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.638 13:09:52 -- accel/accel.sh@64 -- # IFS== 00:05:37.638 13:09:52 -- accel/accel.sh@64 -- # read -r opc module 00:05:37.638 13:09:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:37.638 13:09:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.638 13:09:52 -- accel/accel.sh@64 -- # IFS== 00:05:37.638 13:09:52 -- accel/accel.sh@64 -- # read -r opc module 00:05:37.638 13:09:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:37.638 13:09:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.638 13:09:52 -- accel/accel.sh@64 -- # IFS== 00:05:37.638 13:09:52 -- accel/accel.sh@64 -- # read -r opc module 00:05:37.638 13:09:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:37.638 13:09:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.638 13:09:52 -- accel/accel.sh@64 -- # IFS== 00:05:37.638 13:09:52 -- accel/accel.sh@64 -- # read -r opc module 00:05:37.638 13:09:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:37.638 13:09:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.638 13:09:52 -- accel/accel.sh@64 -- # IFS== 00:05:37.638 13:09:52 -- accel/accel.sh@64 -- # read -r opc module 00:05:37.638 13:09:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:37.638 13:09:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.638 13:09:52 -- accel/accel.sh@64 -- # IFS== 00:05:37.638 13:09:52 -- accel/accel.sh@64 -- # read -r opc module 00:05:37.638 13:09:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:37.638 13:09:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.638 13:09:52 -- accel/accel.sh@64 -- # IFS== 00:05:37.638 13:09:52 -- accel/accel.sh@64 -- # read -r opc module 00:05:37.638 
13:09:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:37.638 13:09:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.638 13:09:52 -- accel/accel.sh@64 -- # IFS== 00:05:37.638 13:09:52 -- accel/accel.sh@64 -- # read -r opc module 00:05:37.638 13:09:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:37.638 13:09:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:37.638 13:09:52 -- accel/accel.sh@64 -- # IFS== 00:05:37.638 13:09:52 -- accel/accel.sh@64 -- # read -r opc module 00:05:37.638 13:09:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:37.638 13:09:52 -- accel/accel.sh@67 -- # killprocess 58289 00:05:37.638 13:09:52 -- common/autotest_common.sh@936 -- # '[' -z 58289 ']' 00:05:37.638 13:09:52 -- common/autotest_common.sh@940 -- # kill -0 58289 00:05:37.638 13:09:52 -- common/autotest_common.sh@941 -- # uname 00:05:37.638 13:09:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:37.638 13:09:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58289 00:05:37.638 13:09:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:37.638 13:09:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:37.638 13:09:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58289' 00:05:37.638 killing process with pid 58289 00:05:37.638 13:09:52 -- common/autotest_common.sh@955 -- # kill 58289 00:05:37.638 13:09:52 -- common/autotest_common.sh@960 -- # wait 58289 00:05:39.019 13:09:53 -- accel/accel.sh@68 -- # trap - ERR 00:05:39.019 13:09:53 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:39.019 13:09:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:39.019 13:09:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.019 13:09:53 -- common/autotest_common.sh@10 -- # set +x 00:05:39.019 13:09:53 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:05:39.019 13:09:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:39.019 13:09:53 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.019 13:09:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:39.019 13:09:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.019 13:09:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.019 13:09:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:39.019 13:09:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:39.019 13:09:53 -- accel/accel.sh@41 -- # local IFS=, 00:05:39.019 13:09:53 -- accel/accel.sh@42 -- # jq -r . 
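The teardown traced just before the accel_help run is autotest_common.sh's killprocess: it probes the PID with kill -0, refuses to signal anything whose command name is sudo, then kills and reaps the process, which is why the log pairs "killing process with pid 58289" with a wait. A minimal sketch of that pattern, assuming bash (the real helper carries extra logging and platform branches beyond this):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2> /dev/null || return 0          # already gone: nothing to do
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name == sudo ]] && return 1      # never SIGTERM a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                              # reap; a killed child exits non-zero
    }

wait can only reap children of the current shell, which holds here because the same harness launched spdk_tgt earlier in this log; the reaped reactor_0 exit is what the trailing "wait 58289" absorbs.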
00:05:39.019 13:09:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.019 13:09:53 -- common/autotest_common.sh@10 -- # set +x 00:05:39.019 13:09:53 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:39.019 13:09:53 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:39.019 13:09:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.019 13:09:53 -- common/autotest_common.sh@10 -- # set +x 00:05:39.019 ************************************ 00:05:39.019 START TEST accel_missing_filename 00:05:39.019 ************************************ 00:05:39.019 13:09:53 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:05:39.019 13:09:53 -- common/autotest_common.sh@650 -- # local es=0 00:05:39.019 13:09:53 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:39.019 13:09:53 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:39.019 13:09:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.019 13:09:53 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:39.019 13:09:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.019 13:09:53 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:05:39.019 13:09:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:39.019 13:09:53 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.020 13:09:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:39.020 13:09:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.020 13:09:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.020 13:09:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:39.020 13:09:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:39.020 13:09:53 -- accel/accel.sh@41 -- # local IFS=, 00:05:39.020 13:09:53 -- accel/accel.sh@42 -- # jq -r . 00:05:39.020 [2024-12-16 13:09:53.455770] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:39.020 [2024-12-16 13:09:53.455877] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58349 ] 00:05:39.278 [2024-12-16 13:09:53.602299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.278 [2024-12-16 13:09:53.740150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.536 [2024-12-16 13:09:53.850859] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:39.795 [2024-12-16 13:09:54.112355] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:39.795 A filename is required. 
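That failure is the whole point of accel_missing_filename: compress has no default input, so accel_perf must exit non-zero, and run_test wraps the command in NOT, which succeeds only when the wrapped command fails. The es juggling on the lines that follow (234, then 106, then 1) is the wrapper normalizing the raw exit status: anything above 128 is a signal death, so 128 is subtracted before the code is collapsed to 1 and asserted non-zero. A minimal sketch of that inversion, assuming bash (the real NOT in autotest_common.sh also collapses a case table of known codes, elided here):

    NOT() {
        local es=0
        "$@" || es=$?
        ((es > 128)) && es=$((es - 128))  # 128+N means death by signal N
        ((es != 0))                       # success only if the wrapped command failed
    }

    NOT false && echo "expected failure observed"   # prints; NOT true would itself exit 1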
00:05:39.795 13:09:54 -- common/autotest_common.sh@653 -- # es=234 00:05:39.795 13:09:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:39.795 13:09:54 -- common/autotest_common.sh@662 -- # es=106 00:05:39.795 13:09:54 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:39.795 13:09:54 -- common/autotest_common.sh@670 -- # es=1 00:05:39.795 13:09:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:39.795 00:05:39.795 real 0m0.906s 00:05:39.795 user 0m0.702s 00:05:39.795 sys 0m0.125s 00:05:39.795 13:09:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.795 ************************************ 00:05:39.795 13:09:54 -- common/autotest_common.sh@10 -- # set +x 00:05:39.795 END TEST accel_missing_filename 00:05:39.795 ************************************ 00:05:40.051 13:09:54 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:40.051 13:09:54 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:40.051 13:09:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.051 13:09:54 -- common/autotest_common.sh@10 -- # set +x 00:05:40.051 ************************************ 00:05:40.051 START TEST accel_compress_verify 00:05:40.051 ************************************ 00:05:40.051 13:09:54 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:40.051 13:09:54 -- common/autotest_common.sh@650 -- # local es=0 00:05:40.051 13:09:54 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:40.051 13:09:54 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:40.051 13:09:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.051 13:09:54 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:40.051 13:09:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.051 13:09:54 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:40.052 13:09:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:40.052 13:09:54 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.052 13:09:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.052 13:09:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.052 13:09:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.052 13:09:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.052 13:09:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.052 13:09:54 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.052 13:09:54 -- accel/accel.sh@42 -- # jq -r . 00:05:40.052 [2024-12-16 13:09:54.409239] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:40.052 [2024-12-16 13:09:54.409346] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58380 ] 00:05:40.052 [2024-12-16 13:09:54.556818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.310 [2024-12-16 13:09:54.695553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.310 [2024-12-16 13:09:54.806214] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.569 [2024-12-16 13:09:55.066270] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:40.830 00:05:40.830 Compression does not support the verify option, aborting. 00:05:40.830 13:09:55 -- common/autotest_common.sh@653 -- # es=161 00:05:40.830 13:09:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:40.830 13:09:55 -- common/autotest_common.sh@662 -- # es=33 00:05:40.830 13:09:55 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:40.830 13:09:55 -- common/autotest_common.sh@670 -- # es=1 00:05:40.830 13:09:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:40.830 00:05:40.830 real 0m0.899s 00:05:40.830 user 0m0.708s 00:05:40.830 sys 0m0.112s 00:05:40.830 13:09:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.830 ************************************ 00:05:40.830 END TEST accel_compress_verify 00:05:40.830 ************************************ 00:05:40.830 13:09:55 -- common/autotest_common.sh@10 -- # set +x 00:05:40.830 13:09:55 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:40.830 13:09:55 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:40.830 13:09:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.830 13:09:55 -- common/autotest_common.sh@10 -- # set +x 00:05:40.830 ************************************ 00:05:40.830 START TEST accel_wrong_workload 00:05:40.830 ************************************ 00:05:40.830 13:09:55 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:05:40.830 13:09:55 -- common/autotest_common.sh@650 -- # local es=0 00:05:40.830 13:09:55 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:40.830 13:09:55 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:40.830 13:09:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.830 13:09:55 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:40.830 13:09:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.830 13:09:55 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:05:40.830 13:09:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:40.830 13:09:55 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.830 13:09:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.830 13:09:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.830 13:09:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.830 13:09:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.830 13:09:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.830 13:09:55 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.830 13:09:55 -- accel/accel.sh@42 -- # jq -r . 
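The trace ending just above is build_accel_config, which runs before every accel_perf launch in this section: it collects optional JSON fragments into the accel_json_cfg array (the -gt 0 guards are feature flags that all stay off in this run, and the [[ -n '' ]] check is an empty module override), joins whatever accumulated with a comma IFS, and pipes the document through jq -r . so the app can read it from the /dev/fd/62 process substitution named in each command line. A rough sketch of that shape, assuming bash and jq — the env-var and method names below are illustrative, not taken from this log:

    build_accel_config() {
        accel_json_cfg=()
        # each enabled feature would append one RPC fragment, e.g.:
        [[ ${SPDK_TEST_ACCEL_DSA:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
        local IFS=,
        jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
    }

    # the harness then feeds the output straight to the binary under test:
    # accel_perf -c <(build_accel_config) -t 1 -w foobar

With every flag at 0, as here, jq simply pretty-prints an accel subsystem with an empty config array.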
00:05:40.830 Unsupported workload type: foobar 00:05:40.830 [2024-12-16 13:09:55.369288] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:40.830 accel_perf options: 00:05:40.830 [-h help message] 00:05:40.830 [-q queue depth per core] 00:05:40.830 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:40.830 [-T number of threads per core 00:05:40.830 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:40.830 [-t time in seconds] 00:05:40.830 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:40.830 [ dif_verify, , dif_generate, dif_generate_copy 00:05:40.830 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:40.830 [-l for compress/decompress workloads, name of uncompressed input file 00:05:40.830 [-S for crc32c workload, use this seed value (default 0) 00:05:40.830 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:40.830 [-f for fill workload, use this BYTE value (default 255) 00:05:40.830 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:40.830 [-y verify result if this switch is on] 00:05:40.830 [-a tasks to allocate per core (default: same value as -q)] 00:05:40.830 Can be used to spread operations across a wider range of memory. 00:05:40.830 13:09:55 -- common/autotest_common.sh@653 -- # es=1 00:05:40.830 13:09:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:40.830 13:09:55 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:40.830 13:09:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:40.830 00:05:40.830 real 0m0.055s 00:05:40.830 user 0m0.052s 00:05:40.830 sys 0m0.032s 00:05:40.830 13:09:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.830 ************************************ 00:05:40.830 END TEST accel_wrong_workload 00:05:40.830 ************************************ 00:05:40.830 13:09:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.090 13:09:55 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:41.090 13:09:55 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:41.090 13:09:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.090 13:09:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.090 ************************************ 00:05:41.090 START TEST accel_negative_buffers 00:05:41.090 ************************************ 00:05:41.090 13:09:55 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:41.090 13:09:55 -- common/autotest_common.sh@650 -- # local es=0 00:05:41.090 13:09:55 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:41.090 13:09:55 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:41.090 13:09:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.090 13:09:55 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:41.090 13:09:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.090 13:09:55 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:05:41.090 13:09:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:41.090 13:09:55 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:41.090 13:09:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:41.090 13:09:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.090 13:09:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.090 13:09:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:41.090 13:09:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:41.090 13:09:55 -- accel/accel.sh@41 -- # local IFS=, 00:05:41.090 13:09:55 -- accel/accel.sh@42 -- # jq -r . 00:05:41.090 -x option must be non-negative. 00:05:41.090 [2024-12-16 13:09:55.467187] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:41.090 accel_perf options: 00:05:41.090 [-h help message] 00:05:41.090 [-q queue depth per core] 00:05:41.090 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:41.090 [-T number of threads per core 00:05:41.090 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:41.090 [-t time in seconds] 00:05:41.090 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:41.090 [ dif_verify, , dif_generate, dif_generate_copy 00:05:41.090 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:41.090 [-l for compress/decompress workloads, name of uncompressed input file 00:05:41.090 [-S for crc32c workload, use this seed value (default 0) 00:05:41.090 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:41.090 [-f for fill workload, use this BYTE value (default 255) 00:05:41.090 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:41.090 [-y verify result if this switch is on] 00:05:41.090 [-a tasks to allocate per core (default: same value as -q)] 00:05:41.090 Can be used to spread operations across a wider range of memory. 
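Both option-validation tests end with accel_perf printing this same usage block. Per the listing, -x sets the number of xor source buffers with a stated minimum of 2, so the -x -1 passed by accel_negative_buffers can only ever reach the error path (note the tool's complaint says "non-negative" even though its own usage demands at least 2). For contrast, a well-formed xor invocation assembled from the options above would be:

    # expected-failure case exercised by this test:
    #   accel_perf -t 1 -w xor -y -x -1
    # a valid counterpart: run one second, verify results, two xor source buffers
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2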
00:05:41.090 13:09:55 -- common/autotest_common.sh@653 -- # es=1 00:05:41.090 13:09:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:41.090 13:09:55 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:41.090 13:09:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:41.090 00:05:41.090 real 0m0.058s 00:05:41.090 user 0m0.053s 00:05:41.090 sys 0m0.031s 00:05:41.090 13:09:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.090 ************************************ 00:05:41.090 END TEST accel_negative_buffers 00:05:41.090 ************************************ 00:05:41.090 13:09:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.090 13:09:55 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:41.090 13:09:55 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:41.090 13:09:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.090 13:09:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.090 ************************************ 00:05:41.090 START TEST accel_crc32c 00:05:41.090 ************************************ 00:05:41.090 13:09:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:41.090 13:09:55 -- accel/accel.sh@16 -- # local accel_opc 00:05:41.090 13:09:55 -- accel/accel.sh@17 -- # local accel_module 00:05:41.090 13:09:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:41.090 13:09:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:41.090 13:09:55 -- accel/accel.sh@12 -- # build_accel_config 00:05:41.090 13:09:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:41.090 13:09:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.090 13:09:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.090 13:09:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:41.090 13:09:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:41.090 13:09:55 -- accel/accel.sh@41 -- # local IFS=, 00:05:41.090 13:09:55 -- accel/accel.sh@42 -- # jq -r . 00:05:41.090 [2024-12-16 13:09:55.571808] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:41.090 [2024-12-16 13:09:55.571910] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58453 ] 00:05:41.349 [2024-12-16 13:09:55.721086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.349 [2024-12-16 13:09:55.860775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.249 13:09:57 -- accel/accel.sh@18 -- # out=' 00:05:43.249 SPDK Configuration: 00:05:43.249 Core mask: 0x1 00:05:43.249 00:05:43.249 Accel Perf Configuration: 00:05:43.249 Workload Type: crc32c 00:05:43.249 CRC-32C seed: 32 00:05:43.249 Transfer size: 4096 bytes 00:05:43.249 Vector count 1 00:05:43.249 Module: software 00:05:43.249 Queue depth: 32 00:05:43.249 Allocate depth: 32 00:05:43.249 # threads/core: 1 00:05:43.249 Run time: 1 seconds 00:05:43.249 Verify: Yes 00:05:43.249 00:05:43.249 Running for 1 seconds... 
00:05:43.249 00:05:43.249 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:43.249 ------------------------------------------------------------------------------------ 00:05:43.249 0,0 604640/s 2361 MiB/s 0 0 00:05:43.249 ==================================================================================== 00:05:43.249 Total 604640/s 2361 MiB/s 0 0' 00:05:43.249 13:09:57 -- accel/accel.sh@20 -- # IFS=: 00:05:43.249 13:09:57 -- accel/accel.sh@20 -- # read -r var val 00:05:43.249 13:09:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:43.249 13:09:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:43.249 13:09:57 -- accel/accel.sh@12 -- # build_accel_config 00:05:43.249 13:09:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:43.249 13:09:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.249 13:09:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.249 13:09:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:43.249 13:09:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:43.249 13:09:57 -- accel/accel.sh@41 -- # local IFS=, 00:05:43.249 13:09:57 -- accel/accel.sh@42 -- # jq -r . 00:05:43.249 [2024-12-16 13:09:57.469489] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:43.249 [2024-12-16 13:09:57.469569] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58472 ] 00:05:43.249 [2024-12-16 13:09:57.609886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.249 [2024-12-16 13:09:57.749252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.508 13:09:57 -- accel/accel.sh@21 -- # val= 00:05:43.508 13:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # IFS=: 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # read -r var val 00:05:43.508 13:09:57 -- accel/accel.sh@21 -- # val= 00:05:43.508 13:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # IFS=: 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # read -r var val 00:05:43.508 13:09:57 -- accel/accel.sh@21 -- # val=0x1 00:05:43.508 13:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # IFS=: 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # read -r var val 00:05:43.508 13:09:57 -- accel/accel.sh@21 -- # val= 00:05:43.508 13:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # IFS=: 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # read -r var val 00:05:43.508 13:09:57 -- accel/accel.sh@21 -- # val= 00:05:43.508 13:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # IFS=: 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # read -r var val 00:05:43.508 13:09:57 -- accel/accel.sh@21 -- # val=crc32c 00:05:43.508 13:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.508 13:09:57 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # IFS=: 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # read -r var val 00:05:43.508 13:09:57 -- accel/accel.sh@21 -- # val=32 00:05:43.508 13:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # IFS=: 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # read -r var val 00:05:43.508 13:09:57 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:43.508 13:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # IFS=: 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # read -r var val 00:05:43.508 13:09:57 -- accel/accel.sh@21 -- # val= 00:05:43.508 13:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # IFS=: 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # read -r var val 00:05:43.508 13:09:57 -- accel/accel.sh@21 -- # val=software 00:05:43.508 13:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.508 13:09:57 -- accel/accel.sh@23 -- # accel_module=software 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # IFS=: 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # read -r var val 00:05:43.508 13:09:57 -- accel/accel.sh@21 -- # val=32 00:05:43.508 13:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # IFS=: 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # read -r var val 00:05:43.508 13:09:57 -- accel/accel.sh@21 -- # val=32 00:05:43.508 13:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # IFS=: 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # read -r var val 00:05:43.508 13:09:57 -- accel/accel.sh@21 -- # val=1 00:05:43.508 13:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # IFS=: 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # read -r var val 00:05:43.508 13:09:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:43.508 13:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # IFS=: 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # read -r var val 00:05:43.508 13:09:57 -- accel/accel.sh@21 -- # val=Yes 00:05:43.508 13:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # IFS=: 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # read -r var val 00:05:43.508 13:09:57 -- accel/accel.sh@21 -- # val= 00:05:43.508 13:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # IFS=: 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # read -r var val 00:05:43.508 13:09:57 -- accel/accel.sh@21 -- # val= 00:05:43.508 13:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # IFS=: 00:05:43.508 13:09:57 -- accel/accel.sh@20 -- # read -r var val 00:05:44.933 13:09:59 -- accel/accel.sh@21 -- # val= 00:05:44.933 13:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.933 13:09:59 -- accel/accel.sh@20 -- # IFS=: 00:05:44.933 13:09:59 -- accel/accel.sh@20 -- # read -r var val 00:05:44.933 13:09:59 -- accel/accel.sh@21 -- # val= 00:05:44.933 13:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.933 13:09:59 -- accel/accel.sh@20 -- # IFS=: 00:05:44.933 13:09:59 -- accel/accel.sh@20 -- # read -r var val 00:05:44.933 13:09:59 -- accel/accel.sh@21 -- # val= 00:05:44.933 13:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.933 13:09:59 -- accel/accel.sh@20 -- # IFS=: 00:05:44.933 13:09:59 -- accel/accel.sh@20 -- # read -r var val 00:05:44.933 13:09:59 -- accel/accel.sh@21 -- # val= 00:05:44.933 13:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.933 13:09:59 -- accel/accel.sh@20 -- # IFS=: 00:05:44.933 13:09:59 -- accel/accel.sh@20 -- # read -r var val 00:05:44.933 13:09:59 -- accel/accel.sh@21 -- # val= 00:05:44.933 13:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.933 13:09:59 -- accel/accel.sh@20 -- # IFS=: 00:05:44.933 13:09:59 -- 
accel/accel.sh@20 -- # read -r var val 00:05:44.933 13:09:59 -- accel/accel.sh@21 -- # val= 00:05:44.933 13:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.933 13:09:59 -- accel/accel.sh@20 -- # IFS=: 00:05:44.933 13:09:59 -- accel/accel.sh@20 -- # read -r var val 00:05:44.933 13:09:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:44.933 13:09:59 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:44.933 13:09:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.933 00:05:44.933 real 0m3.787s 00:05:44.933 user 0m3.352s 00:05:44.933 sys 0m0.234s 00:05:44.933 ************************************ 00:05:44.933 END TEST accel_crc32c 00:05:44.933 ************************************ 00:05:44.933 13:09:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.933 13:09:59 -- common/autotest_common.sh@10 -- # set +x 00:05:44.933 13:09:59 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:44.933 13:09:59 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:44.933 13:09:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.933 13:09:59 -- common/autotest_common.sh@10 -- # set +x 00:05:44.933 ************************************ 00:05:44.933 START TEST accel_crc32c_C2 00:05:44.933 ************************************ 00:05:44.933 13:09:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:44.933 13:09:59 -- accel/accel.sh@16 -- # local accel_opc 00:05:44.933 13:09:59 -- accel/accel.sh@17 -- # local accel_module 00:05:44.933 13:09:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:44.933 13:09:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:44.933 13:09:59 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.933 13:09:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:44.933 13:09:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.933 13:09:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.933 13:09:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:44.933 13:09:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:44.933 13:09:59 -- accel/accel.sh@41 -- # local IFS=, 00:05:44.933 13:09:59 -- accel/accel.sh@42 -- # jq -r . 00:05:44.933 [2024-12-16 13:09:59.421505] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:44.933 [2024-12-16 13:09:59.421611] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58509 ] 00:05:45.212 [2024-12-16 13:09:59.565857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.212 [2024-12-16 13:09:59.704771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.113 13:10:01 -- accel/accel.sh@18 -- # out=' 00:05:47.113 SPDK Configuration: 00:05:47.113 Core mask: 0x1 00:05:47.113 00:05:47.113 Accel Perf Configuration: 00:05:47.113 Workload Type: crc32c 00:05:47.113 CRC-32C seed: 0 00:05:47.113 Transfer size: 4096 bytes 00:05:47.113 Vector count 2 00:05:47.113 Module: software 00:05:47.113 Queue depth: 32 00:05:47.113 Allocate depth: 32 00:05:47.113 # threads/core: 1 00:05:47.113 Run time: 1 seconds 00:05:47.113 Verify: Yes 00:05:47.113 00:05:47.113 Running for 1 seconds... 
00:05:47.113 00:05:47.113 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:47.113 ------------------------------------------------------------------------------------ 00:05:47.113 0,0 508736/s 3974 MiB/s 0 0 00:05:47.113 ==================================================================================== 00:05:47.113 Total 508736/s 1987 MiB/s 0 0' 00:05:47.113 13:10:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:47.113 13:10:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.113 13:10:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.113 13:10:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:47.113 13:10:01 -- accel/accel.sh@12 -- # build_accel_config 00:05:47.113 13:10:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:47.113 13:10:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.114 13:10:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.114 13:10:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:47.114 13:10:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:47.114 13:10:01 -- accel/accel.sh@41 -- # local IFS=, 00:05:47.114 13:10:01 -- accel/accel.sh@42 -- # jq -r . 00:05:47.114 [2024-12-16 13:10:01.319136] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:47.114 [2024-12-16 13:10:01.319250] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58535 ] 00:05:47.114 [2024-12-16 13:10:01.465216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.114 [2024-12-16 13:10:01.641154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.375 13:10:01 -- accel/accel.sh@21 -- # val= 00:05:47.375 13:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.375 13:10:01 -- accel/accel.sh@21 -- # val= 00:05:47.375 13:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.375 13:10:01 -- accel/accel.sh@21 -- # val=0x1 00:05:47.375 13:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.375 13:10:01 -- accel/accel.sh@21 -- # val= 00:05:47.375 13:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.375 13:10:01 -- accel/accel.sh@21 -- # val= 00:05:47.375 13:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.375 13:10:01 -- accel/accel.sh@21 -- # val=crc32c 00:05:47.375 13:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.375 13:10:01 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.375 13:10:01 -- accel/accel.sh@21 -- # val=0 00:05:47.375 13:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.375 13:10:01 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:47.375 13:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.375 13:10:01 -- accel/accel.sh@21 -- # val= 00:05:47.375 13:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.375 13:10:01 -- accel/accel.sh@21 -- # val=software 00:05:47.375 13:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.375 13:10:01 -- accel/accel.sh@23 -- # accel_module=software 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.375 13:10:01 -- accel/accel.sh@21 -- # val=32 00:05:47.375 13:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.375 13:10:01 -- accel/accel.sh@21 -- # val=32 00:05:47.375 13:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.375 13:10:01 -- accel/accel.sh@21 -- # val=1 00:05:47.375 13:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.375 13:10:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:47.375 13:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.375 13:10:01 -- accel/accel.sh@21 -- # val=Yes 00:05:47.375 13:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.375 13:10:01 -- accel/accel.sh@21 -- # val= 00:05:47.375 13:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.375 13:10:01 -- accel/accel.sh@21 -- # val= 00:05:47.375 13:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.375 13:10:01 -- accel/accel.sh@20 -- # read -r var val 00:05:49.284 13:10:03 -- accel/accel.sh@21 -- # val= 00:05:49.284 13:10:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.284 13:10:03 -- accel/accel.sh@20 -- # IFS=: 00:05:49.284 13:10:03 -- accel/accel.sh@20 -- # read -r var val 00:05:49.284 13:10:03 -- accel/accel.sh@21 -- # val= 00:05:49.284 13:10:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.284 13:10:03 -- accel/accel.sh@20 -- # IFS=: 00:05:49.284 13:10:03 -- accel/accel.sh@20 -- # read -r var val 00:05:49.284 13:10:03 -- accel/accel.sh@21 -- # val= 00:05:49.284 13:10:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.284 13:10:03 -- accel/accel.sh@20 -- # IFS=: 00:05:49.284 13:10:03 -- accel/accel.sh@20 -- # read -r var val 00:05:49.284 13:10:03 -- accel/accel.sh@21 -- # val= 00:05:49.284 13:10:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.284 13:10:03 -- accel/accel.sh@20 -- # IFS=: 00:05:49.284 13:10:03 -- accel/accel.sh@20 -- # read -r var val 00:05:49.284 13:10:03 -- accel/accel.sh@21 -- # val= 00:05:49.284 13:10:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.284 13:10:03 -- accel/accel.sh@20 -- # IFS=: 00:05:49.284 13:10:03 -- 
accel/accel.sh@20 -- # read -r var val 00:05:49.284 13:10:03 -- accel/accel.sh@21 -- # val= 00:05:49.284 13:10:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.284 13:10:03 -- accel/accel.sh@20 -- # IFS=: 00:05:49.284 13:10:03 -- accel/accel.sh@20 -- # read -r var val 00:05:49.284 13:10:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:49.284 13:10:03 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:49.284 13:10:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.284 00:05:49.284 real 0m3.975s 00:05:49.284 user 0m3.539s 00:05:49.284 sys 0m0.229s 00:05:49.284 13:10:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.284 13:10:03 -- common/autotest_common.sh@10 -- # set +x 00:05:49.284 ************************************ 00:05:49.284 END TEST accel_crc32c_C2 00:05:49.284 ************************************ 00:05:49.284 13:10:03 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:49.284 13:10:03 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:49.284 13:10:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.284 13:10:03 -- common/autotest_common.sh@10 -- # set +x 00:05:49.284 ************************************ 00:05:49.284 START TEST accel_copy 00:05:49.284 ************************************ 00:05:49.284 13:10:03 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:05:49.284 13:10:03 -- accel/accel.sh@16 -- # local accel_opc 00:05:49.284 13:10:03 -- accel/accel.sh@17 -- # local accel_module 00:05:49.284 13:10:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:49.284 13:10:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:49.284 13:10:03 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.284 13:10:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.284 13:10:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.284 13:10:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.284 13:10:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.284 13:10:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.284 13:10:03 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.284 13:10:03 -- accel/accel.sh@42 -- # jq -r . 00:05:49.284 [2024-12-16 13:10:03.458862] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:49.284 [2024-12-16 13:10:03.458965] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58576 ] 00:05:49.284 [2024-12-16 13:10:03.604588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.284 [2024-12-16 13:10:03.745786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.184 13:10:05 -- accel/accel.sh@18 -- # out=' 00:05:51.184 SPDK Configuration: 00:05:51.184 Core mask: 0x1 00:05:51.184 00:05:51.184 Accel Perf Configuration: 00:05:51.184 Workload Type: copy 00:05:51.184 Transfer size: 4096 bytes 00:05:51.184 Vector count 1 00:05:51.184 Module: software 00:05:51.184 Queue depth: 32 00:05:51.184 Allocate depth: 32 00:05:51.184 # threads/core: 1 00:05:51.184 Run time: 1 seconds 00:05:51.184 Verify: Yes 00:05:51.184 00:05:51.184 Running for 1 seconds... 
00:05:51.184 00:05:51.184 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:51.184 ------------------------------------------------------------------------------------ 00:05:51.184 0,0 374720/s 1463 MiB/s 0 0 00:05:51.184 ==================================================================================== 00:05:51.184 Total 374720/s 1463 MiB/s 0 0' 00:05:51.184 13:10:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.184 13:10:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:51.184 13:10:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.185 13:10:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:51.185 13:10:05 -- accel/accel.sh@12 -- # build_accel_config 00:05:51.185 13:10:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:51.185 13:10:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.185 13:10:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.185 13:10:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:51.185 13:10:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:51.185 13:10:05 -- accel/accel.sh@41 -- # local IFS=, 00:05:51.185 13:10:05 -- accel/accel.sh@42 -- # jq -r . 00:05:51.185 [2024-12-16 13:10:05.349859] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:51.185 [2024-12-16 13:10:05.349942] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58602 ] 00:05:51.185 [2024-12-16 13:10:05.490262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.185 [2024-12-16 13:10:05.628471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.185 13:10:05 -- accel/accel.sh@21 -- # val= 00:05:51.185 13:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.185 13:10:05 -- accel/accel.sh@21 -- # val= 00:05:51.185 13:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.185 13:10:05 -- accel/accel.sh@21 -- # val=0x1 00:05:51.185 13:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.185 13:10:05 -- accel/accel.sh@21 -- # val= 00:05:51.185 13:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.185 13:10:05 -- accel/accel.sh@21 -- # val= 00:05:51.185 13:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.185 13:10:05 -- accel/accel.sh@21 -- # val=copy 00:05:51.185 13:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.185 13:10:05 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.185 13:10:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:51.185 13:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.185 13:10:05 -- 
accel/accel.sh@21 -- # val= 00:05:51.185 13:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.185 13:10:05 -- accel/accel.sh@21 -- # val=software 00:05:51.185 13:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.185 13:10:05 -- accel/accel.sh@23 -- # accel_module=software 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.185 13:10:05 -- accel/accel.sh@21 -- # val=32 00:05:51.185 13:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.185 13:10:05 -- accel/accel.sh@21 -- # val=32 00:05:51.185 13:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.185 13:10:05 -- accel/accel.sh@21 -- # val=1 00:05:51.185 13:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.185 13:10:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:51.185 13:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.185 13:10:05 -- accel/accel.sh@21 -- # val=Yes 00:05:51.185 13:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.185 13:10:05 -- accel/accel.sh@21 -- # val= 00:05:51.185 13:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.185 13:10:05 -- accel/accel.sh@21 -- # val= 00:05:51.185 13:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.185 13:10:05 -- accel/accel.sh@20 -- # read -r var val 00:05:53.085 13:10:07 -- accel/accel.sh@21 -- # val= 00:05:53.085 13:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.085 13:10:07 -- accel/accel.sh@20 -- # IFS=: 00:05:53.085 13:10:07 -- accel/accel.sh@20 -- # read -r var val 00:05:53.085 13:10:07 -- accel/accel.sh@21 -- # val= 00:05:53.085 13:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.085 13:10:07 -- accel/accel.sh@20 -- # IFS=: 00:05:53.085 13:10:07 -- accel/accel.sh@20 -- # read -r var val 00:05:53.085 13:10:07 -- accel/accel.sh@21 -- # val= 00:05:53.085 13:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.085 13:10:07 -- accel/accel.sh@20 -- # IFS=: 00:05:53.085 13:10:07 -- accel/accel.sh@20 -- # read -r var val 00:05:53.085 13:10:07 -- accel/accel.sh@21 -- # val= 00:05:53.085 13:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.085 13:10:07 -- accel/accel.sh@20 -- # IFS=: 00:05:53.085 13:10:07 -- accel/accel.sh@20 -- # read -r var val 00:05:53.085 13:10:07 -- accel/accel.sh@21 -- # val= 00:05:53.085 13:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.085 13:10:07 -- accel/accel.sh@20 -- # IFS=: 00:05:53.085 13:10:07 -- accel/accel.sh@20 -- # read -r var val 00:05:53.085 13:10:07 -- accel/accel.sh@21 -- # val= 00:05:53.085 13:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.085 13:10:07 -- accel/accel.sh@20 -- # IFS=: 00:05:53.085 13:10:07 -- 
accel/accel.sh@20 -- # read -r var val 00:05:53.085 13:10:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:53.085 13:10:07 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:53.085 13:10:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.085 00:05:53.085 real 0m3.781s 00:05:53.085 user 0m3.358s 00:05:53.085 sys 0m0.221s 00:05:53.085 ************************************ 00:05:53.085 END TEST accel_copy 00:05:53.085 13:10:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:53.085 13:10:07 -- common/autotest_common.sh@10 -- # set +x 00:05:53.085 ************************************ 00:05:53.085 13:10:07 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:53.085 13:10:07 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:53.085 13:10:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.085 13:10:07 -- common/autotest_common.sh@10 -- # set +x 00:05:53.085 ************************************ 00:05:53.085 START TEST accel_fill 00:05:53.085 ************************************ 00:05:53.085 13:10:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:53.085 13:10:07 -- accel/accel.sh@16 -- # local accel_opc 00:05:53.085 13:10:07 -- accel/accel.sh@17 -- # local accel_module 00:05:53.085 13:10:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:53.085 13:10:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:53.085 13:10:07 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.085 13:10:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:53.085 13:10:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.085 13:10:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.085 13:10:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:53.085 13:10:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:53.085 13:10:07 -- accel/accel.sh@41 -- # local IFS=, 00:05:53.085 13:10:07 -- accel/accel.sh@42 -- # jq -r . 00:05:53.085 [2024-12-16 13:10:07.290663] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:53.085 [2024-12-16 13:10:07.290766] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58643 ] 00:05:53.085 [2024-12-16 13:10:07.437606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.085 [2024-12-16 13:10:07.575763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.986 13:10:09 -- accel/accel.sh@18 -- # out=' 00:05:54.986 SPDK Configuration: 00:05:54.986 Core mask: 0x1 00:05:54.986 00:05:54.986 Accel Perf Configuration: 00:05:54.986 Workload Type: fill 00:05:54.986 Fill pattern: 0x80 00:05:54.986 Transfer size: 4096 bytes 00:05:54.986 Vector count 1 00:05:54.986 Module: software 00:05:54.986 Queue depth: 64 00:05:54.986 Allocate depth: 64 00:05:54.986 # threads/core: 1 00:05:54.986 Run time: 1 seconds 00:05:54.986 Verify: Yes 00:05:54.986 00:05:54.986 Running for 1 seconds... 
00:05:54.986 00:05:54.986 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:54.986 ------------------------------------------------------------------------------------ 00:05:54.986 0,0 602432/s 2353 MiB/s 0 0 00:05:54.986 ==================================================================================== 00:05:54.986 Total 602432/s 2353 MiB/s 0 0' 00:05:54.986 13:10:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:54.986 13:10:09 -- accel/accel.sh@20 -- # IFS=: 00:05:54.986 13:10:09 -- accel/accel.sh@20 -- # read -r var val 00:05:54.987 13:10:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:54.987 13:10:09 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.987 13:10:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:54.987 13:10:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.987 13:10:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.987 13:10:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:54.987 13:10:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:54.987 13:10:09 -- accel/accel.sh@41 -- # local IFS=, 00:05:54.987 13:10:09 -- accel/accel.sh@42 -- # jq -r . 00:05:54.987 [2024-12-16 13:10:09.184645] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:54.987 [2024-12-16 13:10:09.184750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58669 ] 00:05:54.987 [2024-12-16 13:10:09.334846] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.987 [2024-12-16 13:10:09.506506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.249 13:10:09 -- accel/accel.sh@21 -- # val= 00:05:55.249 13:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.249 13:10:09 -- accel/accel.sh@21 -- # val= 00:05:55.249 13:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.249 13:10:09 -- accel/accel.sh@21 -- # val=0x1 00:05:55.249 13:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.249 13:10:09 -- accel/accel.sh@21 -- # val= 00:05:55.249 13:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.249 13:10:09 -- accel/accel.sh@21 -- # val= 00:05:55.249 13:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.249 13:10:09 -- accel/accel.sh@21 -- # val=fill 00:05:55.249 13:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.249 13:10:09 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.249 13:10:09 -- accel/accel.sh@21 -- # val=0x80 00:05:55.249 13:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # read -r var val 
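
For reference, the fill pass whose results appear above is fully specified by the flags echoed in the xtrace: -w picks the workload, -f the fill byte (128 = the 0x80 pattern in the configuration block), -q the queue depth, -a the allocate depth, and -y enables verification. A minimal standalone sketch, assuming the binary path shown in this log and assuming the JSON config piped on fd 62 can be omitted (accel_perf then falls back to the software module):

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
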
00:05:55.249 13:10:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:55.249 13:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.249 13:10:09 -- accel/accel.sh@21 -- # val= 00:05:55.249 13:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.249 13:10:09 -- accel/accel.sh@21 -- # val=software 00:05:55.249 13:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.249 13:10:09 -- accel/accel.sh@23 -- # accel_module=software 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.249 13:10:09 -- accel/accel.sh@21 -- # val=64 00:05:55.249 13:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.249 13:10:09 -- accel/accel.sh@21 -- # val=64 00:05:55.249 13:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.249 13:10:09 -- accel/accel.sh@21 -- # val=1 00:05:55.249 13:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.249 13:10:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:55.249 13:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.249 13:10:09 -- accel/accel.sh@21 -- # val=Yes 00:05:55.249 13:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.249 13:10:09 -- accel/accel.sh@21 -- # val= 00:05:55.249 13:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.249 13:10:09 -- accel/accel.sh@21 -- # val= 00:05:55.249 13:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.249 13:10:09 -- accel/accel.sh@20 -- # read -r var val 00:05:56.634 13:10:11 -- accel/accel.sh@21 -- # val= 00:05:56.634 13:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.634 13:10:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.634 13:10:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.634 13:10:11 -- accel/accel.sh@21 -- # val= 00:05:56.634 13:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.634 13:10:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.634 13:10:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.634 13:10:11 -- accel/accel.sh@21 -- # val= 00:05:56.634 13:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.634 13:10:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.634 13:10:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.634 13:10:11 -- accel/accel.sh@21 -- # val= 00:05:56.634 13:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.634 13:10:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.634 13:10:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.634 13:10:11 -- accel/accel.sh@21 -- # val= 00:05:56.634 13:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.634 13:10:11 -- accel/accel.sh@20 -- # IFS=: 
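
The long val= / case "$var" in / IFS=: / read -r var val runs that dominate this log are bash xtrace output from accel.sh's report parser, which walks the captured accel_perf output one colon-separated field at a time and records the opcode and module it finds. A rough sketch of that loop, simplified from what the trace implies (the real accel.sh handles more fields than shown here):

  # Parse 'Workload Type:' and 'Module:' out of the captured accel_perf report in $out.
  while IFS=: read -r var val; do
      case "$var" in
          *'Workload Type'*) accel_opc=${val// /} ;;   # e.g. fill, copy_crc32c
          *'Module'*) accel_module=${val// /} ;;       # e.g. software
      esac
  done <<< "$out"
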
00:05:56.634 13:10:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.634 13:10:11 -- accel/accel.sh@21 -- # val= 00:05:56.634 13:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.634 13:10:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.634 13:10:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.634 13:10:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:56.634 13:10:11 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:56.634 13:10:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.634 00:05:56.634 real 0m3.855s 00:05:56.634 user 0m3.410s 00:05:56.634 sys 0m0.238s 00:05:56.634 13:10:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.634 ************************************ 00:05:56.634 END TEST accel_fill 00:05:56.634 ************************************ 00:05:56.634 13:10:11 -- common/autotest_common.sh@10 -- # set +x 00:05:56.634 13:10:11 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:56.634 13:10:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:56.634 13:10:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.634 13:10:11 -- common/autotest_common.sh@10 -- # set +x 00:05:56.634 ************************************ 00:05:56.634 START TEST accel_copy_crc32c 00:05:56.634 ************************************ 00:05:56.634 13:10:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:05:56.634 13:10:11 -- accel/accel.sh@16 -- # local accel_opc 00:05:56.634 13:10:11 -- accel/accel.sh@17 -- # local accel_module 00:05:56.634 13:10:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:56.634 13:10:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:56.634 13:10:11 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.634 13:10:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.634 13:10:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.634 13:10:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.634 13:10:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.634 13:10:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.634 13:10:11 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.634 13:10:11 -- accel/accel.sh@42 -- # jq -r . 00:05:56.634 [2024-12-16 13:10:11.198361] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:56.635 [2024-12-16 13:10:11.198465] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58710 ] 00:05:56.893 [2024-12-16 13:10:11.345804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.152 [2024-12-16 13:10:11.483497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.527 13:10:13 -- accel/accel.sh@18 -- # out=' 00:05:58.527 SPDK Configuration: 00:05:58.527 Core mask: 0x1 00:05:58.527 00:05:58.527 Accel Perf Configuration: 00:05:58.527 Workload Type: copy_crc32c 00:05:58.527 CRC-32C seed: 0 00:05:58.527 Vector size: 4096 bytes 00:05:58.527 Transfer size: 4096 bytes 00:05:58.527 Vector count 1 00:05:58.527 Module: software 00:05:58.527 Queue depth: 32 00:05:58.527 Allocate depth: 32 00:05:58.527 # threads/core: 1 00:05:58.527 Run time: 1 seconds 00:05:58.527 Verify: Yes 00:05:58.527 00:05:58.527 Running for 1 seconds... 
00:05:58.527 00:05:58.527 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:58.527 ------------------------------------------------------------------------------------ 00:05:58.527 0,0 311840/s 1218 MiB/s 0 0 00:05:58.527 ==================================================================================== 00:05:58.527 Total 311840/s 1218 MiB/s 0 0' 00:05:58.527 13:10:13 -- accel/accel.sh@20 -- # IFS=: 00:05:58.527 13:10:13 -- accel/accel.sh@20 -- # read -r var val 00:05:58.527 13:10:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:58.527 13:10:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:58.527 13:10:13 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.527 13:10:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.527 13:10:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.527 13:10:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.527 13:10:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.527 13:10:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.527 13:10:13 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.527 13:10:13 -- accel/accel.sh@42 -- # jq -r . 00:05:58.527 [2024-12-16 13:10:13.090260] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:58.527 [2024-12-16 13:10:13.090363] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58736 ] 00:05:58.787 [2024-12-16 13:10:13.237500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.045 [2024-12-16 13:10:13.374332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.045 13:10:13 -- accel/accel.sh@21 -- # val= 00:05:59.045 13:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.045 13:10:13 -- accel/accel.sh@21 -- # val= 00:05:59.045 13:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.045 13:10:13 -- accel/accel.sh@21 -- # val=0x1 00:05:59.045 13:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.045 13:10:13 -- accel/accel.sh@21 -- # val= 00:05:59.045 13:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.045 13:10:13 -- accel/accel.sh@21 -- # val= 00:05:59.045 13:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.045 13:10:13 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:59.045 13:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.045 13:10:13 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.045 13:10:13 -- accel/accel.sh@21 -- # val=0 00:05:59.045 13:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.045 
13:10:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:59.045 13:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.045 13:10:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:59.045 13:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.045 13:10:13 -- accel/accel.sh@21 -- # val= 00:05:59.045 13:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.045 13:10:13 -- accel/accel.sh@21 -- # val=software 00:05:59.045 13:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.045 13:10:13 -- accel/accel.sh@23 -- # accel_module=software 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.045 13:10:13 -- accel/accel.sh@21 -- # val=32 00:05:59.045 13:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.045 13:10:13 -- accel/accel.sh@21 -- # val=32 00:05:59.045 13:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.045 13:10:13 -- accel/accel.sh@21 -- # val=1 00:05:59.045 13:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.045 13:10:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:59.045 13:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.045 13:10:13 -- accel/accel.sh@21 -- # val=Yes 00:05:59.045 13:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.045 13:10:13 -- accel/accel.sh@21 -- # val= 00:05:59.045 13:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.045 13:10:13 -- accel/accel.sh@21 -- # val= 00:05:59.045 13:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.045 13:10:13 -- accel/accel.sh@20 -- # read -r var val 00:06:00.421 13:10:14 -- accel/accel.sh@21 -- # val= 00:06:00.421 13:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.421 13:10:14 -- accel/accel.sh@20 -- # IFS=: 00:06:00.421 13:10:14 -- accel/accel.sh@20 -- # read -r var val 00:06:00.421 13:10:14 -- accel/accel.sh@21 -- # val= 00:06:00.421 13:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.421 13:10:14 -- accel/accel.sh@20 -- # IFS=: 00:06:00.421 13:10:14 -- accel/accel.sh@20 -- # read -r var val 00:06:00.421 13:10:14 -- accel/accel.sh@21 -- # val= 00:06:00.421 13:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.421 13:10:14 -- accel/accel.sh@20 -- # IFS=: 00:06:00.421 13:10:14 -- accel/accel.sh@20 -- # read -r var val 00:06:00.421 13:10:14 -- accel/accel.sh@21 -- # val= 00:06:00.421 13:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.421 13:10:14 -- accel/accel.sh@20 -- # IFS=: 
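
The copy_crc32c passes traced here copy each 4096-byte source buffer and CRC-32C it against seed 0 in one operation (see the 'CRC-32C seed: 0' line in the configuration block above). Standalone repro sketch, under the same assumptions as the fill sketch earlier:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y
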
00:06:00.421 13:10:14 -- accel/accel.sh@20 -- # read -r var val 00:06:00.421 13:10:14 -- accel/accel.sh@21 -- # val= 00:06:00.422 13:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.422 13:10:14 -- accel/accel.sh@20 -- # IFS=: 00:06:00.422 13:10:14 -- accel/accel.sh@20 -- # read -r var val 00:06:00.422 13:10:14 -- accel/accel.sh@21 -- # val= 00:06:00.422 13:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.422 13:10:14 -- accel/accel.sh@20 -- # IFS=: 00:06:00.422 13:10:14 -- accel/accel.sh@20 -- # read -r var val 00:06:00.422 13:10:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:00.422 13:10:14 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:00.422 13:10:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.422 00:06:00.422 real 0m3.790s 00:06:00.422 user 0m3.361s 00:06:00.422 sys 0m0.223s 00:06:00.422 13:10:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.422 ************************************ 00:06:00.422 END TEST accel_copy_crc32c 00:06:00.422 ************************************ 00:06:00.422 13:10:14 -- common/autotest_common.sh@10 -- # set +x 00:06:00.688 13:10:14 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:00.688 13:10:14 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:00.688 13:10:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.688 13:10:14 -- common/autotest_common.sh@10 -- # set +x 00:06:00.688 ************************************ 00:06:00.688 START TEST accel_copy_crc32c_C2 00:06:00.688 ************************************ 00:06:00.688 13:10:14 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:00.688 13:10:14 -- accel/accel.sh@16 -- # local accel_opc 00:06:00.688 13:10:14 -- accel/accel.sh@17 -- # local accel_module 00:06:00.688 13:10:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:00.688 13:10:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:00.688 13:10:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.688 13:10:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.688 13:10:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.688 13:10:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.688 13:10:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.688 13:10:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.688 13:10:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.688 13:10:15 -- accel/accel.sh@42 -- # jq -r . 00:06:00.688 [2024-12-16 13:10:15.032486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
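
The _C2 variant that starts here adds -C 2, which sets the source vector count to 2: each vector is still 4096 bytes, but the transfer size doubles to 8192 bytes, as the configuration block that follows confirms. Direct invocation sketch, same assumptions as before:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2
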
00:06:00.689 [2024-12-16 13:10:15.032587] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58772 ]
00:06:00.969 [2024-12-16 13:10:15.178401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:02.344 [2024-12-16 13:10:15.317907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:02.344 13:10:16 -- accel/accel.sh@18 -- # out='
00:06:02.344 SPDK Configuration:
00:06:02.344 Core mask: 0x1
00:06:02.344 
00:06:02.344 Accel Perf Configuration:
00:06:02.344 Workload Type: copy_crc32c
00:06:02.344 CRC-32C seed: 0
00:06:02.344 Vector size: 4096 bytes
00:06:02.344 Transfer size: 8192 bytes
00:06:02.344 Vector count 2
00:06:02.344 Module: software
00:06:02.344 Queue depth: 32
00:06:02.344 Allocate depth: 32
00:06:02.344 # threads/core: 1
00:06:02.344 Run time: 1 seconds
00:06:02.344 Verify: Yes
00:06:02.344 
00:06:02.344 Running for 1 seconds...
00:06:02.344 
00:06:02.344 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:02.344 ------------------------------------------------------------------------------------
00:06:02.344 0,0 232832/s 1819 MiB/s 0 0
00:06:02.344 ====================================================================================
00:06:02.344 Total 232832/s 1819 MiB/s 0 0'
00:06:02.344 13:10:16 -- accel/accel.sh@20 -- # IFS=:
00:06:02.344 13:10:16 -- accel/accel.sh@20 -- # read -r var val
00:06:02.344 13:10:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:06:02.344 13:10:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:06:02.344 13:10:16 -- accel/accel.sh@12 -- # build_accel_config
00:06:02.344 13:10:16 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:02.344 13:10:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:02.344 13:10:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:02.344 13:10:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:02.344 13:10:16 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:02.344 13:10:16 -- accel/accel.sh@41 -- # local IFS=,
00:06:02.344 13:10:16 -- accel/accel.sh@42 -- # jq -r .
00:06:02.605 [2024-12-16 13:10:16.925530] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
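
Bandwidth in these result tables is the transfer rate times the transfer size, so the -C 2 row above works out to 232832 transfers/s at 8192 bytes each, about 1819 MiB/s. A quick shell check of that arithmetic:

  echo $((232832 * 8192 / 1048576))   # bytes/s -> MiB/s; prints 1819
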
00:06:02.605 [2024-12-16 13:10:16.925640] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58798 ] 00:06:02.605 [2024-12-16 13:10:17.073336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.866 [2024-12-16 13:10:17.250297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.866 13:10:17 -- accel/accel.sh@21 -- # val= 00:06:02.866 13:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.866 13:10:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.866 13:10:17 -- accel/accel.sh@20 -- # read -r var val 00:06:02.866 13:10:17 -- accel/accel.sh@21 -- # val= 00:06:02.866 13:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.866 13:10:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.866 13:10:17 -- accel/accel.sh@20 -- # read -r var val 00:06:02.866 13:10:17 -- accel/accel.sh@21 -- # val=0x1 00:06:02.866 13:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.866 13:10:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.866 13:10:17 -- accel/accel.sh@20 -- # read -r var val 00:06:02.866 13:10:17 -- accel/accel.sh@21 -- # val= 00:06:02.866 13:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.866 13:10:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.866 13:10:17 -- accel/accel.sh@20 -- # read -r var val 00:06:02.867 13:10:17 -- accel/accel.sh@21 -- # val= 00:06:02.867 13:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # read -r var val 00:06:02.867 13:10:17 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:02.867 13:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.867 13:10:17 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # read -r var val 00:06:02.867 13:10:17 -- accel/accel.sh@21 -- # val=0 00:06:02.867 13:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # read -r var val 00:06:02.867 13:10:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:02.867 13:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # read -r var val 00:06:02.867 13:10:17 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:02.867 13:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # read -r var val 00:06:02.867 13:10:17 -- accel/accel.sh@21 -- # val= 00:06:02.867 13:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # read -r var val 00:06:02.867 13:10:17 -- accel/accel.sh@21 -- # val=software 00:06:02.867 13:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.867 13:10:17 -- accel/accel.sh@23 -- # accel_module=software 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # read -r var val 00:06:02.867 13:10:17 -- accel/accel.sh@21 -- # val=32 00:06:02.867 13:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # read -r var val 00:06:02.867 13:10:17 -- accel/accel.sh@21 -- # val=32 
00:06:02.867 13:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # read -r var val 00:06:02.867 13:10:17 -- accel/accel.sh@21 -- # val=1 00:06:02.867 13:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # read -r var val 00:06:02.867 13:10:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:02.867 13:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # read -r var val 00:06:02.867 13:10:17 -- accel/accel.sh@21 -- # val=Yes 00:06:02.867 13:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # read -r var val 00:06:02.867 13:10:17 -- accel/accel.sh@21 -- # val= 00:06:02.867 13:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # read -r var val 00:06:02.867 13:10:17 -- accel/accel.sh@21 -- # val= 00:06:02.867 13:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.867 13:10:17 -- accel/accel.sh@20 -- # read -r var val 00:06:04.775 13:10:18 -- accel/accel.sh@21 -- # val= 00:06:04.775 13:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.775 13:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:04.775 13:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:04.775 13:10:18 -- accel/accel.sh@21 -- # val= 00:06:04.775 13:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.775 13:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:04.775 13:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:04.775 13:10:18 -- accel/accel.sh@21 -- # val= 00:06:04.775 13:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.775 13:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:04.775 13:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:04.775 13:10:18 -- accel/accel.sh@21 -- # val= 00:06:04.775 13:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.775 13:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:04.775 13:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:04.775 13:10:18 -- accel/accel.sh@21 -- # val= 00:06:04.775 13:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.775 13:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:04.775 13:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:04.775 13:10:18 -- accel/accel.sh@21 -- # val= 00:06:04.775 13:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.775 13:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:04.775 13:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:04.775 13:10:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:04.775 13:10:18 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:04.775 13:10:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.775 00:06:04.775 real 0m3.866s 00:06:04.775 user 0m3.412s 00:06:04.775 sys 0m0.241s 00:06:04.775 13:10:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.775 13:10:18 -- common/autotest_common.sh@10 -- # set +x 00:06:04.775 ************************************ 00:06:04.775 END TEST accel_copy_crc32c_C2 00:06:04.775 ************************************ 00:06:04.775 13:10:18 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:04.775 13:10:18 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:06:04.775 13:10:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.775 13:10:18 -- common/autotest_common.sh@10 -- # set +x 00:06:04.775 ************************************ 00:06:04.775 START TEST accel_dualcast 00:06:04.775 ************************************ 00:06:04.775 13:10:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:04.775 13:10:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:04.775 13:10:18 -- accel/accel.sh@17 -- # local accel_module 00:06:04.775 13:10:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:04.775 13:10:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:04.775 13:10:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.775 13:10:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.775 13:10:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.775 13:10:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.775 13:10:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.775 13:10:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.775 13:10:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.775 13:10:18 -- accel/accel.sh@42 -- # jq -r . 00:06:04.775 [2024-12-16 13:10:18.954733] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:04.775 [2024-12-16 13:10:18.954809] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58839 ] 00:06:04.775 [2024-12-16 13:10:19.088643] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.775 [2024-12-16 13:10:19.258318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.675 13:10:20 -- accel/accel.sh@18 -- # out=' 00:06:06.675 SPDK Configuration: 00:06:06.675 Core mask: 0x1 00:06:06.675 00:06:06.675 Accel Perf Configuration: 00:06:06.675 Workload Type: dualcast 00:06:06.675 Transfer size: 4096 bytes 00:06:06.675 Vector count 1 00:06:06.675 Module: software 00:06:06.675 Queue depth: 32 00:06:06.675 Allocate depth: 32 00:06:06.675 # threads/core: 1 00:06:06.675 Run time: 1 seconds 00:06:06.675 Verify: Yes 00:06:06.675 00:06:06.675 Running for 1 seconds... 00:06:06.675 00:06:06.675 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:06.675 ------------------------------------------------------------------------------------ 00:06:06.675 0,0 335648/s 1311 MiB/s 0 0 00:06:06.675 ==================================================================================== 00:06:06.675 Total 335648/s 1311 MiB/s 0 0' 00:06:06.675 13:10:20 -- accel/accel.sh@20 -- # IFS=: 00:06:06.675 13:10:20 -- accel/accel.sh@20 -- # read -r var val 00:06:06.675 13:10:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:06.675 13:10:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:06.675 13:10:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.675 13:10:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.675 13:10:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.675 13:10:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.675 13:10:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.675 13:10:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.675 13:10:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.675 13:10:20 -- accel/accel.sh@42 -- # jq -r . 
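
dualcast writes the same 4096-byte source to two destination buffers per operation. The 1311 MiB/s reported above is computed from the 4096-byte transfer size, so the actual memory written is roughly twice that figure. Repro sketch, same assumptions as the earlier ones:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y
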
00:06:06.675 [2024-12-16 13:10:20.960992] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:06.675 [2024-12-16 13:10:20.961071] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58859 ] 00:06:06.675 [2024-12-16 13:10:21.098674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.675 [2024-12-16 13:10:21.235720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.934 13:10:21 -- accel/accel.sh@21 -- # val= 00:06:06.934 13:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.934 13:10:21 -- accel/accel.sh@21 -- # val= 00:06:06.934 13:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.934 13:10:21 -- accel/accel.sh@21 -- # val=0x1 00:06:06.934 13:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.934 13:10:21 -- accel/accel.sh@21 -- # val= 00:06:06.934 13:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.934 13:10:21 -- accel/accel.sh@21 -- # val= 00:06:06.934 13:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.934 13:10:21 -- accel/accel.sh@21 -- # val=dualcast 00:06:06.934 13:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.934 13:10:21 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.934 13:10:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:06.934 13:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.934 13:10:21 -- accel/accel.sh@21 -- # val= 00:06:06.934 13:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.934 13:10:21 -- accel/accel.sh@21 -- # val=software 00:06:06.934 13:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.934 13:10:21 -- accel/accel.sh@23 -- # accel_module=software 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.934 13:10:21 -- accel/accel.sh@21 -- # val=32 00:06:06.934 13:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.934 13:10:21 -- accel/accel.sh@21 -- # val=32 00:06:06.934 13:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.934 13:10:21 -- accel/accel.sh@21 -- # val=1 00:06:06.934 13:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.934 
13:10:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.934 13:10:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:06.934 13:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.934 13:10:21 -- accel/accel.sh@21 -- # val=Yes 00:06:06.934 13:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.934 13:10:21 -- accel/accel.sh@21 -- # val= 00:06:06.934 13:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.934 13:10:21 -- accel/accel.sh@21 -- # val= 00:06:06.934 13:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.934 13:10:21 -- accel/accel.sh@20 -- # read -r var val 00:06:08.310 13:10:22 -- accel/accel.sh@21 -- # val= 00:06:08.310 13:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.310 13:10:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.310 13:10:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.310 13:10:22 -- accel/accel.sh@21 -- # val= 00:06:08.310 13:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.310 13:10:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.310 13:10:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.310 13:10:22 -- accel/accel.sh@21 -- # val= 00:06:08.310 13:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.310 13:10:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.310 13:10:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.310 13:10:22 -- accel/accel.sh@21 -- # val= 00:06:08.310 13:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.310 13:10:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.310 13:10:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.310 13:10:22 -- accel/accel.sh@21 -- # val= 00:06:08.310 13:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.310 13:10:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.310 13:10:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.310 13:10:22 -- accel/accel.sh@21 -- # val= 00:06:08.310 13:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.310 13:10:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.310 13:10:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.310 13:10:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:08.310 13:10:22 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:08.310 13:10:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.310 00:06:08.310 real 0m3.886s 00:06:08.310 user 0m3.459s 00:06:08.310 sys 0m0.220s 00:06:08.310 ************************************ 00:06:08.310 END TEST accel_dualcast 00:06:08.310 ************************************ 00:06:08.310 13:10:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.310 13:10:22 -- common/autotest_common.sh@10 -- # set +x 00:06:08.310 13:10:22 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:08.310 13:10:22 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:08.310 13:10:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.310 13:10:22 -- common/autotest_common.sh@10 -- # set +x 00:06:08.310 ************************************ 00:06:08.310 START TEST accel_compare 00:06:08.310 ************************************ 00:06:08.310 13:10:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:08.310 
13:10:22 -- accel/accel.sh@16 -- # local accel_opc 00:06:08.310 13:10:22 -- accel/accel.sh@17 -- # local accel_module 00:06:08.310 13:10:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:08.310 13:10:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.310 13:10:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:08.310 13:10:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.310 13:10:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.310 13:10:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.310 13:10:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.310 13:10:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.310 13:10:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.310 13:10:22 -- accel/accel.sh@42 -- # jq -r . 00:06:08.310 [2024-12-16 13:10:22.877906] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.310 [2024-12-16 13:10:22.878009] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58900 ] 00:06:08.568 [2024-12-16 13:10:23.022233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.827 [2024-12-16 13:10:23.160217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.202 13:10:24 -- accel/accel.sh@18 -- # out=' 00:06:10.202 SPDK Configuration: 00:06:10.202 Core mask: 0x1 00:06:10.202 00:06:10.203 Accel Perf Configuration: 00:06:10.203 Workload Type: compare 00:06:10.203 Transfer size: 4096 bytes 00:06:10.203 Vector count 1 00:06:10.203 Module: software 00:06:10.203 Queue depth: 32 00:06:10.203 Allocate depth: 32 00:06:10.203 # threads/core: 1 00:06:10.203 Run time: 1 seconds 00:06:10.203 Verify: Yes 00:06:10.203 00:06:10.203 Running for 1 seconds... 00:06:10.203 00:06:10.203 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:10.203 ------------------------------------------------------------------------------------ 00:06:10.203 0,0 565088/s 2207 MiB/s 0 0 00:06:10.203 ==================================================================================== 00:06:10.203 Total 565088/s 2207 MiB/s 0 0' 00:06:10.203 13:10:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.203 13:10:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.203 13:10:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:10.203 13:10:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:10.203 13:10:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.203 13:10:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.203 13:10:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.203 13:10:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.203 13:10:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.203 13:10:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.203 13:10:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.203 13:10:24 -- accel/accel.sh@42 -- # jq -r . 00:06:10.203 [2024-12-16 13:10:24.764714] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
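
compare performs a byte-wise comparison of two 4096-byte buffers and writes no destination data, which is consistent with it posting one of the highest rates in this log (565088 transfers/s, 2207 MiB/s). Repro sketch, same assumptions:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y
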
00:06:10.203 [2024-12-16 13:10:24.764796] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58926 ] 00:06:10.461 [2024-12-16 13:10:24.905869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.718 [2024-12-16 13:10:25.049278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.718 13:10:25 -- accel/accel.sh@21 -- # val= 00:06:10.718 13:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.718 13:10:25 -- accel/accel.sh@21 -- # val= 00:06:10.718 13:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.718 13:10:25 -- accel/accel.sh@21 -- # val=0x1 00:06:10.718 13:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.718 13:10:25 -- accel/accel.sh@21 -- # val= 00:06:10.718 13:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.718 13:10:25 -- accel/accel.sh@21 -- # val= 00:06:10.718 13:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.718 13:10:25 -- accel/accel.sh@21 -- # val=compare 00:06:10.718 13:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.718 13:10:25 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.718 13:10:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:10.718 13:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.718 13:10:25 -- accel/accel.sh@21 -- # val= 00:06:10.718 13:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.718 13:10:25 -- accel/accel.sh@21 -- # val=software 00:06:10.718 13:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.718 13:10:25 -- accel/accel.sh@23 -- # accel_module=software 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.718 13:10:25 -- accel/accel.sh@21 -- # val=32 00:06:10.718 13:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.718 13:10:25 -- accel/accel.sh@21 -- # val=32 00:06:10.718 13:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.718 13:10:25 -- accel/accel.sh@21 -- # val=1 00:06:10.718 13:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.718 13:10:25 -- accel/accel.sh@21 -- # val='1 seconds' 
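
Every test in this file ends with the same three xtrace'd assertions (they appear again just below for compare): the parsed module name must be non-empty, the parsed opcode must be non-empty, and the module must equal software; the \s\o\f\t\w\a\r\e form in the log is just bash xtrace escaping a literal [[ == ]] match. As a sketch:

  [[ -n $accel_module ]]              # a module was parsed from the report
  [[ -n $accel_opc ]]                 # an opcode was parsed from the report
  [[ $accel_module == "software" ]]   # the software engine handled the run
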
00:06:10.718 13:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.718 13:10:25 -- accel/accel.sh@21 -- # val=Yes 00:06:10.718 13:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.718 13:10:25 -- accel/accel.sh@21 -- # val= 00:06:10.718 13:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.718 13:10:25 -- accel/accel.sh@21 -- # val= 00:06:10.718 13:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.718 13:10:25 -- accel/accel.sh@20 -- # read -r var val 00:06:12.092 13:10:26 -- accel/accel.sh@21 -- # val= 00:06:12.092 13:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.092 13:10:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.092 13:10:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.092 13:10:26 -- accel/accel.sh@21 -- # val= 00:06:12.092 13:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.092 13:10:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.092 13:10:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.092 13:10:26 -- accel/accel.sh@21 -- # val= 00:06:12.092 13:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.092 13:10:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.092 13:10:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.092 13:10:26 -- accel/accel.sh@21 -- # val= 00:06:12.092 13:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.092 13:10:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.092 13:10:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.092 13:10:26 -- accel/accel.sh@21 -- # val= 00:06:12.092 13:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.092 13:10:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.092 13:10:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.092 13:10:26 -- accel/accel.sh@21 -- # val= 00:06:12.092 13:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.092 13:10:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.092 13:10:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.092 13:10:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:12.092 13:10:26 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:12.092 13:10:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.092 00:06:12.092 real 0m3.786s 00:06:12.092 user 0m3.368s 00:06:12.092 sys 0m0.216s 00:06:12.092 13:10:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:12.092 ************************************ 00:06:12.092 END TEST accel_compare 00:06:12.092 ************************************ 00:06:12.092 13:10:26 -- common/autotest_common.sh@10 -- # set +x 00:06:12.092 13:10:26 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:12.092 13:10:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:12.092 13:10:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.092 13:10:26 -- common/autotest_common.sh@10 -- # set +x 00:06:12.350 ************************************ 00:06:12.350 START TEST accel_xor 00:06:12.350 ************************************ 00:06:12.350 13:10:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:12.350 13:10:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.350 13:10:26 -- accel/accel.sh@17 -- # local accel_module 00:06:12.350 
13:10:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:12.350 13:10:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.350 13:10:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:12.350 13:10:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.350 13:10:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.350 13:10:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.350 13:10:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.350 13:10:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.350 13:10:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.350 13:10:26 -- accel/accel.sh@42 -- # jq -r . 00:06:12.350 [2024-12-16 13:10:26.701364] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:12.351 [2024-12-16 13:10:26.701466] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58967 ] 00:06:12.351 [2024-12-16 13:10:26.848608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.610 [2024-12-16 13:10:27.020049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.555 13:10:28 -- accel/accel.sh@18 -- # out=' 00:06:14.555 SPDK Configuration: 00:06:14.555 Core mask: 0x1 00:06:14.555 00:06:14.555 Accel Perf Configuration: 00:06:14.555 Workload Type: xor 00:06:14.555 Source buffers: 2 00:06:14.555 Transfer size: 4096 bytes 00:06:14.555 Vector count 1 00:06:14.555 Module: software 00:06:14.555 Queue depth: 32 00:06:14.555 Allocate depth: 32 00:06:14.555 # threads/core: 1 00:06:14.555 Run time: 1 seconds 00:06:14.555 Verify: Yes 00:06:14.555 00:06:14.555 Running for 1 seconds... 00:06:14.555 00:06:14.555 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:14.555 ------------------------------------------------------------------------------------ 00:06:14.555 0,0 339392/s 1325 MiB/s 0 0 00:06:14.555 ==================================================================================== 00:06:14.555 Total 339392/s 1325 MiB/s 0 0' 00:06:14.555 13:10:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.555 13:10:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.555 13:10:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:14.555 13:10:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:14.555 13:10:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.555 13:10:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.555 13:10:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.555 13:10:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.555 13:10:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.555 13:10:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.555 13:10:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.555 13:10:28 -- accel/accel.sh@42 -- # jq -r . 00:06:14.555 [2024-12-16 13:10:28.768159] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
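
xor with its default two source buffers XORs them into a destination; at 4096-byte transfers the 339392/s above corresponds to the 1325 MiB/s shown. Repro sketch, same assumptions:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y
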
00:06:14.555 [2024-12-16 13:10:28.768239] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58995 ] 00:06:14.555 [2024-12-16 13:10:28.906672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.555 [2024-12-16 13:10:29.043149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.814 13:10:29 -- accel/accel.sh@21 -- # val= 00:06:14.814 13:10:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # read -r var val 00:06:14.814 13:10:29 -- accel/accel.sh@21 -- # val= 00:06:14.814 13:10:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # read -r var val 00:06:14.814 13:10:29 -- accel/accel.sh@21 -- # val=0x1 00:06:14.814 13:10:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # read -r var val 00:06:14.814 13:10:29 -- accel/accel.sh@21 -- # val= 00:06:14.814 13:10:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # read -r var val 00:06:14.814 13:10:29 -- accel/accel.sh@21 -- # val= 00:06:14.814 13:10:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # read -r var val 00:06:14.814 13:10:29 -- accel/accel.sh@21 -- # val=xor 00:06:14.814 13:10:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.814 13:10:29 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # read -r var val 00:06:14.814 13:10:29 -- accel/accel.sh@21 -- # val=2 00:06:14.814 13:10:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # read -r var val 00:06:14.814 13:10:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:14.814 13:10:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # read -r var val 00:06:14.814 13:10:29 -- accel/accel.sh@21 -- # val= 00:06:14.814 13:10:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # read -r var val 00:06:14.814 13:10:29 -- accel/accel.sh@21 -- # val=software 00:06:14.814 13:10:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.814 13:10:29 -- accel/accel.sh@23 -- # accel_module=software 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # read -r var val 00:06:14.814 13:10:29 -- accel/accel.sh@21 -- # val=32 00:06:14.814 13:10:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # read -r var val 00:06:14.814 13:10:29 -- accel/accel.sh@21 -- # val=32 00:06:14.814 13:10:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # read -r var val 00:06:14.814 13:10:29 -- accel/accel.sh@21 -- # val=1 00:06:14.814 13:10:29 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # read -r var val 00:06:14.814 13:10:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:14.814 13:10:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # read -r var val 00:06:14.814 13:10:29 -- accel/accel.sh@21 -- # val=Yes 00:06:14.814 13:10:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # read -r var val 00:06:14.814 13:10:29 -- accel/accel.sh@21 -- # val= 00:06:14.814 13:10:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # read -r var val 00:06:14.814 13:10:29 -- accel/accel.sh@21 -- # val= 00:06:14.814 13:10:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.814 13:10:29 -- accel/accel.sh@20 -- # read -r var val 00:06:16.188 13:10:30 -- accel/accel.sh@21 -- # val= 00:06:16.188 13:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.188 13:10:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.188 13:10:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.188 13:10:30 -- accel/accel.sh@21 -- # val= 00:06:16.188 13:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.188 13:10:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.188 13:10:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.188 13:10:30 -- accel/accel.sh@21 -- # val= 00:06:16.188 13:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.188 13:10:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.189 13:10:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.189 13:10:30 -- accel/accel.sh@21 -- # val= 00:06:16.189 13:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.189 13:10:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.189 13:10:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.189 13:10:30 -- accel/accel.sh@21 -- # val= 00:06:16.189 13:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.189 13:10:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.189 13:10:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.189 13:10:30 -- accel/accel.sh@21 -- # val= 00:06:16.189 13:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.189 13:10:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.189 13:10:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.189 13:10:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:16.189 13:10:30 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:16.189 13:10:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.189 00:06:16.189 real 0m3.948s 00:06:16.189 user 0m3.523s 00:06:16.189 sys 0m0.222s 00:06:16.189 13:10:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.189 ************************************ 00:06:16.189 END TEST accel_xor 00:06:16.189 ************************************ 00:06:16.189 13:10:30 -- common/autotest_common.sh@10 -- # set +x 00:06:16.189 13:10:30 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:16.189 13:10:30 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:16.189 13:10:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.189 13:10:30 -- common/autotest_common.sh@10 -- # set +x 00:06:16.189 ************************************ 00:06:16.189 START TEST accel_xor 00:06:16.189 ************************************ 00:06:16.189 
13:10:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:16.189 13:10:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.189 13:10:30 -- accel/accel.sh@17 -- # local accel_module 00:06:16.189 13:10:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:16.189 13:10:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:16.189 13:10:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.189 13:10:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.189 13:10:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.189 13:10:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.189 13:10:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.189 13:10:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.189 13:10:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.189 13:10:30 -- accel/accel.sh@42 -- # jq -r . 00:06:16.189 [2024-12-16 13:10:30.684724] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:16.189 [2024-12-16 13:10:30.684803] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59031 ] 00:06:16.447 [2024-12-16 13:10:30.826400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.447 [2024-12-16 13:10:30.998619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.349 13:10:32 -- accel/accel.sh@18 -- # out=' 00:06:18.349 SPDK Configuration: 00:06:18.349 Core mask: 0x1 00:06:18.349 00:06:18.349 Accel Perf Configuration: 00:06:18.349 Workload Type: xor 00:06:18.349 Source buffers: 3 00:06:18.349 Transfer size: 4096 bytes 00:06:18.349 Vector count 1 00:06:18.349 Module: software 00:06:18.349 Queue depth: 32 00:06:18.349 Allocate depth: 32 00:06:18.349 # threads/core: 1 00:06:18.349 Run time: 1 seconds 00:06:18.349 Verify: Yes 00:06:18.349 00:06:18.349 Running for 1 seconds... 00:06:18.349 00:06:18.349 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:18.349 ------------------------------------------------------------------------------------ 00:06:18.349 0,0 348800/s 1362 MiB/s 0 0 00:06:18.349 ==================================================================================== 00:06:18.349 Total 348800/s 1362 MiB/s 0 0' 00:06:18.349 13:10:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.349 13:10:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.349 13:10:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:18.349 13:10:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:18.349 13:10:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.349 13:10:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.349 13:10:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.349 13:10:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.349 13:10:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.349 13:10:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.349 13:10:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.349 13:10:32 -- accel/accel.sh@42 -- # jq -r . 00:06:18.349 [2024-12-16 13:10:32.644143] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
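The -x 3 pass above xors three source buffers into the destination instead of the default two, hence Source buffers: 3 in the configuration dump. A minimal sketch of rerunning just this case by hand, assuming the same repo layout as this host; accel.json is a hypothetical stand-in for the JSON config the harness pipes in on /dev/fd/62:

    # Hypothetical standalone rerun; accel.json is a placeholder config file.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c accel.json -t 1 -w xor -y -x 3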
00:06:18.349 [2024-12-16 13:10:32.644221] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59058 ] 00:06:18.349 [2024-12-16 13:10:32.785369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.608 [2024-12-16 13:10:32.925504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.608 13:10:33 -- accel/accel.sh@21 -- # val= 00:06:18.608 13:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # read -r var val 00:06:18.608 13:10:33 -- accel/accel.sh@21 -- # val= 00:06:18.608 13:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # read -r var val 00:06:18.608 13:10:33 -- accel/accel.sh@21 -- # val=0x1 00:06:18.608 13:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # read -r var val 00:06:18.608 13:10:33 -- accel/accel.sh@21 -- # val= 00:06:18.608 13:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # read -r var val 00:06:18.608 13:10:33 -- accel/accel.sh@21 -- # val= 00:06:18.608 13:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # read -r var val 00:06:18.608 13:10:33 -- accel/accel.sh@21 -- # val=xor 00:06:18.608 13:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.608 13:10:33 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # read -r var val 00:06:18.608 13:10:33 -- accel/accel.sh@21 -- # val=3 00:06:18.608 13:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # read -r var val 00:06:18.608 13:10:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:18.608 13:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # read -r var val 00:06:18.608 13:10:33 -- accel/accel.sh@21 -- # val= 00:06:18.608 13:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # read -r var val 00:06:18.608 13:10:33 -- accel/accel.sh@21 -- # val=software 00:06:18.608 13:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.608 13:10:33 -- accel/accel.sh@23 -- # accel_module=software 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # read -r var val 00:06:18.608 13:10:33 -- accel/accel.sh@21 -- # val=32 00:06:18.608 13:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # read -r var val 00:06:18.608 13:10:33 -- accel/accel.sh@21 -- # val=32 00:06:18.608 13:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # read -r var val 00:06:18.608 13:10:33 -- accel/accel.sh@21 -- # val=1 00:06:18.608 13:10:33 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # read -r var val 00:06:18.608 13:10:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:18.608 13:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # read -r var val 00:06:18.608 13:10:33 -- accel/accel.sh@21 -- # val=Yes 00:06:18.608 13:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # read -r var val 00:06:18.608 13:10:33 -- accel/accel.sh@21 -- # val= 00:06:18.608 13:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # read -r var val 00:06:18.608 13:10:33 -- accel/accel.sh@21 -- # val= 00:06:18.608 13:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.608 13:10:33 -- accel/accel.sh@20 -- # read -r var val 00:06:19.985 13:10:34 -- accel/accel.sh@21 -- # val= 00:06:19.985 13:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.985 13:10:34 -- accel/accel.sh@20 -- # IFS=: 00:06:19.985 13:10:34 -- accel/accel.sh@20 -- # read -r var val 00:06:19.985 13:10:34 -- accel/accel.sh@21 -- # val= 00:06:19.985 13:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.985 13:10:34 -- accel/accel.sh@20 -- # IFS=: 00:06:19.985 13:10:34 -- accel/accel.sh@20 -- # read -r var val 00:06:19.985 13:10:34 -- accel/accel.sh@21 -- # val= 00:06:19.985 13:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.985 13:10:34 -- accel/accel.sh@20 -- # IFS=: 00:06:19.985 13:10:34 -- accel/accel.sh@20 -- # read -r var val 00:06:19.985 13:10:34 -- accel/accel.sh@21 -- # val= 00:06:19.985 13:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.985 13:10:34 -- accel/accel.sh@20 -- # IFS=: 00:06:19.985 13:10:34 -- accel/accel.sh@20 -- # read -r var val 00:06:19.985 13:10:34 -- accel/accel.sh@21 -- # val= 00:06:19.985 13:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.985 13:10:34 -- accel/accel.sh@20 -- # IFS=: 00:06:19.985 13:10:34 -- accel/accel.sh@20 -- # read -r var val 00:06:19.985 13:10:34 -- accel/accel.sh@21 -- # val= 00:06:19.985 13:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.985 13:10:34 -- accel/accel.sh@20 -- # IFS=: 00:06:19.985 13:10:34 -- accel/accel.sh@20 -- # read -r var val 00:06:19.985 13:10:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:19.985 13:10:34 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:19.985 13:10:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.985 00:06:19.985 real 0m3.849s 00:06:19.985 user 0m3.409s 00:06:19.985 sys 0m0.228s 00:06:19.985 ************************************ 00:06:19.985 END TEST accel_xor 00:06:19.985 ************************************ 00:06:19.985 13:10:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:19.985 13:10:34 -- common/autotest_common.sh@10 -- # set +x 00:06:19.985 13:10:34 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:19.985 13:10:34 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:19.985 13:10:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.985 13:10:34 -- common/autotest_common.sh@10 -- # set +x 00:06:19.985 ************************************ 00:06:19.985 START TEST accel_dif_verify 00:06:19.985 ************************************ 
00:06:19.985 13:10:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:19.985 13:10:34 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.985 13:10:34 -- accel/accel.sh@17 -- # local accel_module 00:06:19.985 13:10:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:19.985 13:10:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:19.985 13:10:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.985 13:10:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.985 13:10:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.985 13:10:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.985 13:10:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.985 13:10:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.985 13:10:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.985 13:10:34 -- accel/accel.sh@42 -- # jq -r . 00:06:20.243 [2024-12-16 13:10:34.574652] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:20.243 [2024-12-16 13:10:34.574751] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59099 ] 00:06:20.244 [2024-12-16 13:10:34.720827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.501 [2024-12-16 13:10:34.859891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.877 13:10:36 -- accel/accel.sh@18 -- # out=' 00:06:21.877 SPDK Configuration: 00:06:21.877 Core mask: 0x1 00:06:21.877 00:06:21.877 Accel Perf Configuration: 00:06:21.877 Workload Type: dif_verify 00:06:21.877 Vector size: 4096 bytes 00:06:21.877 Transfer size: 4096 bytes 00:06:21.877 Block size: 512 bytes 00:06:21.877 Metadata size: 8 bytes 00:06:21.877 Vector count 1 00:06:21.877 Module: software 00:06:21.877 Queue depth: 32 00:06:21.877 Allocate depth: 32 00:06:21.877 # threads/core: 1 00:06:21.877 Run time: 1 seconds 00:06:21.877 Verify: No 00:06:21.877 00:06:21.877 Running for 1 seconds... 00:06:21.877 00:06:21.877 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:21.877 ------------------------------------------------------------------------------------ 00:06:21.877 0,0 128960/s 503 MiB/s 0 0 00:06:21.877 ==================================================================================== 00:06:21.877 Total 128960/s 503 MiB/s 0 0' 00:06:21.877 13:10:36 -- accel/accel.sh@20 -- # IFS=: 00:06:21.877 13:10:36 -- accel/accel.sh@20 -- # read -r var val 00:06:21.877 13:10:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:21.877 13:10:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:21.877 13:10:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.877 13:10:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.877 13:10:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.877 13:10:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.877 13:10:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.877 13:10:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.877 13:10:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.877 13:10:36 -- accel/accel.sh@42 -- # jq -r . 00:06:22.136 [2024-12-16 13:10:36.475145] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
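For dif_verify each 4096-byte vector is treated as eight 512-byte blocks, each guarded by the 8 bytes of DIF metadata listed in the configuration above, so one transfer checks eight protection tuples and the rate sits well below the raw xor numbers. A hand-run sketch under the same assumptions as before:

    # Hypothetical standalone rerun; accel.json is a placeholder config file.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c accel.json -t 1 -w dif_verify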
00:06:22.136 [2024-12-16 13:10:36.475250] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59127 ] 00:06:22.136 [2024-12-16 13:10:36.622072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.395 [2024-12-16 13:10:36.760396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.395 13:10:36 -- accel/accel.sh@21 -- # val= 00:06:22.395 13:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.395 13:10:36 -- accel/accel.sh@21 -- # val= 00:06:22.395 13:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.395 13:10:36 -- accel/accel.sh@21 -- # val=0x1 00:06:22.395 13:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.395 13:10:36 -- accel/accel.sh@21 -- # val= 00:06:22.395 13:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.395 13:10:36 -- accel/accel.sh@21 -- # val= 00:06:22.395 13:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.395 13:10:36 -- accel/accel.sh@21 -- # val=dif_verify 00:06:22.395 13:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.395 13:10:36 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.395 13:10:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:22.395 13:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.395 13:10:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:22.395 13:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.395 13:10:36 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:22.395 13:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.395 13:10:36 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:22.395 13:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.395 13:10:36 -- accel/accel.sh@21 -- # val= 00:06:22.395 13:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.395 13:10:36 -- accel/accel.sh@21 -- # val=software 00:06:22.395 13:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.395 13:10:36 -- accel/accel.sh@23 -- # accel_module=software 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.395 13:10:36 -- accel/accel.sh@21 
-- # val=32 00:06:22.395 13:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.395 13:10:36 -- accel/accel.sh@21 -- # val=32 00:06:22.395 13:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.395 13:10:36 -- accel/accel.sh@21 -- # val=1 00:06:22.395 13:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.395 13:10:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:22.395 13:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.395 13:10:36 -- accel/accel.sh@21 -- # val=No 00:06:22.395 13:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.395 13:10:36 -- accel/accel.sh@21 -- # val= 00:06:22.395 13:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.395 13:10:36 -- accel/accel.sh@21 -- # val= 00:06:22.395 13:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.395 13:10:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.771 13:10:38 -- accel/accel.sh@21 -- # val= 00:06:23.771 13:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.771 13:10:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.771 13:10:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.771 13:10:38 -- accel/accel.sh@21 -- # val= 00:06:23.771 13:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.771 13:10:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.771 13:10:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.771 13:10:38 -- accel/accel.sh@21 -- # val= 00:06:23.771 13:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.771 13:10:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.771 13:10:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.771 13:10:38 -- accel/accel.sh@21 -- # val= 00:06:23.771 13:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.771 13:10:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.771 13:10:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.771 13:10:38 -- accel/accel.sh@21 -- # val= 00:06:23.771 13:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.771 13:10:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.771 13:10:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.771 13:10:38 -- accel/accel.sh@21 -- # val= 00:06:23.771 13:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.771 13:10:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.771 13:10:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.771 ************************************ 00:06:23.771 END TEST accel_dif_verify 00:06:23.771 ************************************ 00:06:23.771 13:10:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:23.771 13:10:38 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:23.771 13:10:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.771 00:06:23.771 real 0m3.797s 00:06:23.771 user 0m3.377s 00:06:23.771 sys 0m0.218s 00:06:23.771 13:10:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.771 
13:10:38 -- common/autotest_common.sh@10 -- # set +x 00:06:24.030 13:10:38 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:24.030 13:10:38 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:24.030 13:10:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.030 13:10:38 -- common/autotest_common.sh@10 -- # set +x 00:06:24.030 ************************************ 00:06:24.030 START TEST accel_dif_generate 00:06:24.030 ************************************ 00:06:24.030 13:10:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:24.030 13:10:38 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.030 13:10:38 -- accel/accel.sh@17 -- # local accel_module 00:06:24.030 13:10:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:24.030 13:10:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:24.030 13:10:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.030 13:10:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.030 13:10:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.030 13:10:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.030 13:10:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.030 13:10:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.030 13:10:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.030 13:10:38 -- accel/accel.sh@42 -- # jq -r . 00:06:24.030 [2024-12-16 13:10:38.420964] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:24.030 [2024-12-16 13:10:38.421043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59168 ] 00:06:24.030 [2024-12-16 13:10:38.558590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.288 [2024-12-16 13:10:38.697874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.188 13:10:40 -- accel/accel.sh@18 -- # out=' 00:06:26.188 SPDK Configuration: 00:06:26.188 Core mask: 0x1 00:06:26.188 00:06:26.188 Accel Perf Configuration: 00:06:26.188 Workload Type: dif_generate 00:06:26.188 Vector size: 4096 bytes 00:06:26.188 Transfer size: 4096 bytes 00:06:26.188 Block size: 512 bytes 00:06:26.188 Metadata size: 8 bytes 00:06:26.189 Vector count 1 00:06:26.189 Module: software 00:06:26.189 Queue depth: 32 00:06:26.189 Allocate depth: 32 00:06:26.189 # threads/core: 1 00:06:26.189 Run time: 1 seconds 00:06:26.189 Verify: No 00:06:26.189 00:06:26.189 Running for 1 seconds... 
00:06:26.189 00:06:26.189 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:26.189 ------------------------------------------------------------------------------------ 00:06:26.189 0,0 154400/s 603 MiB/s 0 0 00:06:26.189 ==================================================================================== 00:06:26.189 Total 154400/s 603 MiB/s 0 0' 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.189 13:10:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:26.189 13:10:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:26.189 13:10:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.189 13:10:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.189 13:10:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.189 13:10:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.189 13:10:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.189 13:10:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.189 13:10:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.189 13:10:40 -- accel/accel.sh@42 -- # jq -r . 00:06:26.189 [2024-12-16 13:10:40.317333] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:26.189 [2024-12-16 13:10:40.317435] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59188 ] 00:06:26.189 [2024-12-16 13:10:40.465906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.189 [2024-12-16 13:10:40.603857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.189 13:10:40 -- accel/accel.sh@21 -- # val= 00:06:26.189 13:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.189 13:10:40 -- accel/accel.sh@21 -- # val= 00:06:26.189 13:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.189 13:10:40 -- accel/accel.sh@21 -- # val=0x1 00:06:26.189 13:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.189 13:10:40 -- accel/accel.sh@21 -- # val= 00:06:26.189 13:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.189 13:10:40 -- accel/accel.sh@21 -- # val= 00:06:26.189 13:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.189 13:10:40 -- accel/accel.sh@21 -- # val=dif_generate 00:06:26.189 13:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.189 13:10:40 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.189 13:10:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:26.189 13:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # read -r var val
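dif_generate is the producing counterpart of dif_verify: instead of checking existing protection information it computes and inserts the per-block metadata, and without the checking step it posts a higher rate (154400/s here vs 128960/s for dif_verify). Same sketch, different workload:

    # Hypothetical standalone rerun; accel.json is a placeholder config file.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c accel.json -t 1 -w dif_generate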
00:06:26.189 13:10:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:26.189 13:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.189 13:10:40 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:26.189 13:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.189 13:10:40 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:26.189 13:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.189 13:10:40 -- accel/accel.sh@21 -- # val= 00:06:26.189 13:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.189 13:10:40 -- accel/accel.sh@21 -- # val=software 00:06:26.189 13:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.189 13:10:40 -- accel/accel.sh@23 -- # accel_module=software 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.189 13:10:40 -- accel/accel.sh@21 -- # val=32 00:06:26.189 13:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.189 13:10:40 -- accel/accel.sh@21 -- # val=32 00:06:26.189 13:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.189 13:10:40 -- accel/accel.sh@21 -- # val=1 00:06:26.189 13:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.189 13:10:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:26.189 13:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.189 13:10:40 -- accel/accel.sh@21 -- # val=No 00:06:26.189 13:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.189 13:10:40 -- accel/accel.sh@21 -- # val= 00:06:26.189 13:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.189 13:10:40 -- accel/accel.sh@21 -- # val= 00:06:26.189 13:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.189 13:10:40 -- accel/accel.sh@20 -- # read -r var val 00:06:28.098 13:10:42 -- accel/accel.sh@21 -- # val= 00:06:28.098 13:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.098 13:10:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.098 13:10:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.098 13:10:42 -- accel/accel.sh@21 -- # val= 00:06:28.098 13:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.098 13:10:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.098 13:10:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.098 13:10:42 -- accel/accel.sh@21 -- # val= 00:06:28.098 13:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.098 13:10:42 -- 
accel/accel.sh@20 -- # IFS=: 00:06:28.098 13:10:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.098 13:10:42 -- accel/accel.sh@21 -- # val= 00:06:28.098 13:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.098 13:10:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.098 13:10:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.098 13:10:42 -- accel/accel.sh@21 -- # val= 00:06:28.098 13:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.098 13:10:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.098 13:10:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.098 13:10:42 -- accel/accel.sh@21 -- # val= 00:06:28.098 13:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.098 13:10:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.098 13:10:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.098 13:10:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:28.098 13:10:42 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:28.098 ************************************ 00:06:28.098 END TEST accel_dif_generate 00:06:28.098 ************************************ 00:06:28.098 13:10:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.098 00:06:28.098 real 0m3.789s 00:06:28.098 user 0m3.356s 00:06:28.098 sys 0m0.233s 00:06:28.098 13:10:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.098 13:10:42 -- common/autotest_common.sh@10 -- # set +x 00:06:28.098 13:10:42 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:28.098 13:10:42 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:28.098 13:10:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.098 13:10:42 -- common/autotest_common.sh@10 -- # set +x 00:06:28.098 ************************************ 00:06:28.098 START TEST accel_dif_generate_copy 00:06:28.098 ************************************ 00:06:28.098 13:10:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:28.098 13:10:42 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.098 13:10:42 -- accel/accel.sh@17 -- # local accel_module 00:06:28.098 13:10:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:28.098 13:10:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.098 13:10:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.098 13:10:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.098 13:10:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.098 13:10:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.098 13:10:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.098 13:10:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.098 13:10:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:28.098 13:10:42 -- accel/accel.sh@42 -- # jq -r . 00:06:28.098 [2024-12-16 13:10:42.249237] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:28.098 [2024-12-16 13:10:42.249337] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59229 ] 00:06:28.098 [2024-12-16 13:10:42.396815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.098 [2024-12-16 13:10:42.566511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.010 13:10:44 -- accel/accel.sh@18 -- # out=' 00:06:30.010 SPDK Configuration: 00:06:30.010 Core mask: 0x1 00:06:30.010 00:06:30.010 Accel Perf Configuration: 00:06:30.010 Workload Type: dif_generate_copy 00:06:30.010 Vector size: 4096 bytes 00:06:30.010 Transfer size: 4096 bytes 00:06:30.010 Vector count 1 00:06:30.010 Module: software 00:06:30.010 Queue depth: 32 00:06:30.010 Allocate depth: 32 00:06:30.010 # threads/core: 1 00:06:30.010 Run time: 1 seconds 00:06:30.010 Verify: No 00:06:30.010 00:06:30.010 Running for 1 seconds... 00:06:30.010 00:06:30.010 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:30.010 ------------------------------------------------------------------------------------ 00:06:30.010 0,0 89152/s 348 MiB/s 0 0 00:06:30.010 ==================================================================================== 00:06:30.010 Total 89152/s 348 MiB/s 0 0' 00:06:30.010 13:10:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:30.010 13:10:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:30.010 13:10:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.010 13:10:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.010 13:10:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.010 13:10:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.010 13:10:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.010 13:10:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.010 13:10:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.010 13:10:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.010 13:10:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.010 13:10:44 -- accel/accel.sh@42 -- # jq -r . 00:06:30.010 [2024-12-16 13:10:44.271711] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
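dif_generate_copy folds a buffer copy into the generate step, writing a protected copy of the source in one operation; the extra data movement is a plausible reason the rate drops to 89152/s compared with plain dif_generate. Sketch under the same assumptions:

    # Hypothetical standalone rerun; accel.json is a placeholder config file.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c accel.json -t 1 -w dif_generate_copy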
00:06:30.010 [2024-12-16 13:10:44.272163] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59252 ] 00:06:30.010 [2024-12-16 13:10:44.412416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.010 [2024-12-16 13:10:44.549470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.269 13:10:44 -- accel/accel.sh@21 -- # val= 00:06:30.269 13:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.269 13:10:44 -- accel/accel.sh@21 -- # val= 00:06:30.269 13:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.269 13:10:44 -- accel/accel.sh@21 -- # val=0x1 00:06:30.269 13:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.269 13:10:44 -- accel/accel.sh@21 -- # val= 00:06:30.269 13:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.269 13:10:44 -- accel/accel.sh@21 -- # val= 00:06:30.269 13:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.269 13:10:44 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:30.269 13:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.269 13:10:44 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.269 13:10:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:30.269 13:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.269 13:10:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:30.269 13:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.269 13:10:44 -- accel/accel.sh@21 -- # val= 00:06:30.269 13:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.269 13:10:44 -- accel/accel.sh@21 -- # val=software 00:06:30.269 13:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.269 13:10:44 -- accel/accel.sh@23 -- # accel_module=software 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.269 13:10:44 -- accel/accel.sh@21 -- # val=32 00:06:30.269 13:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.269 13:10:44 -- accel/accel.sh@21 -- # val=32 00:06:30.269 13:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.269 13:10:44 -- accel/accel.sh@21 
-- # val=1 00:06:30.269 13:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.269 13:10:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:30.269 13:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.269 13:10:44 -- accel/accel.sh@21 -- # val=No 00:06:30.269 13:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.269 13:10:44 -- accel/accel.sh@21 -- # val= 00:06:30.269 13:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.269 13:10:44 -- accel/accel.sh@21 -- # val= 00:06:30.269 13:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.269 13:10:44 -- accel/accel.sh@20 -- # read -r var val 00:06:31.644 13:10:46 -- accel/accel.sh@21 -- # val= 00:06:31.644 13:10:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.644 13:10:46 -- accel/accel.sh@20 -- # IFS=: 00:06:31.644 13:10:46 -- accel/accel.sh@20 -- # read -r var val 00:06:31.644 13:10:46 -- accel/accel.sh@21 -- # val= 00:06:31.644 13:10:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.644 13:10:46 -- accel/accel.sh@20 -- # IFS=: 00:06:31.644 13:10:46 -- accel/accel.sh@20 -- # read -r var val 00:06:31.644 13:10:46 -- accel/accel.sh@21 -- # val= 00:06:31.644 13:10:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.644 13:10:46 -- accel/accel.sh@20 -- # IFS=: 00:06:31.644 13:10:46 -- accel/accel.sh@20 -- # read -r var val 00:06:31.644 13:10:46 -- accel/accel.sh@21 -- # val= 00:06:31.644 13:10:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.644 13:10:46 -- accel/accel.sh@20 -- # IFS=: 00:06:31.644 13:10:46 -- accel/accel.sh@20 -- # read -r var val 00:06:31.644 13:10:46 -- accel/accel.sh@21 -- # val= 00:06:31.644 13:10:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.644 13:10:46 -- accel/accel.sh@20 -- # IFS=: 00:06:31.644 13:10:46 -- accel/accel.sh@20 -- # read -r var val 00:06:31.644 13:10:46 -- accel/accel.sh@21 -- # val= 00:06:31.644 13:10:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.644 13:10:46 -- accel/accel.sh@20 -- # IFS=: 00:06:31.644 13:10:46 -- accel/accel.sh@20 -- # read -r var val 00:06:31.644 ************************************ 00:06:31.644 END TEST accel_dif_generate_copy 00:06:31.644 ************************************ 00:06:31.644 13:10:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:31.644 13:10:46 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:31.644 13:10:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.644 00:06:31.644 real 0m3.919s 00:06:31.644 user 0m3.503s 00:06:31.644 sys 0m0.212s 00:06:31.644 13:10:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.644 13:10:46 -- common/autotest_common.sh@10 -- # set +x 00:06:31.644 13:10:46 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:31.644 13:10:46 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:31.644 13:10:46 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:31.644 13:10:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.644 13:10:46 -- 
common/autotest_common.sh@10 -- # set +x 00:06:31.644 ************************************ 00:06:31.644 START TEST accel_comp 00:06:31.644 ************************************ 00:06:31.644 13:10:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:31.644 13:10:46 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.644 13:10:46 -- accel/accel.sh@17 -- # local accel_module 00:06:31.644 13:10:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:31.644 13:10:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:31.644 13:10:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.644 13:10:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.644 13:10:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.644 13:10:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.644 13:10:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.644 13:10:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.644 13:10:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.644 13:10:46 -- accel/accel.sh@42 -- # jq -r . 00:06:31.644 [2024-12-16 13:10:46.209168] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:31.644 [2024-12-16 13:10:46.209374] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59291 ] 00:06:31.902 [2024-12-16 13:10:46.354045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.159 [2024-12-16 13:10:46.494322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.534 13:10:48 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:33.534 00:06:33.534 SPDK Configuration: 00:06:33.534 Core mask: 0x1 00:06:33.534 00:06:33.534 Accel Perf Configuration: 00:06:33.534 Workload Type: compress 00:06:33.534 Transfer size: 4096 bytes 00:06:33.534 Vector count 1 00:06:33.534 Module: software 00:06:33.534 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:33.534 Queue depth: 32 00:06:33.534 Allocate depth: 32 00:06:33.534 # threads/core: 1 00:06:33.534 Run time: 1 seconds 00:06:33.534 Verify: No 00:06:33.534 00:06:33.534 Running for 1 seconds... 
00:06:33.534 00:06:33.534 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:33.534 ------------------------------------------------------------------------------------ 00:06:33.534 0,0 64320/s 251 MiB/s 0 0 00:06:33.534 ==================================================================================== 00:06:33.534 Total 64320/s 251 MiB/s 0 0' 00:06:33.534 13:10:48 -- accel/accel.sh@20 -- # IFS=: 00:06:33.534 13:10:48 -- accel/accel.sh@20 -- # read -r var val 00:06:33.534 13:10:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:33.534 13:10:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:33.534 13:10:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.534 13:10:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.534 13:10:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.534 13:10:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.534 13:10:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.534 13:10:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.534 13:10:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.534 13:10:48 -- accel/accel.sh@42 -- # jq -r . 00:06:33.793 [2024-12-16 13:10:48.115248] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:33.793 [2024-12-16 13:10:48.115351] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59317 ] 00:06:34.051 [2024-12-16 13:10:48.262240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.051 [2024-12-16 13:10:48.399319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.051 13:10:48 -- accel/accel.sh@21 -- # val= 00:06:34.051 13:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.051 13:10:48 -- accel/accel.sh@21 -- # val= 00:06:34.051 13:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.051 13:10:48 -- accel/accel.sh@21 -- # val= 00:06:34.051 13:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.051 13:10:48 -- accel/accel.sh@21 -- # val=0x1 00:06:34.051 13:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.051 13:10:48 -- accel/accel.sh@21 -- # val= 00:06:34.051 13:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.051 13:10:48 -- accel/accel.sh@21 -- # val= 00:06:34.051 13:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.051 13:10:48 -- accel/accel.sh@21 -- # val=compress 00:06:34.051 13:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.051 13:10:48 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # IFS=:
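compress is the first workload in this run that needs real input, so -l points accel_perf at the bundled bib corpus, which the software module works through in 4096-byte transfers. Sketch under the same assumptions:

    # Hypothetical standalone rerun; accel.json is a placeholder config file.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c accel.json -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib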
00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.051 13:10:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.051 13:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.051 13:10:48 -- accel/accel.sh@21 -- # val= 00:06:34.051 13:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.051 13:10:48 -- accel/accel.sh@21 -- # val=software 00:06:34.051 13:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.051 13:10:48 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.051 13:10:48 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:34.051 13:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.051 13:10:48 -- accel/accel.sh@21 -- # val=32 00:06:34.051 13:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.051 13:10:48 -- accel/accel.sh@21 -- # val=32 00:06:34.051 13:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.051 13:10:48 -- accel/accel.sh@21 -- # val=1 00:06:34.051 13:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.051 13:10:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.051 13:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.051 13:10:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.052 13:10:48 -- accel/accel.sh@21 -- # val=No 00:06:34.052 13:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.052 13:10:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.052 13:10:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.052 13:10:48 -- accel/accel.sh@21 -- # val= 00:06:34.052 13:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.052 13:10:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.052 13:10:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.052 13:10:48 -- accel/accel.sh@21 -- # val= 00:06:34.052 13:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.052 13:10:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.052 13:10:48 -- accel/accel.sh@20 -- # read -r var val 00:06:35.426 13:10:49 -- accel/accel.sh@21 -- # val= 00:06:35.426 13:10:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.426 13:10:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.426 13:10:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.426 13:10:49 -- accel/accel.sh@21 -- # val= 00:06:35.426 13:10:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.426 13:10:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.426 13:10:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.426 13:10:49 -- accel/accel.sh@21 -- # val= 00:06:35.426 13:10:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.426 13:10:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.426 13:10:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.426 13:10:49 -- accel/accel.sh@21 -- # val= 
00:06:35.426 13:10:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.426 13:10:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.426 13:10:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.426 13:10:49 -- accel/accel.sh@21 -- # val= 00:06:35.426 13:10:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.426 13:10:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.426 13:10:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.426 13:10:49 -- accel/accel.sh@21 -- # val= 00:06:35.426 13:10:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.426 13:10:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.426 13:10:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.426 13:10:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:35.426 13:10:49 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:35.426 13:10:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.426 00:06:35.426 real 0m3.810s 00:06:35.426 user 0m3.373s 00:06:35.426 sys 0m0.229s 00:06:35.426 13:10:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.426 13:10:49 -- common/autotest_common.sh@10 -- # set +x 00:06:35.426 ************************************ 00:06:35.426 END TEST accel_comp 00:06:35.426 ************************************ 00:06:35.684 13:10:50 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:35.684 13:10:50 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:35.684 13:10:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.684 13:10:50 -- common/autotest_common.sh@10 -- # set +x 00:06:35.684 ************************************ 00:06:35.684 START TEST accel_decomp 00:06:35.684 ************************************ 00:06:35.684 13:10:50 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:35.684 13:10:50 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.684 13:10:50 -- accel/accel.sh@17 -- # local accel_module 00:06:35.684 13:10:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:35.684 13:10:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:35.684 13:10:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.684 13:10:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.684 13:10:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.684 13:10:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.684 13:10:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.684 13:10:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.684 13:10:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.684 13:10:50 -- accel/accel.sh@42 -- # jq -r . 00:06:35.684 [2024-12-16 13:10:50.062864] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:35.684 [2024-12-16 13:10:50.062961] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59358 ] 00:06:35.684 [2024-12-16 13:10:50.215205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.942 [2024-12-16 13:10:50.354486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.843 13:10:52 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:37.843 00:06:37.843 SPDK Configuration: 00:06:37.843 Core mask: 0x1 00:06:37.843 00:06:37.843 Accel Perf Configuration: 00:06:37.843 Workload Type: decompress 00:06:37.843 Transfer size: 4096 bytes 00:06:37.843 Vector count 1 00:06:37.843 Module: software 00:06:37.843 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:37.843 Queue depth: 32 00:06:37.843 Allocate depth: 32 00:06:37.843 # threads/core: 1 00:06:37.843 Run time: 1 seconds 00:06:37.843 Verify: Yes 00:06:37.843 00:06:37.843 Running for 1 seconds... 00:06:37.843 00:06:37.843 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:37.843 ------------------------------------------------------------------------------------ 00:06:37.843 0,0 62816/s 115 MiB/s 0 0 00:06:37.843 ==================================================================================== 00:06:37.843 Total 62816/s 245 MiB/s 0 0' 00:06:37.843 13:10:52 -- accel/accel.sh@20 -- # IFS=: 00:06:37.843 13:10:52 -- accel/accel.sh@20 -- # read -r var val 00:06:37.843 13:10:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:37.843 13:10:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:37.843 13:10:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.843 13:10:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.843 13:10:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.843 13:10:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.843 13:10:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.843 13:10:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.843 13:10:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.843 13:10:52 -- accel/accel.sh@42 -- # jq -r . 00:06:37.843 [2024-12-16 13:10:52.116998] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
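Note on the trace pattern: the case "$var" / IFS=: / read -r var val churn that fills this log is accel.sh consuming the "Accel Perf Configuration" banner that accel_perf prints. Each banner line is split on the colon, and fields such as the workload type and module are recorded for the [[ -n software ]] / [[ -n decompress ]] assertions that close every run. A minimal sketch of that pattern, using the variable names the trace shows (illustrative only, not the verbatim accel.sh source):

    # Split each banner line on ':' and record the fields the test asserts on.
    while IFS=: read -r var val; do
        val=${val# }                      # trim the space after the colon
        case "$var" in
            'Workload Type') accel_opc=$val ;;
            'Module')        accel_module=$val ;;
        esac
    done <<< "$out"
    [[ -n $accel_module && -n $accel_opc ]]   # the checks seen after each run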
00:06:37.843 [2024-12-16 13:10:52.117689] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59384 ] 00:06:37.843 [2024-12-16 13:10:52.263947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.101 [2024-12-16 13:10:52.434135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.101 13:10:52 -- accel/accel.sh@21 -- # val= 00:06:38.101 13:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.101 13:10:52 -- accel/accel.sh@21 -- # val= 00:06:38.101 13:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.101 13:10:52 -- accel/accel.sh@21 -- # val= 00:06:38.101 13:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.101 13:10:52 -- accel/accel.sh@21 -- # val=0x1 00:06:38.101 13:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.101 13:10:52 -- accel/accel.sh@21 -- # val= 00:06:38.101 13:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.101 13:10:52 -- accel/accel.sh@21 -- # val= 00:06:38.101 13:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.101 13:10:52 -- accel/accel.sh@21 -- # val=decompress 00:06:38.101 13:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.101 13:10:52 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.101 13:10:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:38.101 13:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.101 13:10:52 -- accel/accel.sh@21 -- # val= 00:06:38.101 13:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.101 13:10:52 -- accel/accel.sh@21 -- # val=software 00:06:38.101 13:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.101 13:10:52 -- accel/accel.sh@23 -- # accel_module=software 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.101 13:10:52 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:38.101 13:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.101 13:10:52 -- accel/accel.sh@21 -- # val=32 00:06:38.101 13:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.101 13:10:52 -- 
accel/accel.sh@21 -- # val=32 00:06:38.101 13:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.101 13:10:52 -- accel/accel.sh@21 -- # val=1 00:06:38.101 13:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.101 13:10:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:38.101 13:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.101 13:10:52 -- accel/accel.sh@21 -- # val=Yes 00:06:38.101 13:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.101 13:10:52 -- accel/accel.sh@21 -- # val= 00:06:38.101 13:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.101 13:10:52 -- accel/accel.sh@21 -- # val= 00:06:38.101 13:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.101 13:10:52 -- accel/accel.sh@20 -- # read -r var val 00:06:40.001 13:10:54 -- accel/accel.sh@21 -- # val= 00:06:40.001 13:10:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.001 13:10:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.001 13:10:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.001 13:10:54 -- accel/accel.sh@21 -- # val= 00:06:40.001 13:10:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.001 13:10:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.001 13:10:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.001 13:10:54 -- accel/accel.sh@21 -- # val= 00:06:40.001 13:10:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.001 13:10:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.001 13:10:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.001 13:10:54 -- accel/accel.sh@21 -- # val= 00:06:40.001 13:10:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.001 13:10:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.001 13:10:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.001 13:10:54 -- accel/accel.sh@21 -- # val= 00:06:40.001 13:10:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.001 13:10:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.001 13:10:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.001 13:10:54 -- accel/accel.sh@21 -- # val= 00:06:40.001 13:10:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.001 13:10:54 -- accel/accel.sh@20 -- # IFS=: 00:06:40.001 13:10:54 -- accel/accel.sh@20 -- # read -r var val 00:06:40.001 ************************************ 00:06:40.001 END TEST accel_decomp 00:06:40.001 ************************************ 00:06:40.001 13:10:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:40.001 13:10:54 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:40.001 13:10:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.001 00:06:40.001 real 0m4.146s 00:06:40.001 user 0m3.696s 00:06:40.001 sys 0m0.236s 00:06:40.001 13:10:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.001 13:10:54 -- common/autotest_common.sh@10 -- # set +x 00:06:40.001 13:10:54 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
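run_test (from common/autotest_common.sh) brackets each accel_test call with the START/END banners and the real/user/sys timing shown above. The accel_decmop_full run that begins here adds -o 0, and its effect is visible in the banner below: "Transfer size: 111250 bytes" instead of the 4096-byte default. Reconstructed invocation, copied from the xtrace that follows (shown for reference, not re-run here):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0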
00:06:40.001 13:10:54 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:40.001 13:10:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.001 13:10:54 -- common/autotest_common.sh@10 -- # set +x 00:06:40.001 ************************************ 00:06:40.001 START TEST accel_decmop_full 00:06:40.001 ************************************ 00:06:40.001 13:10:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:40.001 13:10:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.001 13:10:54 -- accel/accel.sh@17 -- # local accel_module 00:06:40.001 13:10:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:40.001 13:10:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:40.001 13:10:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.001 13:10:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.001 13:10:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.001 13:10:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.001 13:10:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.001 13:10:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.001 13:10:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.001 13:10:54 -- accel/accel.sh@42 -- # jq -r . 00:06:40.001 [2024-12-16 13:10:54.246834] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.001 [2024-12-16 13:10:54.246936] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59425 ] 00:06:40.001 [2024-12-16 13:10:54.392791] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.001 [2024-12-16 13:10:54.532055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.902 13:10:56 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:41.902 00:06:41.902 SPDK Configuration: 00:06:41.902 Core mask: 0x1 00:06:41.902 00:06:41.902 Accel Perf Configuration: 00:06:41.902 Workload Type: decompress 00:06:41.902 Transfer size: 111250 bytes 00:06:41.902 Vector count 1 00:06:41.902 Module: software 00:06:41.902 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:41.902 Queue depth: 32 00:06:41.902 Allocate depth: 32 00:06:41.902 # threads/core: 1 00:06:41.902 Run time: 1 seconds 00:06:41.902 Verify: Yes 00:06:41.902 00:06:41.902 Running for 1 seconds... 
00:06:41.902 00:06:41.902 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:41.902 ------------------------------------------------------------------------------------ 00:06:41.902 0,0 5632/s 232 MiB/s 0 0 00:06:41.902 ==================================================================================== 00:06:41.902 Total 5632/s 597 MiB/s 0 0' 00:06:41.902 13:10:56 -- accel/accel.sh@20 -- # IFS=: 00:06:41.902 13:10:56 -- accel/accel.sh@20 -- # read -r var val 00:06:41.902 13:10:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:41.902 13:10:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:41.902 13:10:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.902 13:10:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.902 13:10:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.902 13:10:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.902 13:10:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.902 13:10:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.902 13:10:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.902 13:10:56 -- accel/accel.sh@42 -- # jq -r . 00:06:41.902 [2024-12-16 13:10:56.153841] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:41.902 [2024-12-16 13:10:56.153945] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59451 ] 00:06:41.902 [2024-12-16 13:10:56.301369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.160 [2024-12-16 13:10:56.484230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.161 13:10:56 -- accel/accel.sh@21 -- # val= 00:06:42.161 13:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.161 13:10:56 -- accel/accel.sh@21 -- # val= 00:06:42.161 13:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.161 13:10:56 -- accel/accel.sh@21 -- # val= 00:06:42.161 13:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.161 13:10:56 -- accel/accel.sh@21 -- # val=0x1 00:06:42.161 13:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.161 13:10:56 -- accel/accel.sh@21 -- # val= 00:06:42.161 13:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.161 13:10:56 -- accel/accel.sh@21 -- # val= 00:06:42.161 13:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.161 13:10:56 -- accel/accel.sh@21 -- # val=decompress 00:06:42.161 13:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.161 13:10:56 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:42.161 13:10:56 -- accel/accel.sh@20 
-- # IFS=: 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.161 13:10:56 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:42.161 13:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.161 13:10:56 -- accel/accel.sh@21 -- # val= 00:06:42.161 13:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.161 13:10:56 -- accel/accel.sh@21 -- # val=software 00:06:42.161 13:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.161 13:10:56 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.161 13:10:56 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:42.161 13:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.161 13:10:56 -- accel/accel.sh@21 -- # val=32 00:06:42.161 13:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.161 13:10:56 -- accel/accel.sh@21 -- # val=32 00:06:42.161 13:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.161 13:10:56 -- accel/accel.sh@21 -- # val=1 00:06:42.161 13:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.161 13:10:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:42.161 13:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.161 13:10:56 -- accel/accel.sh@21 -- # val=Yes 00:06:42.161 13:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.161 13:10:56 -- accel/accel.sh@21 -- # val= 00:06:42.161 13:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.161 13:10:56 -- accel/accel.sh@21 -- # val= 00:06:42.161 13:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.161 13:10:56 -- accel/accel.sh@20 -- # read -r var val 00:06:44.073 13:10:58 -- accel/accel.sh@21 -- # val= 00:06:44.073 13:10:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.073 13:10:58 -- accel/accel.sh@20 -- # IFS=: 00:06:44.073 13:10:58 -- accel/accel.sh@20 -- # read -r var val 00:06:44.073 13:10:58 -- accel/accel.sh@21 -- # val= 00:06:44.073 13:10:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.073 13:10:58 -- accel/accel.sh@20 -- # IFS=: 00:06:44.073 13:10:58 -- accel/accel.sh@20 -- # read -r var val 00:06:44.073 13:10:58 -- accel/accel.sh@21 -- # val= 00:06:44.073 13:10:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.073 13:10:58 -- accel/accel.sh@20 -- # IFS=: 00:06:44.073 13:10:58 -- accel/accel.sh@20 -- # read -r var val 00:06:44.073 13:10:58 -- accel/accel.sh@21 -- # 
val= 00:06:44.073 13:10:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.073 13:10:58 -- accel/accel.sh@20 -- # IFS=: 00:06:44.073 13:10:58 -- accel/accel.sh@20 -- # read -r var val 00:06:44.073 13:10:58 -- accel/accel.sh@21 -- # val= 00:06:44.073 13:10:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.073 13:10:58 -- accel/accel.sh@20 -- # IFS=: 00:06:44.073 13:10:58 -- accel/accel.sh@20 -- # read -r var val 00:06:44.073 13:10:58 -- accel/accel.sh@21 -- # val= 00:06:44.073 13:10:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.073 13:10:58 -- accel/accel.sh@20 -- # IFS=: 00:06:44.073 13:10:58 -- accel/accel.sh@20 -- # read -r var val 00:06:44.073 13:10:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:44.073 13:10:58 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:44.073 13:10:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.073 00:06:44.073 real 0m4.044s 00:06:44.073 user 0m3.590s 00:06:44.073 sys 0m0.238s 00:06:44.073 13:10:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.073 ************************************ 00:06:44.073 END TEST accel_decmop_full 00:06:44.073 13:10:58 -- common/autotest_common.sh@10 -- # set +x 00:06:44.073 ************************************ 00:06:44.073 13:10:58 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:44.073 13:10:58 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:44.073 13:10:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.073 13:10:58 -- common/autotest_common.sh@10 -- # set +x 00:06:44.073 ************************************ 00:06:44.073 START TEST accel_decomp_mcore 00:06:44.073 ************************************ 00:06:44.073 13:10:58 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:44.073 13:10:58 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.073 13:10:58 -- accel/accel.sh@17 -- # local accel_module 00:06:44.073 13:10:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:44.073 13:10:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:44.073 13:10:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.073 13:10:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.073 13:10:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.073 13:10:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.073 13:10:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.073 13:10:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.073 13:10:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.073 13:10:58 -- accel/accel.sh@42 -- # jq -r . 00:06:44.073 [2024-12-16 13:10:58.345428] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
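accel_decomp_mcore passes -m 0xf; below it surfaces as "Core mask: 0xf" in the perf configuration, as -c 0xf in the DPDK EAL parameters, and as four reactor start-up notices (cores 0-3). A small sketch of composing such a mask, assuming the usual one-bit-per-core encoding:

    # 0xf == 0b1111 -> cores 0,1,2,3 (one bit per core).
    mask=0
    for core in 0 1 2 3; do
        mask=$(( mask | (1 << core) ))
    done
    printf -- '-m 0x%x\n' "$mask"   # prints: -m 0xf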
00:06:44.073 [2024-12-16 13:10:58.345539] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59492 ] 00:06:44.073 [2024-12-16 13:10:58.495566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:44.333 [2024-12-16 13:10:58.678510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.333 [2024-12-16 13:10:58.678818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.333 [2024-12-16 13:10:58.679233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.333 [2024-12-16 13:10:58.679462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.240 13:11:00 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:46.240 00:06:46.240 SPDK Configuration: 00:06:46.240 Core mask: 0xf 00:06:46.240 00:06:46.240 Accel Perf Configuration: 00:06:46.240 Workload Type: decompress 00:06:46.240 Transfer size: 4096 bytes 00:06:46.240 Vector count 1 00:06:46.240 Module: software 00:06:46.240 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:46.240 Queue depth: 32 00:06:46.240 Allocate depth: 32 00:06:46.240 # threads/core: 1 00:06:46.240 Run time: 1 seconds 00:06:46.240 Verify: Yes 00:06:46.240 00:06:46.240 Running for 1 seconds... 00:06:46.240 00:06:46.240 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:46.240 ------------------------------------------------------------------------------------ 00:06:46.240 0,0 57760/s 106 MiB/s 0 0 00:06:46.240 3,0 58240/s 107 MiB/s 0 0 00:06:46.240 2,0 58176/s 107 MiB/s 0 0 00:06:46.240 1,0 57728/s 106 MiB/s 0 0 00:06:46.240 ==================================================================================== 00:06:46.240 Total 231904/s 905 MiB/s 0 0' 00:06:46.240 13:11:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:46.240 13:11:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.240 13:11:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.240 13:11:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:46.240 13:11:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.240 13:11:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.240 13:11:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.240 13:11:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.240 13:11:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.240 13:11:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.240 13:11:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.240 13:11:00 -- accel/accel.sh@42 -- # jq -r . 00:06:46.240 [2024-12-16 13:11:00.432831] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
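Each data row in the table above is one (core,thread) pair, and the Total row is the sum of the per-core transfer rates: 57760 + 58240 + 58176 + 57728 = 231904/s. A one-line check:

    echo $(( 57760 + 58240 + 58176 + 57728 ))   # 231904, matching the Total row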
00:06:46.240 [2024-12-16 13:11:00.433046] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59521 ] 00:06:46.240 [2024-12-16 13:11:00.579009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.240 [2024-12-16 13:11:00.731225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.240 [2024-12-16 13:11:00.731426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.240 [2024-12-16 13:11:00.731727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.240 [2024-12-16 13:11:00.731760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.498 13:11:00 -- accel/accel.sh@21 -- # val= 00:06:46.498 13:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.498 13:11:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.498 13:11:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.498 13:11:00 -- accel/accel.sh@21 -- # val= 00:06:46.498 13:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.498 13:11:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.498 13:11:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.498 13:11:00 -- accel/accel.sh@21 -- # val= 00:06:46.498 13:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.498 13:11:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.498 13:11:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.498 13:11:00 -- accel/accel.sh@21 -- # val=0xf 00:06:46.499 13:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.499 13:11:00 -- accel/accel.sh@21 -- # val= 00:06:46.499 13:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.499 13:11:00 -- accel/accel.sh@21 -- # val= 00:06:46.499 13:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.499 13:11:00 -- accel/accel.sh@21 -- # val=decompress 00:06:46.499 13:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.499 13:11:00 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.499 13:11:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.499 13:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.499 13:11:00 -- accel/accel.sh@21 -- # val= 00:06:46.499 13:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.499 13:11:00 -- accel/accel.sh@21 -- # val=software 00:06:46.499 13:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.499 13:11:00 -- accel/accel.sh@23 -- # accel_module=software 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.499 13:11:00 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:46.499 13:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # IFS=: 
00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.499 13:11:00 -- accel/accel.sh@21 -- # val=32 00:06:46.499 13:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.499 13:11:00 -- accel/accel.sh@21 -- # val=32 00:06:46.499 13:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.499 13:11:00 -- accel/accel.sh@21 -- # val=1 00:06:46.499 13:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.499 13:11:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:46.499 13:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.499 13:11:00 -- accel/accel.sh@21 -- # val=Yes 00:06:46.499 13:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.499 13:11:00 -- accel/accel.sh@21 -- # val= 00:06:46.499 13:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.499 13:11:00 -- accel/accel.sh@21 -- # val= 00:06:46.499 13:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.499 13:11:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.871 13:11:02 -- accel/accel.sh@21 -- # val= 00:06:47.871 13:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.871 13:11:02 -- accel/accel.sh@20 -- # IFS=: 00:06:47.871 13:11:02 -- accel/accel.sh@20 -- # read -r var val 00:06:47.871 13:11:02 -- accel/accel.sh@21 -- # val= 00:06:47.871 13:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.871 13:11:02 -- accel/accel.sh@20 -- # IFS=: 00:06:47.871 13:11:02 -- accel/accel.sh@20 -- # read -r var val 00:06:47.871 13:11:02 -- accel/accel.sh@21 -- # val= 00:06:47.871 13:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.871 13:11:02 -- accel/accel.sh@20 -- # IFS=: 00:06:47.871 13:11:02 -- accel/accel.sh@20 -- # read -r var val 00:06:47.871 13:11:02 -- accel/accel.sh@21 -- # val= 00:06:47.871 13:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.871 13:11:02 -- accel/accel.sh@20 -- # IFS=: 00:06:47.871 13:11:02 -- accel/accel.sh@20 -- # read -r var val 00:06:47.871 13:11:02 -- accel/accel.sh@21 -- # val= 00:06:47.871 13:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.871 13:11:02 -- accel/accel.sh@20 -- # IFS=: 00:06:47.871 13:11:02 -- accel/accel.sh@20 -- # read -r var val 00:06:47.871 13:11:02 -- accel/accel.sh@21 -- # val= 00:06:47.871 13:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.871 13:11:02 -- accel/accel.sh@20 -- # IFS=: 00:06:47.871 13:11:02 -- accel/accel.sh@20 -- # read -r var val 00:06:47.871 13:11:02 -- accel/accel.sh@21 -- # val= 00:06:47.871 13:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.871 13:11:02 -- accel/accel.sh@20 -- # IFS=: 00:06:47.871 13:11:02 -- accel/accel.sh@20 -- # read -r var val 00:06:47.871 13:11:02 -- accel/accel.sh@21 -- # val= 00:06:47.871 13:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.871 13:11:02 -- accel/accel.sh@20 -- # IFS=: 00:06:47.871 13:11:02 -- 
accel/accel.sh@20 -- # read -r var val 00:06:47.871 13:11:02 -- accel/accel.sh@21 -- # val= 00:06:47.871 13:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.871 13:11:02 -- accel/accel.sh@20 -- # IFS=: 00:06:47.871 13:11:02 -- accel/accel.sh@20 -- # read -r var val 00:06:47.871 13:11:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.871 13:11:02 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:47.871 13:11:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.871 00:06:47.871 real 0m4.022s 00:06:47.871 user 0m12.128s 00:06:47.871 sys 0m0.268s 00:06:47.871 13:11:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.871 13:11:02 -- common/autotest_common.sh@10 -- # set +x 00:06:47.871 ************************************ 00:06:47.871 END TEST accel_decomp_mcore 00:06:47.871 ************************************ 00:06:47.871 13:11:02 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:47.871 13:11:02 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:47.871 13:11:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.871 13:11:02 -- common/autotest_common.sh@10 -- # set +x 00:06:47.872 ************************************ 00:06:47.872 START TEST accel_decomp_full_mcore 00:06:47.872 ************************************ 00:06:47.872 13:11:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:47.872 13:11:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.872 13:11:02 -- accel/accel.sh@17 -- # local accel_module 00:06:47.872 13:11:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:47.872 13:11:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:47.872 13:11:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.872 13:11:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.872 13:11:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.872 13:11:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.872 13:11:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.872 13:11:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.872 13:11:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.872 13:11:02 -- accel/accel.sh@42 -- # jq -r . 00:06:47.872 [2024-12-16 13:11:02.423544] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:47.872 [2024-12-16 13:11:02.423659] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59565 ] 00:06:48.172 [2024-12-16 13:11:02.570206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.172 [2024-12-16 13:11:02.711125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.172 [2024-12-16 13:11:02.711356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.172 [2024-12-16 13:11:02.711495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.432 [2024-12-16 13:11:02.711590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.808 13:11:04 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:49.808 00:06:49.808 SPDK Configuration: 00:06:49.808 Core mask: 0xf 00:06:49.808 00:06:49.808 Accel Perf Configuration: 00:06:49.808 Workload Type: decompress 00:06:49.808 Transfer size: 111250 bytes 00:06:49.808 Vector count 1 00:06:49.808 Module: software 00:06:49.808 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:49.808 Queue depth: 32 00:06:49.808 Allocate depth: 32 00:06:49.808 # threads/core: 1 00:06:49.808 Run time: 1 seconds 00:06:49.808 Verify: Yes 00:06:49.808 00:06:49.808 Running for 1 seconds... 00:06:49.808 00:06:49.808 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:49.808 ------------------------------------------------------------------------------------ 00:06:49.808 0,0 5632/s 232 MiB/s 0 0 00:06:49.808 3,0 4320/s 178 MiB/s 0 0 00:06:49.808 2,0 4320/s 178 MiB/s 0 0 00:06:49.808 1,0 4320/s 178 MiB/s 0 0 00:06:49.808 ==================================================================================== 00:06:49.808 Total 18592/s 1972 MiB/s 0 0' 00:06:49.808 13:11:04 -- accel/accel.sh@20 -- # IFS=: 00:06:49.808 13:11:04 -- accel/accel.sh@20 -- # read -r var val 00:06:49.808 13:11:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:49.808 13:11:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:49.808 13:11:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.808 13:11:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.808 13:11:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.808 13:11:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.808 13:11:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.808 13:11:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.808 13:11:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.808 13:11:04 -- accel/accel.sh@42 -- # jq -r . 00:06:49.808 [2024-12-16 13:11:04.369403] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
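The Failed and Miscompares columns stay at 0 in every run here; these decompress tests pass -y, which the banner reports as "Verify: Yes", and a nonzero cell would mean corrupted output. A hedged helper for scanning a saved copy of such a table (table.txt is hypothetical, with the log timestamps stripped):

    # Print any data row whose Failed or Miscompares column is nonzero.
    awk '/MiB\/s/ && ($(NF-1) != 0 || $NF != 0) { print "FAIL:", $0 }' table.txt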
00:06:49.808 [2024-12-16 13:11:04.369508] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59594 ] 00:06:50.068 [2024-12-16 13:11:04.514296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:50.326 [2024-12-16 13:11:04.657846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.326 [2024-12-16 13:11:04.658140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.326 [2024-12-16 13:11:04.658389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.326 [2024-12-16 13:11:04.658396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.326 13:11:04 -- accel/accel.sh@21 -- # val= 00:06:50.326 13:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.326 13:11:04 -- accel/accel.sh@21 -- # val= 00:06:50.326 13:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.326 13:11:04 -- accel/accel.sh@21 -- # val= 00:06:50.326 13:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.326 13:11:04 -- accel/accel.sh@21 -- # val=0xf 00:06:50.326 13:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.326 13:11:04 -- accel/accel.sh@21 -- # val= 00:06:50.326 13:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.326 13:11:04 -- accel/accel.sh@21 -- # val= 00:06:50.326 13:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.326 13:11:04 -- accel/accel.sh@21 -- # val=decompress 00:06:50.326 13:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.326 13:11:04 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.326 13:11:04 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:50.326 13:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.326 13:11:04 -- accel/accel.sh@21 -- # val= 00:06:50.326 13:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.326 13:11:04 -- accel/accel.sh@21 -- # val=software 00:06:50.326 13:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.326 13:11:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.326 13:11:04 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:50.326 13:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # IFS=: 
00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.326 13:11:04 -- accel/accel.sh@21 -- # val=32 00:06:50.326 13:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.326 13:11:04 -- accel/accel.sh@21 -- # val=32 00:06:50.326 13:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.326 13:11:04 -- accel/accel.sh@21 -- # val=1 00:06:50.326 13:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.326 13:11:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.326 13:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.326 13:11:04 -- accel/accel.sh@21 -- # val=Yes 00:06:50.326 13:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.326 13:11:04 -- accel/accel.sh@21 -- # val= 00:06:50.326 13:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.326 13:11:04 -- accel/accel.sh@21 -- # val= 00:06:50.326 13:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.326 13:11:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.700 13:11:06 -- accel/accel.sh@21 -- # val= 00:06:51.700 13:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.700 13:11:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.700 13:11:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.700 13:11:06 -- accel/accel.sh@21 -- # val= 00:06:51.700 13:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.700 13:11:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.700 13:11:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.700 13:11:06 -- accel/accel.sh@21 -- # val= 00:06:51.700 13:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.701 13:11:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.701 13:11:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.701 13:11:06 -- accel/accel.sh@21 -- # val= 00:06:51.701 13:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.701 13:11:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.701 13:11:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.701 13:11:06 -- accel/accel.sh@21 -- # val= 00:06:51.701 13:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.701 13:11:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.701 13:11:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.701 13:11:06 -- accel/accel.sh@21 -- # val= 00:06:51.701 13:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.701 13:11:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.701 13:11:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.701 13:11:06 -- accel/accel.sh@21 -- # val= 00:06:51.701 13:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.701 13:11:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.701 13:11:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.701 13:11:06 -- accel/accel.sh@21 -- # val= 00:06:51.701 13:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.701 13:11:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.701 13:11:06 -- 
accel/accel.sh@20 -- # read -r var val 00:06:51.701 13:11:06 -- accel/accel.sh@21 -- # val= 00:06:51.701 13:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.701 13:11:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.701 13:11:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.959 13:11:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:51.959 13:11:06 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:51.959 13:11:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.959 00:06:51.959 real 0m3.885s 00:06:51.959 user 0m11.817s 00:06:51.959 sys 0m0.267s 00:06:51.959 13:11:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.959 13:11:06 -- common/autotest_common.sh@10 -- # set +x 00:06:51.959 ************************************ 00:06:51.959 END TEST accel_decomp_full_mcore 00:06:51.959 ************************************ 00:06:51.959 13:11:06 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:51.959 13:11:06 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:51.959 13:11:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.959 13:11:06 -- common/autotest_common.sh@10 -- # set +x 00:06:51.959 ************************************ 00:06:51.959 START TEST accel_decomp_mthread 00:06:51.959 ************************************ 00:06:51.959 13:11:06 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:51.959 13:11:06 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.959 13:11:06 -- accel/accel.sh@17 -- # local accel_module 00:06:51.959 13:11:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:51.959 13:11:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.959 13:11:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:51.959 13:11:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.959 13:11:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.959 13:11:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.959 13:11:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.959 13:11:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.959 13:11:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.959 13:11:06 -- accel/accel.sh@42 -- # jq -r . 00:06:51.959 [2024-12-16 13:11:06.351107] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:51.959 [2024-12-16 13:11:06.351187] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59638 ] 00:06:51.960 [2024-12-16 13:11:06.493166] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.218 [2024-12-16 13:11:06.634380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.118 13:11:08 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:54.118 00:06:54.118 SPDK Configuration: 00:06:54.118 Core mask: 0x1 00:06:54.118 00:06:54.118 Accel Perf Configuration: 00:06:54.118 Workload Type: decompress 00:06:54.118 Transfer size: 4096 bytes 00:06:54.118 Vector count 1 00:06:54.118 Module: software 00:06:54.118 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:54.118 Queue depth: 32 00:06:54.118 Allocate depth: 32 00:06:54.118 # threads/core: 2 00:06:54.118 Run time: 1 seconds 00:06:54.118 Verify: Yes 00:06:54.118 00:06:54.118 Running for 1 seconds... 00:06:54.118 00:06:54.118 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.118 ------------------------------------------------------------------------------------ 00:06:54.118 0,1 41120/s 75 MiB/s 0 0 00:06:54.118 0,0 41056/s 75 MiB/s 0 0 00:06:54.118 ==================================================================================== 00:06:54.118 Total 82176/s 321 MiB/s 0 0' 00:06:54.118 13:11:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.118 13:11:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:54.118 13:11:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.118 13:11:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:54.118 13:11:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.118 13:11:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.118 13:11:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.118 13:11:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.118 13:11:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.118 13:11:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.118 13:11:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.118 13:11:08 -- accel/accel.sh@42 -- # jq -r . 00:06:54.118 [2024-12-16 13:11:08.248601] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
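With -T 2 ("# threads/core: 2" above), the row labels 0,0 and 0,1 are core 0's two worker threads, and the Total row again sums them: 41120 + 41056 = 82176/s. A sketch that aggregates per-thread rows into per-core totals (table.txt as above, hypothetical):

    # Field 1 is the core and field 3 the transfers/s once a row is split
    # on spaces, commas and slashes.
    awk -F'[ ,/]+' '/^[0-9]+,[0-9]+ /{ sum[$1] += $3 }
        END { for (c in sum) print "core " c ": " sum[c] "/s" }' table.txt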
00:06:54.118 [2024-12-16 13:11:08.248757] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59659 ] 00:06:54.118 [2024-12-16 13:11:08.385769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.118 [2024-12-16 13:11:08.558217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.380 13:11:08 -- accel/accel.sh@21 -- # val= 00:06:54.380 13:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.380 13:11:08 -- accel/accel.sh@21 -- # val= 00:06:54.380 13:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.380 13:11:08 -- accel/accel.sh@21 -- # val= 00:06:54.380 13:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.380 13:11:08 -- accel/accel.sh@21 -- # val=0x1 00:06:54.380 13:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.380 13:11:08 -- accel/accel.sh@21 -- # val= 00:06:54.380 13:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.380 13:11:08 -- accel/accel.sh@21 -- # val= 00:06:54.380 13:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.380 13:11:08 -- accel/accel.sh@21 -- # val=decompress 00:06:54.380 13:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.380 13:11:08 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.380 13:11:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.380 13:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.380 13:11:08 -- accel/accel.sh@21 -- # val= 00:06:54.380 13:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.380 13:11:08 -- accel/accel.sh@21 -- # val=software 00:06:54.380 13:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.380 13:11:08 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.380 13:11:08 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:54.380 13:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.380 13:11:08 -- accel/accel.sh@21 -- # val=32 00:06:54.380 13:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.380 13:11:08 -- 
accel/accel.sh@21 -- # val=32 00:06:54.380 13:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.380 13:11:08 -- accel/accel.sh@21 -- # val=2 00:06:54.380 13:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.380 13:11:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.380 13:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.380 13:11:08 -- accel/accel.sh@21 -- # val=Yes 00:06:54.380 13:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.380 13:11:08 -- accel/accel.sh@21 -- # val= 00:06:54.380 13:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.380 13:11:08 -- accel/accel.sh@21 -- # val= 00:06:54.380 13:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.380 13:11:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.829 13:11:10 -- accel/accel.sh@21 -- # val= 00:06:55.829 13:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.829 13:11:10 -- accel/accel.sh@20 -- # IFS=: 00:06:55.829 13:11:10 -- accel/accel.sh@20 -- # read -r var val 00:06:55.829 13:11:10 -- accel/accel.sh@21 -- # val= 00:06:55.829 13:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.829 13:11:10 -- accel/accel.sh@20 -- # IFS=: 00:06:55.829 13:11:10 -- accel/accel.sh@20 -- # read -r var val 00:06:55.829 13:11:10 -- accel/accel.sh@21 -- # val= 00:06:55.829 13:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.829 13:11:10 -- accel/accel.sh@20 -- # IFS=: 00:06:55.829 13:11:10 -- accel/accel.sh@20 -- # read -r var val 00:06:55.829 13:11:10 -- accel/accel.sh@21 -- # val= 00:06:55.829 13:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.829 13:11:10 -- accel/accel.sh@20 -- # IFS=: 00:06:55.829 13:11:10 -- accel/accel.sh@20 -- # read -r var val 00:06:55.829 13:11:10 -- accel/accel.sh@21 -- # val= 00:06:55.829 13:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.829 13:11:10 -- accel/accel.sh@20 -- # IFS=: 00:06:55.829 13:11:10 -- accel/accel.sh@20 -- # read -r var val 00:06:55.829 13:11:10 -- accel/accel.sh@21 -- # val= 00:06:55.829 13:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.829 13:11:10 -- accel/accel.sh@20 -- # IFS=: 00:06:55.829 13:11:10 -- accel/accel.sh@20 -- # read -r var val 00:06:55.829 13:11:10 -- accel/accel.sh@21 -- # val= 00:06:55.829 13:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.829 13:11:10 -- accel/accel.sh@20 -- # IFS=: 00:06:55.829 13:11:10 -- accel/accel.sh@20 -- # read -r var val 00:06:55.829 13:11:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.829 13:11:10 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:55.829 13:11:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.829 00:06:55.829 real 0m3.883s 00:06:55.829 user 0m3.443s 00:06:55.829 sys 0m0.242s 00:06:55.829 13:11:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:55.829 13:11:10 -- common/autotest_common.sh@10 -- # set +x 00:06:55.829 ************************************ 00:06:55.829 END 
TEST accel_decomp_mthread
00:06:55.829 ************************************
00:06:55.829 13:11:10 -- accel/accel.sh@114 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:06:55.829 13:11:10 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:06:55.829 13:11:10 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:55.829 13:11:10 -- common/autotest_common.sh@10 -- # set +x
00:06:55.829 ************************************
00:06:55.829 START TEST accel_decomp_full_mthread
00:06:55.829 ************************************
00:06:55.829 13:11:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:06:55.829 13:11:10 -- accel/accel.sh@16 -- # local accel_opc
00:06:55.829 13:11:10 -- accel/accel.sh@17 -- # local accel_module
00:06:55.829 13:11:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:06:55.829 13:11:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:06:55.829 13:11:10 -- accel/accel.sh@12 -- # build_accel_config
00:06:55.829 13:11:10 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:55.829 13:11:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:55.829 13:11:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:55.829 13:11:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:55.829 13:11:10 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:55.829 13:11:10 -- accel/accel.sh@41 -- # local IFS=,
00:06:55.829 13:11:10 -- accel/accel.sh@42 -- # jq -r .
00:06:55.829 [2024-12-16 13:11:10.291839] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:55.829 [2024-12-16 13:11:10.292049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59700 ]
00:06:56.091 [2024-12-16 13:11:10.440404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:56.091 [2024-12-16 13:11:10.616163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:58.000 13:11:12 -- accel/accel.sh@18 -- # out='Preparing input file...
00:06:58.000
00:06:58.000 SPDK Configuration:
00:06:58.000 Core mask: 0x1
00:06:58.000
00:06:58.000 Accel Perf Configuration:
00:06:58.000 Workload Type: decompress
00:06:58.000 Transfer size: 111250 bytes
00:06:58.000 Vector count 1
00:06:58.000 Module: software
00:06:58.000 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:58.001 Queue depth: 32
00:06:58.001 Allocate depth: 32
00:06:58.001 # threads/core: 2
00:06:58.001 Run time: 1 seconds
00:06:58.001 Verify: Yes
00:06:58.001
00:06:58.001 Running for 1 seconds...
00:06:58.001
00:06:58.001 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:58.001 ------------------------------------------------------------------------------------
00:06:58.001 0,1 2240/s 92 MiB/s 0 0
00:06:58.001 0,0 2176/s 89 MiB/s 0 0
00:06:58.001 ====================================================================================
00:06:58.001 Total 4416/s 468 MiB/s 0 0'
00:06:58.001 13:11:12 -- accel/accel.sh@20 -- # IFS=:
00:06:58.001 13:11:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:06:58.001 13:11:12 -- accel/accel.sh@20 -- # read -r var val
00:06:58.001 13:11:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:06:58.001 13:11:12 -- accel/accel.sh@12 -- # build_accel_config
00:06:58.001 13:11:12 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:58.001 13:11:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:58.001 13:11:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:58.001 13:11:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:58.001 13:11:12 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:58.001 13:11:12 -- accel/accel.sh@41 -- # local IFS=,
00:06:58.001 13:11:12 -- accel/accel.sh@42 -- # jq -r .
00:06:58.001 [2024-12-16 13:11:12.348688] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:58.001 [2024-12-16 13:11:12.348784] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59726 ]
00:06:58.001 [2024-12-16 13:11:12.493976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:58.259 [2024-12-16 13:11:12.632938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:58.259 13:11:12 -- accel/accel.sh@21 -- # val=
00:06:58.259 13:11:12 -- accel/accel.sh@22 -- # case "$var" in
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # IFS=:
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # read -r var val
00:06:58.259 13:11:12 -- accel/accel.sh@21 -- # val=
00:06:58.259 13:11:12 -- accel/accel.sh@22 -- # case "$var" in
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # IFS=:
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # read -r var val
00:06:58.259 13:11:12 -- accel/accel.sh@21 -- # val=
00:06:58.259 13:11:12 -- accel/accel.sh@22 -- # case "$var" in
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # IFS=:
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # read -r var val
00:06:58.259 13:11:12 -- accel/accel.sh@21 -- # val=0x1
00:06:58.259 13:11:12 -- accel/accel.sh@22 -- # case "$var" in
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # IFS=:
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # read -r var val
00:06:58.259 13:11:12 -- accel/accel.sh@21 -- # val=
00:06:58.259 13:11:12 -- accel/accel.sh@22 -- # case "$var" in
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # IFS=:
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # read -r var val
00:06:58.259 13:11:12 -- accel/accel.sh@21 -- # val=
00:06:58.259 13:11:12 -- accel/accel.sh@22 -- # case "$var" in
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # IFS=:
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # read -r var val
00:06:58.259 13:11:12 -- accel/accel.sh@21 -- # val=decompress
00:06:58.259 13:11:12 -- accel/accel.sh@22 -- # case "$var" in
00:06:58.259 13:11:12 -- accel/accel.sh@24 -- # accel_opc=decompress
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # IFS=:
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # read -r var val
00:06:58.259 13:11:12 -- accel/accel.sh@21 -- # val='111250 bytes'
00:06:58.259 13:11:12 -- accel/accel.sh@22 -- # case "$var" in
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # IFS=:
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # read -r var val
00:06:58.259 13:11:12 -- accel/accel.sh@21 -- # val=
00:06:58.259 13:11:12 -- accel/accel.sh@22 -- # case "$var" in
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # IFS=:
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # read -r var val
00:06:58.259 13:11:12 -- accel/accel.sh@21 -- # val=software
00:06:58.259 13:11:12 -- accel/accel.sh@22 -- # case "$var" in
00:06:58.259 13:11:12 -- accel/accel.sh@23 -- # accel_module=software
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # IFS=:
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # read -r var val
00:06:58.259 13:11:12 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:58.259 13:11:12 -- accel/accel.sh@22 -- # case "$var" in
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # IFS=:
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # read -r var val
00:06:58.259 13:11:12 -- accel/accel.sh@21 -- # val=32
00:06:58.259 13:11:12 -- accel/accel.sh@22 -- # case "$var" in
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # IFS=:
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # read -r var val
00:06:58.259 13:11:12 -- accel/accel.sh@21 -- # val=32
00:06:58.259 13:11:12 -- accel/accel.sh@22 -- # case "$var" in
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # IFS=:
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # read -r var val
00:06:58.259 13:11:12 -- accel/accel.sh@21 -- # val=2
00:06:58.259 13:11:12 -- accel/accel.sh@22 -- # case "$var" in
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # IFS=:
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # read -r var val
00:06:58.259 13:11:12 -- accel/accel.sh@21 -- # val='1 seconds'
00:06:58.259 13:11:12 -- accel/accel.sh@22 -- # case "$var" in
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # IFS=:
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # read -r var val
00:06:58.259 13:11:12 -- accel/accel.sh@21 -- # val=Yes
00:06:58.259 13:11:12 -- accel/accel.sh@22 -- # case "$var" in
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # IFS=:
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # read -r var val
00:06:58.259 13:11:12 -- accel/accel.sh@21 -- # val=
00:06:58.259 13:11:12 -- accel/accel.sh@22 -- # case "$var" in
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # IFS=:
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # read -r var val
00:06:58.259 13:11:12 -- accel/accel.sh@21 -- # val=
00:06:58.259 13:11:12 -- accel/accel.sh@22 -- # case "$var" in
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # IFS=:
00:06:58.259 13:11:12 -- accel/accel.sh@20 -- # read -r var val
00:07:00.160 13:11:14 -- accel/accel.sh@21 -- # val=
00:07:00.160 13:11:14 -- accel/accel.sh@22 -- # case "$var" in
00:07:00.160 13:11:14 -- accel/accel.sh@20 -- # IFS=:
00:07:00.160 13:11:14 -- accel/accel.sh@20 -- # read -r var val
00:07:00.160 13:11:14 -- accel/accel.sh@21 -- # val=
00:07:00.160 13:11:14 -- accel/accel.sh@22 -- # case "$var" in
00:07:00.160 13:11:14 -- accel/accel.sh@20 -- # IFS=:
00:07:00.160 13:11:14 -- accel/accel.sh@20 -- # read -r var val
00:07:00.160 13:11:14 -- accel/accel.sh@21 -- # val=
00:07:00.160 13:11:14 -- accel/accel.sh@22 -- # case "$var" in
00:07:00.160 13:11:14 -- accel/accel.sh@20 -- # IFS=:
00:07:00.160 13:11:14 -- accel/accel.sh@20 -- # read -r var val
00:07:00.160 13:11:14 -- accel/accel.sh@21 -- # val=
00:07:00.160 13:11:14 -- accel/accel.sh@22 -- # case "$var" in
00:07:00.160 13:11:14 -- accel/accel.sh@20 -- # IFS=:
00:07:00.160 13:11:14 -- accel/accel.sh@20 -- # read -r var val
00:07:00.160 13:11:14 -- accel/accel.sh@21 -- # val=
00:07:00.160 13:11:14 -- accel/accel.sh@22 -- # case "$var" in
00:07:00.160 13:11:14 -- accel/accel.sh@20 -- # IFS=:
00:07:00.160 13:11:14 -- accel/accel.sh@20 -- # read -r var val
00:07:00.160 13:11:14 -- accel/accel.sh@21 -- # val=
00:07:00.160 13:11:14 -- accel/accel.sh@22 -- # case "$var" in
00:07:00.160 13:11:14 -- accel/accel.sh@20 -- # IFS=:
00:07:00.160 13:11:14 -- accel/accel.sh@20 -- # read -r var val
00:07:00.160 13:11:14 -- accel/accel.sh@21 -- # val=
00:07:00.160 13:11:14 -- accel/accel.sh@22 -- # case "$var" in
00:07:00.160 13:11:14 -- accel/accel.sh@20 -- # IFS=:
00:07:00.160 13:11:14 -- accel/accel.sh@20 -- # read -r var val
00:07:00.160 13:11:14 -- accel/accel.sh@28 -- # [[ -n software ]]
00:07:00.160 13:11:14 -- accel/accel.sh@28 -- # [[ -n decompress ]]
************************************
00:07:00.160 END TEST accel_decomp_full_mthread
************************************
00:07:00.160 13:11:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:00.160
00:07:00.160 real 0m3.989s
00:07:00.160 user 0m3.541s
00:07:00.160 sys 0m0.236s
00:07:00.160 13:11:14 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:00.160 13:11:14 -- common/autotest_common.sh@10 -- # set +x
00:07:00.160 13:11:14 -- accel/accel.sh@116 -- # [[ n == y ]]
00:07:00.160 13:11:14 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62
00:07:00.160 13:11:14 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:07:00.160 13:11:14 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:00.160 13:11:14 -- common/autotest_common.sh@10 -- # set +x
00:07:00.160 13:11:14 -- accel/accel.sh@129 -- # build_accel_config
00:07:00.160 13:11:14 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:00.160 13:11:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:00.160 13:11:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:00.160 13:11:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:00.160 13:11:14 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:00.160 13:11:14 -- accel/accel.sh@41 -- # local IFS=,
00:07:00.160 13:11:14 -- accel/accel.sh@42 -- # jq -r .
************************************
00:07:00.160 START TEST accel_dif_functional_tests
************************************
00:07:00.160 13:11:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62
00:07:00.160 [2024-12-16 13:11:14.356315] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:00.160 [2024-12-16 13:11:14.356517] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59768 ]
00:07:00.160 [2024-12-16 13:11:14.503171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:00.160 [2024-12-16 13:11:14.645811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:00.160 [2024-12-16 13:11:14.646031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:00.160 [2024-12-16 13:11:14.646044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:00.418
00:07:00.418
00:07:00.418 CUnit - A unit testing framework for C - Version 2.1-3
00:07:00.418 http://cunit.sourceforge.net/
00:07:00.418
00:07:00.418
00:07:00.418 Suite: accel_dif
00:07:00.418 Test: verify: DIF generated, GUARD check ...passed
00:07:00.418 Test: verify: DIF generated, APPTAG check ...passed
00:07:00.418 Test: verify: DIF generated, REFTAG check ...passed
00:07:00.418 Test: verify: DIF not generated, GUARD check ...passed
00:07:00.418 Test: verify: DIF not generated, APPTAG check ...[2024-12-16 13:11:14.818898] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
[2024-12-16 13:11:14.819217] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
passed
00:07:00.418 Test: verify: DIF not generated, REFTAG check ...passed
00:07:00.418 Test: verify: APPTAG correct, APPTAG check ...[2024-12-16 13:11:14.819303] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
[2024-12-16 13:11:14.819453] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
[2024-12-16 13:11:14.819503] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
[2024-12-16 13:11:14.819595] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
passed
00:07:00.418 Test: verify: APPTAG incorrect, APPTAG check ...passed
00:07:00.418 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:07:00.418 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:07:00.418 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:07:00.418 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-16 13:11:14.819719] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
[2024-12-16 13:11:14.819911] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
passed
00:07:00.418 Test: generate copy: DIF generated, GUARD check ...passed
00:07:00.418 Test: generate copy: DIF generated, APPTAG check ...passed
00:07:00.418 Test: generate copy: DIF generated, REFTAG check ...passed
00:07:00.418 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:07:00.418 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:07:00.418 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:07:00.418 Test: generate copy: iovecs-len validate ...[2024-12-16 13:11:14.820484] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:07:00.418 passed
00:07:00.418 Test: generate copy: buffer alignment validate ...passed
00:07:00.418
00:07:00.418 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:00.418               suites      1      1    n/a      0        0
00:07:00.418                tests     20     20     20      0        0
00:07:00.418              asserts    204    204    204      0      n/a
00:07:00.418
00:07:00.418 Elapsed time =    0.004 seconds
00:07:00.988
00:07:00.988 real 0m1.124s
00:07:00.988 user 0m1.996s
00:07:00.988 sys 0m0.153s
00:07:00.989 13:11:15 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:00.989 13:11:15 -- common/autotest_common.sh@10 -- # set +x
00:07:00.989 ************************************
00:07:00.989 END TEST accel_dif_functional_tests
00:07:00.989 ************************************
00:07:00.989
00:07:00.989 real 1m24.362s
00:07:00.989 user 1m32.050s
00:07:00.989 sys 0m6.117s
00:07:00.989 ************************************
00:07:00.989 END TEST accel
00:07:00.989 ************************************
00:07:00.989 13:11:15 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:00.989 13:11:15 -- common/autotest_common.sh@10 -- # set +x
00:07:00.989 13:11:15 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh
00:07:00.989 13:11:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:00.989 13:11:15 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:00.989 13:11:15 -- common/autotest_common.sh@10 -- # set +x
00:07:00.989 ************************************
00:07:00.989 START TEST accel_rpc
00:07:00.989 ************************************
00:07:00.989 13:11:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh
00:07:01.250 * Looking for test storage...
00:07:01.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel
00:07:01.250 13:11:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:01.250 13:11:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:01.250 13:11:15 -- common/autotest_common.sh@1690 -- # lcov --version
00:07:01.250 13:11:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:01.250 13:11:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:01.250 13:11:15 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:01.250 13:11:15 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:01.250 13:11:15 -- scripts/common.sh@335 -- # IFS=.-:
00:07:01.250 13:11:15 -- scripts/common.sh@335 -- # read -ra ver1
00:07:01.250 13:11:15 -- scripts/common.sh@336 -- # IFS=.-:
00:07:01.250 13:11:15 -- scripts/common.sh@336 -- # read -ra ver2
00:07:01.250 13:11:15 -- scripts/common.sh@337 -- # local 'op=<'
00:07:01.250 13:11:15 -- scripts/common.sh@339 -- # ver1_l=2
00:07:01.250 13:11:15 -- scripts/common.sh@340 -- # ver2_l=1
00:07:01.250 13:11:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:01.250 13:11:15 -- scripts/common.sh@343 -- # case "$op" in
00:07:01.250 13:11:15 -- scripts/common.sh@344 -- # : 1
00:07:01.250 13:11:15 -- scripts/common.sh@363 -- # (( v = 0 ))
00:07:01.250 13:11:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:07:01.250 13:11:15 -- scripts/common.sh@364 -- # decimal 1 00:07:01.250 13:11:15 -- scripts/common.sh@352 -- # local d=1 00:07:01.250 13:11:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.250 13:11:15 -- scripts/common.sh@354 -- # echo 1 00:07:01.250 13:11:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:01.250 13:11:15 -- scripts/common.sh@365 -- # decimal 2 00:07:01.250 13:11:15 -- scripts/common.sh@352 -- # local d=2 00:07:01.250 13:11:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.250 13:11:15 -- scripts/common.sh@354 -- # echo 2 00:07:01.250 13:11:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:01.250 13:11:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:01.250 13:11:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:01.250 13:11:15 -- scripts/common.sh@367 -- # return 0 00:07:01.250 13:11:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.250 13:11:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:01.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.250 --rc genhtml_branch_coverage=1 00:07:01.250 --rc genhtml_function_coverage=1 00:07:01.250 --rc genhtml_legend=1 00:07:01.250 --rc geninfo_all_blocks=1 00:07:01.250 --rc geninfo_unexecuted_blocks=1 00:07:01.250 00:07:01.250 ' 00:07:01.250 13:11:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:01.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.250 --rc genhtml_branch_coverage=1 00:07:01.250 --rc genhtml_function_coverage=1 00:07:01.250 --rc genhtml_legend=1 00:07:01.250 --rc geninfo_all_blocks=1 00:07:01.250 --rc geninfo_unexecuted_blocks=1 00:07:01.250 00:07:01.250 ' 00:07:01.250 13:11:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:01.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.250 --rc genhtml_branch_coverage=1 00:07:01.250 --rc genhtml_function_coverage=1 00:07:01.250 --rc genhtml_legend=1 00:07:01.250 --rc geninfo_all_blocks=1 00:07:01.250 --rc geninfo_unexecuted_blocks=1 00:07:01.250 00:07:01.250 ' 00:07:01.250 13:11:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:01.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.250 --rc genhtml_branch_coverage=1 00:07:01.250 --rc genhtml_function_coverage=1 00:07:01.250 --rc genhtml_legend=1 00:07:01.250 --rc geninfo_all_blocks=1 00:07:01.250 --rc geninfo_unexecuted_blocks=1 00:07:01.250 00:07:01.250 ' 00:07:01.250 13:11:15 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:01.250 13:11:15 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=59851 00:07:01.250 13:11:15 -- accel/accel_rpc.sh@15 -- # waitforlisten 59851 00:07:01.250 13:11:15 -- common/autotest_common.sh@829 -- # '[' -z 59851 ']' 00:07:01.250 13:11:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.250 13:11:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.250 13:11:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
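The accel_rpc stage above launches its own spdk_tgt with --wait-for-rpc, and the waitforlisten helper polls the RPC socket until the target answers instead of sleeping for a fixed interval. A minimal standalone sketch of that start-and-wait sequence, assuming the same SPDK checkout path used throughout this log and the default /var/tmp/spdk.sock socket (the polling loop below is an illustrative stand-in for waitforlisten, not a copy of the helper):

  SPDK=/home/vagrant/spdk_repo/spdk
  # Start the target with framework initialization deferred, as accel_rpc.sh does.
  "$SPDK"/build/bin/spdk_tgt --wait-for-rpc &
  spdk_tgt_pid=$!
  # Poll until the RPC socket responds; spdk_get_version is served even before init.
  until "$SPDK"/scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
      sleep 0.2
  done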
00:07:01.250 13:11:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.250 13:11:15 -- common/autotest_common.sh@10 -- # set +x 00:07:01.250 13:11:15 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:01.250 [2024-12-16 13:11:15.718658] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:01.250 [2024-12-16 13:11:15.718771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59851 ] 00:07:01.511 [2024-12-16 13:11:15.864980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.511 [2024-12-16 13:11:16.040495] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:01.511 [2024-12-16 13:11:16.040719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.083 13:11:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.083 13:11:16 -- common/autotest_common.sh@862 -- # return 0 00:07:02.083 13:11:16 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:02.083 13:11:16 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:02.083 13:11:16 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:02.083 13:11:16 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:02.083 13:11:16 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:02.083 13:11:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:02.083 13:11:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.083 13:11:16 -- common/autotest_common.sh@10 -- # set +x 00:07:02.083 ************************************ 00:07:02.083 START TEST accel_assign_opcode 00:07:02.083 ************************************ 00:07:02.083 13:11:16 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:02.083 13:11:16 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:02.083 13:11:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.083 13:11:16 -- common/autotest_common.sh@10 -- # set +x 00:07:02.083 [2024-12-16 13:11:16.533334] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:02.083 13:11:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.083 13:11:16 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:02.083 13:11:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.083 13:11:16 -- common/autotest_common.sh@10 -- # set +x 00:07:02.083 [2024-12-16 13:11:16.541295] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:02.083 13:11:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.083 13:11:16 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:02.083 13:11:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.083 13:11:16 -- common/autotest_common.sh@10 -- # set +x 00:07:02.657 13:11:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.657 13:11:17 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:02.657 13:11:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.657 13:11:17 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:02.658 13:11:17 -- common/autotest_common.sh@10 -- # set +x 00:07:02.658 13:11:17 -- accel/accel_rpc.sh@42 -- # grep software 
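With the target still waiting, the trace above pins the copy opcode to the software module, completes startup with framework_start_init, and reads the assignment table back. The same sequence as direct rpc.py calls, assuming the checkout path and default RPC socket from the sketch above, would look roughly like:

  SPDK=/home/vagrant/spdk_repo/spdk
  # Opcode assignment is only accepted while the framework is uninitialized.
  "$SPDK"/scripts/rpc.py accel_assign_opc -o copy -m software
  "$SPDK"/scripts/rpc.py framework_start_init
  # Verify the assignment took effect; this should print: software
  "$SPDK"/scripts/rpc.py accel_get_opc_assignments | jq -r .copy

The single-line software output is exactly what the grep software check in the trace confirms.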
00:07:02.658 13:11:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.658 software 00:07:02.658 00:07:02.658 real 0m0.584s 00:07:02.658 user 0m0.037s 00:07:02.658 sys 0m0.006s 00:07:02.658 13:11:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.658 ************************************ 00:07:02.658 END TEST accel_assign_opcode 00:07:02.658 ************************************ 00:07:02.658 13:11:17 -- common/autotest_common.sh@10 -- # set +x 00:07:02.658 13:11:17 -- accel/accel_rpc.sh@55 -- # killprocess 59851 00:07:02.658 13:11:17 -- common/autotest_common.sh@936 -- # '[' -z 59851 ']' 00:07:02.658 13:11:17 -- common/autotest_common.sh@940 -- # kill -0 59851 00:07:02.658 13:11:17 -- common/autotest_common.sh@941 -- # uname 00:07:02.658 13:11:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:02.658 13:11:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59851 00:07:02.658 killing process with pid 59851 00:07:02.658 13:11:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:02.658 13:11:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:02.658 13:11:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59851' 00:07:02.658 13:11:17 -- common/autotest_common.sh@955 -- # kill 59851 00:07:02.658 13:11:17 -- common/autotest_common.sh@960 -- # wait 59851 00:07:04.571 00:07:04.571 real 0m3.167s 00:07:04.571 user 0m3.130s 00:07:04.571 sys 0m0.392s 00:07:04.571 ************************************ 00:07:04.571 END TEST accel_rpc 00:07:04.571 ************************************ 00:07:04.571 13:11:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.571 13:11:18 -- common/autotest_common.sh@10 -- # set +x 00:07:04.571 13:11:18 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:04.571 13:11:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:04.571 13:11:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.571 13:11:18 -- common/autotest_common.sh@10 -- # set +x 00:07:04.571 ************************************ 00:07:04.571 START TEST app_cmdline 00:07:04.571 ************************************ 00:07:04.571 13:11:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:04.571 * Looking for test storage... 
00:07:04.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:04.571 13:11:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:04.571 13:11:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:04.571 13:11:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:04.571 13:11:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:04.571 13:11:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:04.571 13:11:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:04.571 13:11:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:04.571 13:11:18 -- scripts/common.sh@335 -- # IFS=.-: 00:07:04.571 13:11:18 -- scripts/common.sh@335 -- # read -ra ver1 00:07:04.571 13:11:18 -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.571 13:11:18 -- scripts/common.sh@336 -- # read -ra ver2 00:07:04.571 13:11:18 -- scripts/common.sh@337 -- # local 'op=<' 00:07:04.571 13:11:18 -- scripts/common.sh@339 -- # ver1_l=2 00:07:04.571 13:11:18 -- scripts/common.sh@340 -- # ver2_l=1 00:07:04.571 13:11:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:04.571 13:11:18 -- scripts/common.sh@343 -- # case "$op" in 00:07:04.571 13:11:18 -- scripts/common.sh@344 -- # : 1 00:07:04.571 13:11:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:04.571 13:11:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:04.571 13:11:18 -- scripts/common.sh@364 -- # decimal 1 00:07:04.571 13:11:18 -- scripts/common.sh@352 -- # local d=1 00:07:04.571 13:11:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.571 13:11:18 -- scripts/common.sh@354 -- # echo 1 00:07:04.571 13:11:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:04.571 13:11:18 -- scripts/common.sh@365 -- # decimal 2 00:07:04.571 13:11:18 -- scripts/common.sh@352 -- # local d=2 00:07:04.571 13:11:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.571 13:11:18 -- scripts/common.sh@354 -- # echo 2 00:07:04.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:04.571 13:11:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:04.571 13:11:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:04.571 13:11:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:04.571 13:11:18 -- scripts/common.sh@367 -- # return 0 00:07:04.571 13:11:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.571 13:11:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:04.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.571 --rc genhtml_branch_coverage=1 00:07:04.571 --rc genhtml_function_coverage=1 00:07:04.571 --rc genhtml_legend=1 00:07:04.571 --rc geninfo_all_blocks=1 00:07:04.571 --rc geninfo_unexecuted_blocks=1 00:07:04.571 00:07:04.571 ' 00:07:04.571 13:11:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:04.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.571 --rc genhtml_branch_coverage=1 00:07:04.571 --rc genhtml_function_coverage=1 00:07:04.571 --rc genhtml_legend=1 00:07:04.571 --rc geninfo_all_blocks=1 00:07:04.571 --rc geninfo_unexecuted_blocks=1 00:07:04.571 00:07:04.571 ' 00:07:04.571 13:11:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:04.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.571 --rc genhtml_branch_coverage=1 00:07:04.571 --rc genhtml_function_coverage=1 00:07:04.571 --rc genhtml_legend=1 00:07:04.571 --rc geninfo_all_blocks=1 00:07:04.571 --rc geninfo_unexecuted_blocks=1 00:07:04.571 00:07:04.571 ' 00:07:04.571 13:11:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:04.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.571 --rc genhtml_branch_coverage=1 00:07:04.571 --rc genhtml_function_coverage=1 00:07:04.571 --rc genhtml_legend=1 00:07:04.571 --rc geninfo_all_blocks=1 00:07:04.571 --rc geninfo_unexecuted_blocks=1 00:07:04.571 00:07:04.571 ' 00:07:04.571 13:11:18 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:04.571 13:11:18 -- app/cmdline.sh@17 -- # spdk_tgt_pid=59963 00:07:04.571 13:11:18 -- app/cmdline.sh@18 -- # waitforlisten 59963 00:07:04.571 13:11:18 -- common/autotest_common.sh@829 -- # '[' -z 59963 ']' 00:07:04.571 13:11:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.571 13:11:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.571 13:11:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.571 13:11:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.571 13:11:18 -- common/autotest_common.sh@10 -- # set +x 00:07:04.571 13:11:18 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:04.571 [2024-12-16 13:11:18.955345] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:04.571 [2024-12-16 13:11:18.955641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59963 ] 00:07:04.571 [2024-12-16 13:11:19.105776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.829 [2024-12-16 13:11:19.255793] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:04.829 [2024-12-16 13:11:19.255966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.394 13:11:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.394 13:11:19 -- common/autotest_common.sh@862 -- # return 0 00:07:05.394 13:11:19 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:05.394 { 00:07:05.394 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:05.394 "fields": { 00:07:05.394 "major": 24, 00:07:05.394 "minor": 1, 00:07:05.394 "patch": 1, 00:07:05.394 "suffix": "-pre", 00:07:05.394 "commit": "c13c99a5e" 00:07:05.394 } 00:07:05.394 } 00:07:05.394 13:11:19 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:05.394 13:11:19 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:05.394 13:11:19 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:05.394 13:11:19 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:05.394 13:11:19 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:05.394 13:11:19 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:05.394 13:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.394 13:11:19 -- app/cmdline.sh@26 -- # sort 00:07:05.394 13:11:19 -- common/autotest_common.sh@10 -- # set +x 00:07:05.652 13:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.652 13:11:19 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:05.652 13:11:19 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:05.652 13:11:19 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:05.653 13:11:19 -- common/autotest_common.sh@650 -- # local es=0 00:07:05.653 13:11:19 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:05.653 13:11:19 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:05.653 13:11:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.653 13:11:19 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:05.653 13:11:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.653 13:11:19 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:05.653 13:11:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.653 13:11:19 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:05.653 13:11:19 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:05.653 13:11:19 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:05.653 request: 00:07:05.653 { 00:07:05.653 "method": "env_dpdk_get_mem_stats", 00:07:05.653 "req_id": 1 00:07:05.653 } 00:07:05.653 Got 
JSON-RPC error response 00:07:05.653 response: 00:07:05.653 { 00:07:05.653 "code": -32601, 00:07:05.653 "message": "Method not found" 00:07:05.653 } 00:07:05.653 13:11:20 -- common/autotest_common.sh@653 -- # es=1 00:07:05.653 13:11:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.653 13:11:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:05.653 13:11:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.653 13:11:20 -- app/cmdline.sh@1 -- # killprocess 59963 00:07:05.653 13:11:20 -- common/autotest_common.sh@936 -- # '[' -z 59963 ']' 00:07:05.653 13:11:20 -- common/autotest_common.sh@940 -- # kill -0 59963 00:07:05.653 13:11:20 -- common/autotest_common.sh@941 -- # uname 00:07:05.653 13:11:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:05.653 13:11:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59963 00:07:05.653 13:11:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:05.653 13:11:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:05.653 killing process with pid 59963 00:07:05.653 13:11:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59963' 00:07:05.653 13:11:20 -- common/autotest_common.sh@955 -- # kill 59963 00:07:05.653 13:11:20 -- common/autotest_common.sh@960 -- # wait 59963 00:07:07.029 00:07:07.029 real 0m2.663s 00:07:07.029 user 0m2.932s 00:07:07.029 sys 0m0.421s 00:07:07.029 ************************************ 00:07:07.029 13:11:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:07.029 13:11:21 -- common/autotest_common.sh@10 -- # set +x 00:07:07.029 END TEST app_cmdline 00:07:07.029 ************************************ 00:07:07.029 13:11:21 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:07.029 13:11:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:07.029 13:11:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.029 13:11:21 -- common/autotest_common.sh@10 -- # set +x 00:07:07.029 ************************************ 00:07:07.029 START TEST version 00:07:07.029 ************************************ 00:07:07.029 13:11:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:07.029 * Looking for test storage... 
00:07:07.029 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:07.029 13:11:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:07.029 13:11:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:07.029 13:11:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:07.291 13:11:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:07.291 13:11:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:07.291 13:11:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:07.291 13:11:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:07.291 13:11:21 -- scripts/common.sh@335 -- # IFS=.-: 00:07:07.291 13:11:21 -- scripts/common.sh@335 -- # read -ra ver1 00:07:07.291 13:11:21 -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.291 13:11:21 -- scripts/common.sh@336 -- # read -ra ver2 00:07:07.291 13:11:21 -- scripts/common.sh@337 -- # local 'op=<' 00:07:07.291 13:11:21 -- scripts/common.sh@339 -- # ver1_l=2 00:07:07.291 13:11:21 -- scripts/common.sh@340 -- # ver2_l=1 00:07:07.291 13:11:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:07.291 13:11:21 -- scripts/common.sh@343 -- # case "$op" in 00:07:07.291 13:11:21 -- scripts/common.sh@344 -- # : 1 00:07:07.291 13:11:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:07.291 13:11:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.291 13:11:21 -- scripts/common.sh@364 -- # decimal 1 00:07:07.291 13:11:21 -- scripts/common.sh@352 -- # local d=1 00:07:07.291 13:11:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.291 13:11:21 -- scripts/common.sh@354 -- # echo 1 00:07:07.291 13:11:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:07.291 13:11:21 -- scripts/common.sh@365 -- # decimal 2 00:07:07.291 13:11:21 -- scripts/common.sh@352 -- # local d=2 00:07:07.291 13:11:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.291 13:11:21 -- scripts/common.sh@354 -- # echo 2 00:07:07.291 13:11:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:07.291 13:11:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:07.291 13:11:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:07.291 13:11:21 -- scripts/common.sh@367 -- # return 0 00:07:07.291 13:11:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.291 13:11:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:07.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.291 --rc genhtml_branch_coverage=1 00:07:07.291 --rc genhtml_function_coverage=1 00:07:07.291 --rc genhtml_legend=1 00:07:07.291 --rc geninfo_all_blocks=1 00:07:07.291 --rc geninfo_unexecuted_blocks=1 00:07:07.291 00:07:07.291 ' 00:07:07.291 13:11:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:07.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.291 --rc genhtml_branch_coverage=1 00:07:07.291 --rc genhtml_function_coverage=1 00:07:07.291 --rc genhtml_legend=1 00:07:07.291 --rc geninfo_all_blocks=1 00:07:07.291 --rc geninfo_unexecuted_blocks=1 00:07:07.291 00:07:07.291 ' 00:07:07.291 13:11:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:07.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.291 --rc genhtml_branch_coverage=1 00:07:07.291 --rc genhtml_function_coverage=1 00:07:07.291 --rc genhtml_legend=1 00:07:07.291 --rc geninfo_all_blocks=1 00:07:07.291 --rc geninfo_unexecuted_blocks=1 00:07:07.291 00:07:07.291 ' 00:07:07.291 13:11:21 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:07.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.291 --rc genhtml_branch_coverage=1 00:07:07.291 --rc genhtml_function_coverage=1 00:07:07.291 --rc genhtml_legend=1 00:07:07.291 --rc geninfo_all_blocks=1 00:07:07.291 --rc geninfo_unexecuted_blocks=1 00:07:07.291 00:07:07.291 ' 00:07:07.291 13:11:21 -- app/version.sh@17 -- # get_header_version major 00:07:07.291 13:11:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:07.291 13:11:21 -- app/version.sh@14 -- # cut -f2 00:07:07.291 13:11:21 -- app/version.sh@14 -- # tr -d '"' 00:07:07.291 13:11:21 -- app/version.sh@17 -- # major=24 00:07:07.291 13:11:21 -- app/version.sh@18 -- # get_header_version minor 00:07:07.291 13:11:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:07.291 13:11:21 -- app/version.sh@14 -- # tr -d '"' 00:07:07.291 13:11:21 -- app/version.sh@14 -- # cut -f2 00:07:07.291 13:11:21 -- app/version.sh@18 -- # minor=1 00:07:07.291 13:11:21 -- app/version.sh@19 -- # get_header_version patch 00:07:07.291 13:11:21 -- app/version.sh@14 -- # tr -d '"' 00:07:07.291 13:11:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:07.291 13:11:21 -- app/version.sh@14 -- # cut -f2 00:07:07.291 13:11:21 -- app/version.sh@19 -- # patch=1 00:07:07.291 13:11:21 -- app/version.sh@20 -- # get_header_version suffix 00:07:07.291 13:11:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:07.291 13:11:21 -- app/version.sh@14 -- # cut -f2 00:07:07.291 13:11:21 -- app/version.sh@14 -- # tr -d '"' 00:07:07.291 13:11:21 -- app/version.sh@20 -- # suffix=-pre 00:07:07.291 13:11:21 -- app/version.sh@22 -- # version=24.1 00:07:07.291 13:11:21 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:07.291 13:11:21 -- app/version.sh@25 -- # version=24.1.1 00:07:07.291 13:11:21 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:07.291 13:11:21 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:07.291 13:11:21 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:07.291 13:11:21 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:07.291 13:11:21 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:07.291 00:07:07.291 real 0m0.198s 00:07:07.291 user 0m0.121s 00:07:07.291 sys 0m0.100s 00:07:07.291 13:11:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:07.291 ************************************ 00:07:07.291 END TEST version 00:07:07.291 13:11:21 -- common/autotest_common.sh@10 -- # set +x 00:07:07.291 ************************************ 00:07:07.291 13:11:21 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:07.291 13:11:21 -- spdk/autotest.sh@191 -- # uname -s 00:07:07.291 13:11:21 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:07.291 13:11:21 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:07.291 13:11:21 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:07.291 13:11:21 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:07:07.291 13:11:21 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme 
/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:07.291 13:11:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:07.291 13:11:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.291 13:11:21 -- common/autotest_common.sh@10 -- # set +x 00:07:07.291 ************************************ 00:07:07.291 START TEST blockdev_nvme 00:07:07.291 ************************************ 00:07:07.291 13:11:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:07.291 * Looking for test storage... 00:07:07.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:07.291 13:11:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:07.291 13:11:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:07.291 13:11:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:07.551 13:11:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:07.551 13:11:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:07.551 13:11:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:07.551 13:11:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:07.551 13:11:21 -- scripts/common.sh@335 -- # IFS=.-: 00:07:07.551 13:11:21 -- scripts/common.sh@335 -- # read -ra ver1 00:07:07.551 13:11:21 -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.551 13:11:21 -- scripts/common.sh@336 -- # read -ra ver2 00:07:07.551 13:11:21 -- scripts/common.sh@337 -- # local 'op=<' 00:07:07.551 13:11:21 -- scripts/common.sh@339 -- # ver1_l=2 00:07:07.551 13:11:21 -- scripts/common.sh@340 -- # ver2_l=1 00:07:07.551 13:11:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:07.551 13:11:21 -- scripts/common.sh@343 -- # case "$op" in 00:07:07.551 13:11:21 -- scripts/common.sh@344 -- # : 1 00:07:07.551 13:11:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:07.551 13:11:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.551 13:11:21 -- scripts/common.sh@364 -- # decimal 1 00:07:07.551 13:11:21 -- scripts/common.sh@352 -- # local d=1 00:07:07.551 13:11:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.551 13:11:21 -- scripts/common.sh@354 -- # echo 1 00:07:07.551 13:11:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:07.551 13:11:21 -- scripts/common.sh@365 -- # decimal 2 00:07:07.551 13:11:21 -- scripts/common.sh@352 -- # local d=2 00:07:07.551 13:11:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.551 13:11:21 -- scripts/common.sh@354 -- # echo 2 00:07:07.551 13:11:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:07.551 13:11:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:07.551 13:11:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:07.551 13:11:21 -- scripts/common.sh@367 -- # return 0 00:07:07.551 13:11:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.551 13:11:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:07.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.551 --rc genhtml_branch_coverage=1 00:07:07.551 --rc genhtml_function_coverage=1 00:07:07.551 --rc genhtml_legend=1 00:07:07.551 --rc geninfo_all_blocks=1 00:07:07.551 --rc geninfo_unexecuted_blocks=1 00:07:07.551 00:07:07.551 ' 00:07:07.551 13:11:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:07.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.551 --rc genhtml_branch_coverage=1 00:07:07.551 --rc genhtml_function_coverage=1 00:07:07.551 --rc genhtml_legend=1 00:07:07.551 --rc geninfo_all_blocks=1 00:07:07.551 --rc geninfo_unexecuted_blocks=1 00:07:07.551 00:07:07.551 ' 00:07:07.551 13:11:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:07.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.551 --rc genhtml_branch_coverage=1 00:07:07.551 --rc genhtml_function_coverage=1 00:07:07.551 --rc genhtml_legend=1 00:07:07.551 --rc geninfo_all_blocks=1 00:07:07.551 --rc geninfo_unexecuted_blocks=1 00:07:07.551 00:07:07.551 ' 00:07:07.551 13:11:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:07.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.551 --rc genhtml_branch_coverage=1 00:07:07.551 --rc genhtml_function_coverage=1 00:07:07.551 --rc genhtml_legend=1 00:07:07.551 --rc geninfo_all_blocks=1 00:07:07.551 --rc geninfo_unexecuted_blocks=1 00:07:07.551 00:07:07.551 ' 00:07:07.551 13:11:21 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:07.551 13:11:21 -- bdev/nbd_common.sh@6 -- # set -e 00:07:07.551 13:11:21 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:07.551 13:11:21 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:07.551 13:11:21 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:07.551 13:11:21 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:07.551 13:11:21 -- bdev/blockdev.sh@18 -- # : 00:07:07.551 13:11:21 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:07:07.551 13:11:21 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:07:07.551 13:11:21 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:07:07.551 13:11:21 -- bdev/blockdev.sh@672 -- # uname -s 00:07:07.551 13:11:21 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:07:07.551 13:11:21 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:07:07.551 13:11:21 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:07:07.551 13:11:21 -- bdev/blockdev.sh@681 -- # crypto_device= 00:07:07.551 13:11:21 -- bdev/blockdev.sh@682 -- # dek= 00:07:07.551 13:11:21 -- bdev/blockdev.sh@683 -- # env_ctx= 00:07:07.551 13:11:21 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:07:07.551 13:11:21 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:07:07.551 13:11:21 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:07:07.551 13:11:21 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:07:07.551 13:11:21 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:07:07.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.551 13:11:21 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=60136 00:07:07.551 13:11:21 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:07.551 13:11:21 -- bdev/blockdev.sh@47 -- # waitforlisten 60136 00:07:07.551 13:11:21 -- common/autotest_common.sh@829 -- # '[' -z 60136 ']' 00:07:07.551 13:11:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.551 13:11:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.551 13:11:21 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:07.552 13:11:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.552 13:11:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.552 13:11:21 -- common/autotest_common.sh@10 -- # set +x 00:07:07.552 [2024-12-16 13:11:21.951707] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:07.552 [2024-12-16 13:11:21.951812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60136 ] 00:07:07.552 [2024-12-16 13:11:22.101086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.812 [2024-12-16 13:11:22.276658] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:07.812 [2024-12-16 13:11:22.276845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.199 13:11:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.199 13:11:23 -- common/autotest_common.sh@862 -- # return 0 00:07:09.199 13:11:23 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:07:09.199 13:11:23 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:07:09.199 13:11:23 -- bdev/blockdev.sh@79 -- # local json 00:07:09.199 13:11:23 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:07:09.199 13:11:23 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:09.199 13:11:23 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:07:09.199 13:11:23 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.199 13:11:23 -- common/autotest_common.sh@10 -- # set +x 00:07:09.460 13:11:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.460 13:11:23 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:07:09.460 13:11:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.460 13:11:23 -- common/autotest_common.sh@10 -- # set +x 00:07:09.460 13:11:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.460 13:11:23 -- bdev/blockdev.sh@738 -- # cat 00:07:09.460 13:11:23 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:07:09.460 13:11:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.460 13:11:23 -- common/autotest_common.sh@10 -- # set +x 00:07:09.460 13:11:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.460 13:11:23 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:07:09.460 13:11:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.460 13:11:23 -- common/autotest_common.sh@10 -- # set +x 00:07:09.460 13:11:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.460 13:11:23 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:09.460 13:11:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.460 13:11:23 -- common/autotest_common.sh@10 -- # set +x 00:07:09.460 13:11:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.460 13:11:23 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:07:09.460 13:11:23 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:07:09.460 13:11:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.460 13:11:23 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:07:09.460 13:11:23 -- common/autotest_common.sh@10 -- # set +x 00:07:09.460 13:11:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.460 13:11:23 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:07:09.460 13:11:23 -- bdev/blockdev.sh@747 -- # jq -r .name 00:07:09.461 13:11:23 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "78eb9c5d-0ff3-4dc2-a885-30f218402cb5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "78eb9c5d-0ff3-4dc2-a885-30f218402cb5",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "c5df0a77-f62f-449f-80fd-63749f820911"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c5df0a77-f62f-449f-80fd-63749f820911",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "bf0099d3-5ff9-4045-8a3b-abf290c2431c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bf0099d3-5ff9-4045-8a3b-abf290c2431c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "81b17f76-2724-4486-82f2-e415e96300bb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "81b17f76-2724-4486-82f2-e415e96300bb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 
1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "ba35a5ad-76cc-4d92-8663-c8d6cec3047e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ba35a5ad-76cc-4d92-8663-c8d6cec3047e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "738475b0-8f8b-4cc4-918a-2fa97e2e3e4b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "738475b0-8f8b-4cc4-918a-2fa97e2e3e4b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:09.461 13:11:23 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:07:09.461 13:11:23 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:07:09.461 13:11:23 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:07:09.461 13:11:23 -- bdev/blockdev.sh@752 -- # killprocess 60136 00:07:09.461 13:11:23 -- common/autotest_common.sh@936 -- # '[' -z 60136 ']' 00:07:09.461 13:11:23 -- common/autotest_common.sh@940 -- # kill -0 60136 00:07:09.461 13:11:23 -- common/autotest_common.sh@941 -- # uname 00:07:09.461 13:11:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:09.461 13:11:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60136 
00:07:09.461 killing process with pid 60136 00:07:09.461 13:11:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:09.461 13:11:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:09.461 13:11:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60136' 00:07:09.461 13:11:23 -- common/autotest_common.sh@955 -- # kill 60136 00:07:09.461 13:11:23 -- common/autotest_common.sh@960 -- # wait 60136 00:07:11.386 13:11:25 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:11.386 13:11:25 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:11.386 13:11:25 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:11.386 13:11:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.386 13:11:25 -- common/autotest_common.sh@10 -- # set +x 00:07:11.386 ************************************ 00:07:11.386 START TEST bdev_hello_world 00:07:11.386 ************************************ 00:07:11.386 13:11:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:11.386 [2024-12-16 13:11:25.675568] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:11.386 [2024-12-16 13:11:25.675726] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60228 ] 00:07:11.386 [2024-12-16 13:11:25.832760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.646 [2024-12-16 13:11:26.009132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.217 [2024-12-16 13:11:26.533255] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:12.217 [2024-12-16 13:11:26.533303] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:12.217 [2024-12-16 13:11:26.533323] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:12.217 [2024-12-16 13:11:26.535779] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:12.217 [2024-12-16 13:11:26.536441] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:12.217 [2024-12-16 13:11:26.536467] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:12.217 [2024-12-16 13:11:26.537052] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:07:12.217 00:07:12.217 [2024-12-16 13:11:26.537078] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:12.786 ************************************ 00:07:12.786 END TEST bdev_hello_world 00:07:12.786 ************************************ 00:07:12.786 00:07:12.786 real 0m1.642s 00:07:12.786 user 0m1.360s 00:07:12.786 sys 0m0.175s 00:07:12.786 13:11:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.786 13:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:12.786 13:11:27 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:07:12.786 13:11:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:12.786 13:11:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.786 13:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:12.786 ************************************ 00:07:12.786 START TEST bdev_bounds 00:07:12.786 ************************************ 00:07:12.786 13:11:27 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:07:12.786 Process bdevio pid: 60270 00:07:12.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.786 13:11:27 -- bdev/blockdev.sh@288 -- # bdevio_pid=60270 00:07:12.786 13:11:27 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:12.786 13:11:27 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 60270' 00:07:12.786 13:11:27 -- bdev/blockdev.sh@291 -- # waitforlisten 60270 00:07:12.786 13:11:27 -- common/autotest_common.sh@829 -- # '[' -z 60270 ']' 00:07:12.786 13:11:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.786 13:11:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.786 13:11:27 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:12.786 13:11:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.786 13:11:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.786 13:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:13.047 [2024-12-16 13:11:27.360698] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
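bdev_bounds is a thin wrapper around the bdevio app: blockdev.sh launches bdevio against the same bdev.json and then drives the CUnit suites below through its tests.py helper. A sketch of the two commands from the invocation above (-s 0 matches PRE_RESERVED_MEM=0 set earlier; -w appears to hold bdevio until tests.py issues perform_tests; the trailing '' extra-arguments slot is dropped):

  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests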
00:07:13.048 [2024-12-16 13:11:27.361579] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60270 ] 00:07:13.048 [2024-12-16 13:11:27.510269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.309 [2024-12-16 13:11:27.690960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.309 [2024-12-16 13:11:27.691300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.309 [2024-12-16 13:11:27.691458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.695 13:11:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.696 13:11:28 -- common/autotest_common.sh@862 -- # return 0 00:07:14.696 13:11:28 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:14.696 I/O targets: 00:07:14.696 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:14.696 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:14.696 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:14.696 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:14.696 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:14.696 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:14.696 00:07:14.696 00:07:14.696 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.696 http://cunit.sourceforge.net/ 00:07:14.696 00:07:14.696 00:07:14.696 Suite: bdevio tests on: Nvme3n1 00:07:14.696 Test: blockdev write read block ...passed 00:07:14.696 Test: blockdev write zeroes read block ...passed 00:07:14.696 Test: blockdev write zeroes read no split ...passed 00:07:14.696 Test: blockdev write zeroes read split ...passed 00:07:14.696 Test: blockdev write zeroes read split partial ...passed 00:07:14.696 Test: blockdev reset ...[2024-12-16 13:11:29.016417] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:07:14.696 [2024-12-16 13:11:29.018856] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:14.696 passed 00:07:14.696 Test: blockdev write read 8 blocks ...passed 00:07:14.696 Test: blockdev write read size > 128k ...passed 00:07:14.696 Test: blockdev write read invalid size ...passed 00:07:14.696 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:14.696 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:14.696 Test: blockdev write read max offset ...passed 00:07:14.696 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:14.696 Test: blockdev writev readv 8 blocks ...passed 00:07:14.696 Test: blockdev writev readv 30 x 1block ...passed 00:07:14.696 Test: blockdev writev readv block ...passed 00:07:14.696 Test: blockdev writev readv size > 128k ...passed 00:07:14.696 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:14.696 Test: blockdev comparev and writev ...[2024-12-16 13:11:29.025481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x278e0e000 len:0x1000 00:07:14.696 [2024-12-16 13:11:29.025526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:14.696 passed 00:07:14.696 Test: blockdev nvme passthru rw ...passed 00:07:14.696 Test: blockdev nvme passthru vendor specific ...[2024-12-16 13:11:29.026098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:14.696 [2024-12-16 13:11:29.026121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:14.696 passed 00:07:14.696 Test: blockdev nvme admin passthru ...passed 00:07:14.696 Test: blockdev copy ...passed 00:07:14.696 Suite: bdevio tests on: Nvme2n3 00:07:14.696 Test: blockdev write read block ...passed 00:07:14.696 Test: blockdev write zeroes read block ...passed 00:07:14.696 Test: blockdev write zeroes read no split ...passed 00:07:14.696 Test: blockdev write zeroes read split ...passed 00:07:14.696 Test: blockdev write zeroes read split partial ...passed 00:07:14.696 Test: blockdev reset ...[2024-12-16 13:11:29.068673] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:14.696 [2024-12-16 13:11:29.071181] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:14.696 passed 00:07:14.696 Test: blockdev write read 8 blocks ...passed 00:07:14.696 Test: blockdev write read size > 128k ...passed 00:07:14.696 Test: blockdev write read invalid size ...passed 00:07:14.696 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:14.696 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:14.696 Test: blockdev write read max offset ...passed 00:07:14.696 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:14.696 Test: blockdev writev readv 8 blocks ...passed 00:07:14.696 Test: blockdev writev readv 30 x 1block ...passed 00:07:14.696 Test: blockdev writev readv block ...passed 00:07:14.696 Test: blockdev writev readv size > 128k ...passed 00:07:14.696 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:14.696 Test: blockdev comparev and writev ...[2024-12-16 13:11:29.077455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x278e0a000 len:0x1000 00:07:14.696 [2024-12-16 13:11:29.077491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:14.696 passed 00:07:14.696 Test: blockdev nvme passthru rw ...passed 00:07:14.696 Test: blockdev nvme passthru vendor specific ...[2024-12-16 13:11:29.078128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:14.696 [2024-12-16 13:11:29.078149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:14.696 passed 00:07:14.696 Test: blockdev nvme admin passthru ...passed 00:07:14.696 Test: blockdev copy ...passed 00:07:14.696 Suite: bdevio tests on: Nvme2n2 00:07:14.696 Test: blockdev write read block ...passed 00:07:14.696 Test: blockdev write zeroes read block ...passed 00:07:14.696 Test: blockdev write zeroes read no split ...passed 00:07:14.696 Test: blockdev write zeroes read split ...passed 00:07:14.696 Test: blockdev write zeroes read split partial ...passed 00:07:14.696 Test: blockdev reset ...[2024-12-16 13:11:29.120021] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:14.696 [2024-12-16 13:11:29.122501] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:14.696 passed 00:07:14.696 Test: blockdev write read 8 blocks ...passed 00:07:14.696 Test: blockdev write read size > 128k ...passed 00:07:14.696 Test: blockdev write read invalid size ...passed 00:07:14.696 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:14.696 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:14.696 Test: blockdev write read max offset ...passed 00:07:14.696 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:14.696 Test: blockdev writev readv 8 blocks ...passed 00:07:14.696 Test: blockdev writev readv 30 x 1block ...passed 00:07:14.696 Test: blockdev writev readv block ...passed 00:07:14.696 Test: blockdev writev readv size > 128k ...passed 00:07:14.696 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:14.696 Test: blockdev comparev and writev ...[2024-12-16 13:11:29.129053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x266006000 len:0x1000 00:07:14.696 [2024-12-16 13:11:29.129088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:14.696 passed 00:07:14.696 Test: blockdev nvme passthru rw ...passed 00:07:14.696 Test: blockdev nvme passthru vendor specific ...[2024-12-16 13:11:29.129671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:14.696 [2024-12-16 13:11:29.129693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:14.696 passed 00:07:14.696 Test: blockdev nvme admin passthru ...passed 00:07:14.696 Test: blockdev copy ...passed 00:07:14.696 Suite: bdevio tests on: Nvme2n1 00:07:14.696 Test: blockdev write read block ...passed 00:07:14.696 Test: blockdev write zeroes read block ...passed 00:07:14.696 Test: blockdev write zeroes read no split ...passed 00:07:14.696 Test: blockdev write zeroes read split ...passed 00:07:14.696 Test: blockdev write zeroes read split partial ...passed 00:07:14.696 Test: blockdev reset ...[2024-12-16 13:11:29.170927] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:14.696 [2024-12-16 13:11:29.173399] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:14.696 passed 00:07:14.696 Test: blockdev write read 8 blocks ...passed 00:07:14.696 Test: blockdev write read size > 128k ...passed 00:07:14.696 Test: blockdev write read invalid size ...passed 00:07:14.696 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:14.696 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:14.696 Test: blockdev write read max offset ...passed 00:07:14.696 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:14.696 Test: blockdev writev readv 8 blocks ...passed 00:07:14.696 Test: blockdev writev readv 30 x 1block ...passed 00:07:14.696 Test: blockdev writev readv block ...passed 00:07:14.696 Test: blockdev writev readv size > 128k ...passed 00:07:14.696 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:14.696 Test: blockdev comparev and writev ...[2024-12-16 13:11:29.179444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x266001000 len:0x1000 00:07:14.696 [2024-12-16 13:11:29.179488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:14.696 passed 00:07:14.696 Test: blockdev nvme passthru rw ...passed 00:07:14.696 Test: blockdev nvme passthru vendor specific ...[2024-12-16 13:11:29.180045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:14.696 [2024-12-16 13:11:29.180071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:14.696 passed 00:07:14.696 Test: blockdev nvme admin passthru ...passed 00:07:14.696 Test: blockdev copy ...passed 00:07:14.696 Suite: bdevio tests on: Nvme1n1 00:07:14.696 Test: blockdev write read block ...passed 00:07:14.696 Test: blockdev write zeroes read block ...passed 00:07:14.696 Test: blockdev write zeroes read no split ...passed 00:07:14.696 Test: blockdev write zeroes read split ...passed 00:07:14.696 Test: blockdev write zeroes read split partial ...passed 00:07:14.696 Test: blockdev reset ...[2024-12-16 13:11:29.222015] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:07:14.696 [2024-12-16 13:11:29.224306] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:14.696 passed 00:07:14.696 Test: blockdev write read 8 blocks ...passed 00:07:14.696 Test: blockdev write read size > 128k ...passed 00:07:14.696 Test: blockdev write read invalid size ...passed 00:07:14.696 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:14.696 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:14.696 Test: blockdev write read max offset ...passed 00:07:14.696 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:14.696 Test: blockdev writev readv 8 blocks ...passed 00:07:14.697 Test: blockdev writev readv 30 x 1block ...passed 00:07:14.697 Test: blockdev writev readv block ...passed 00:07:14.697 Test: blockdev writev readv size > 128k ...passed 00:07:14.697 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:14.697 Test: blockdev comparev and writev ...[2024-12-16 13:11:29.230611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x276806000 len:0x1000 00:07:14.697 [2024-12-16 13:11:29.230656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:14.697 passed 00:07:14.697 Test: blockdev nvme passthru rw ...passed 00:07:14.697 Test: blockdev nvme passthru vendor specific ...passed 00:07:14.697 Test: blockdev nvme admin passthru ...[2024-12-16 13:11:29.231209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:14.697 [2024-12-16 13:11:29.231229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:14.697 passed 00:07:14.697 Test: blockdev copy ...passed 00:07:14.697 Suite: bdevio tests on: Nvme0n1 00:07:14.697 Test: blockdev write read block ...passed 00:07:14.697 Test: blockdev write zeroes read block ...passed 00:07:14.697 Test: blockdev write zeroes read no split ...passed 00:07:14.697 Test: blockdev write zeroes read split ...passed 00:07:14.955 Test: blockdev write zeroes read split partial ...passed 00:07:14.955 Test: blockdev reset ...[2024-12-16 13:11:29.275148] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:07:14.955 [2024-12-16 13:11:29.277480] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:14.955 passed 00:07:14.955 Test: blockdev write read 8 blocks ...passed 00:07:14.955 Test: blockdev write read size > 128k ...passed 00:07:14.955 Test: blockdev write read invalid size ...passed 00:07:14.955 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:14.955 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:14.955 Test: blockdev write read max offset ...passed 00:07:14.955 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:14.955 Test: blockdev writev readv 8 blocks ...passed 00:07:14.955 Test: blockdev writev readv 30 x 1block ...passed 00:07:14.955 Test: blockdev writev readv block ...passed 00:07:14.955 Test: blockdev writev readv size > 128k ...passed 00:07:14.955 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:14.955 Test: blockdev comparev and writev ...[2024-12-16 13:11:29.282753] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:14.955 separate metadata which is not supported yet. 
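The skip above is expected: Nvme0n1 is the one namespace in this rig with separate (non-interleaved) metadata, which the comparev_and_writev test does not support yet; the bdev dump earlier shows "md_size": 64 with "md_interleave": false for it and for no other bdev. A sketch of checking that condition by hand, assuming bdev_get_bdevs accepts the -b name filter as in current rpc.py:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
    | jq '.[0] | {md_size, md_interleave}'
  # -> { "md_size": 64, "md_interleave": false }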
00:07:14.955 passed 00:07:14.955 Test: blockdev nvme passthru rw ...passed 00:07:14.955 Test: blockdev nvme passthru vendor specific ...passed 00:07:14.955 Test: blockdev nvme admin passthru ...[2024-12-16 13:11:29.283084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:14.955 [2024-12-16 13:11:29.283112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:14.955 passed 00:07:14.955 Test: blockdev copy ...passed 00:07:14.955 00:07:14.955 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.955 suites 6 6 n/a 0 0 00:07:14.955 tests 138 138 138 0 0 00:07:14.955 asserts 893 893 893 0 n/a 00:07:14.955 00:07:14.955 Elapsed time = 0.883 seconds 00:07:14.955 0 00:07:14.955 13:11:29 -- bdev/blockdev.sh@293 -- # killprocess 60270 00:07:14.955 13:11:29 -- common/autotest_common.sh@936 -- # '[' -z 60270 ']' 00:07:14.955 13:11:29 -- common/autotest_common.sh@940 -- # kill -0 60270 00:07:14.955 13:11:29 -- common/autotest_common.sh@941 -- # uname 00:07:14.955 13:11:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:14.955 13:11:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60270 00:07:14.955 13:11:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:14.955 13:11:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:14.956 13:11:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60270' 00:07:14.956 killing process with pid 60270 00:07:14.956 13:11:29 -- common/autotest_common.sh@955 -- # kill 60270 00:07:14.956 13:11:29 -- common/autotest_common.sh@960 -- # wait 60270 00:07:15.524 13:11:29 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:07:15.524 00:07:15.524 real 0m2.559s 00:07:15.524 user 0m6.744s 00:07:15.524 sys 0m0.294s 00:07:15.524 13:11:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.524 ************************************ 00:07:15.524 END TEST bdev_bounds 00:07:15.524 13:11:29 -- common/autotest_common.sh@10 -- # set +x 00:07:15.524 ************************************ 00:07:15.524 13:11:29 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:15.524 13:11:29 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:07:15.524 13:11:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.524 13:11:29 -- common/autotest_common.sh@10 -- # set +x 00:07:15.524 ************************************ 00:07:15.524 START TEST bdev_nbd 00:07:15.524 ************************************ 00:07:15.524 13:11:29 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:15.524 13:11:29 -- bdev/blockdev.sh@298 -- # uname -s 00:07:15.524 13:11:29 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:07:15.524 13:11:29 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.524 13:11:29 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:15.524 13:11:29 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:15.524 13:11:29 -- bdev/blockdev.sh@302 -- # local bdev_all 00:07:15.524 13:11:29 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:07:15.524 13:11:29 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd 
]] 00:07:15.524 13:11:29 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:15.524 13:11:29 -- bdev/blockdev.sh@309 -- # local nbd_all 00:07:15.524 13:11:29 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:07:15.524 13:11:29 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:15.524 13:11:29 -- bdev/blockdev.sh@312 -- # local nbd_list 00:07:15.524 13:11:29 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:15.524 13:11:29 -- bdev/blockdev.sh@313 -- # local bdev_list 00:07:15.524 13:11:29 -- bdev/blockdev.sh@316 -- # nbd_pid=60326 00:07:15.524 13:11:29 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:15.524 13:11:29 -- bdev/blockdev.sh@318 -- # waitforlisten 60326 /var/tmp/spdk-nbd.sock 00:07:15.524 13:11:29 -- common/autotest_common.sh@829 -- # '[' -z 60326 ']' 00:07:15.524 13:11:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:15.524 13:11:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:15.524 13:11:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:15.524 13:11:29 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:15.524 13:11:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.524 13:11:29 -- common/autotest_common.sh@10 -- # set +x 00:07:15.524 [2024-12-16 13:11:29.994081] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
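Unlike the earlier tests, bdev_nbd runs against a dedicated bdev_svc process rather than spdk_tgt, with the JSON-RPC socket moved to /var/tmp/spdk-nbd.sock so every nbd_* call below has to name it explicitly via -s. The launch above, reduced to a sketch (the trailing '' extra-args slot dropped):

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock \
      -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  # subsequent RPCs: /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock ...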
00:07:15.524 [2024-12-16 13:11:29.994185] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.785 [2024-12-16 13:11:30.153557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.785 [2024-12-16 13:11:30.336960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.170 13:11:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.170 13:11:31 -- common/autotest_common.sh@862 -- # return 0 00:07:17.170 13:11:31 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:17.170 13:11:31 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.170 13:11:31 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:17.170 13:11:31 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:17.170 13:11:31 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:17.170 13:11:31 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.170 13:11:31 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:17.170 13:11:31 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:17.170 13:11:31 -- bdev/nbd_common.sh@24 -- # local i 00:07:17.170 13:11:31 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:17.170 13:11:31 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:17.170 13:11:31 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:17.170 13:11:31 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:17.170 13:11:31 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:17.170 13:11:31 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:17.170 13:11:31 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:17.170 13:11:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:17.170 13:11:31 -- common/autotest_common.sh@867 -- # local i 00:07:17.170 13:11:31 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:17.170 13:11:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:17.170 13:11:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:17.170 13:11:31 -- common/autotest_common.sh@871 -- # break 00:07:17.170 13:11:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:17.170 13:11:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:17.170 13:11:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.170 1+0 records in 00:07:17.170 1+0 records out 00:07:17.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421197 s, 9.7 MB/s 00:07:17.170 13:11:31 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.170 13:11:31 -- common/autotest_common.sh@884 -- # size=4096 00:07:17.170 13:11:31 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.170 13:11:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:17.170 13:11:31 -- common/autotest_common.sh@887 -- # return 0 00:07:17.170 13:11:31 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:17.170 13:11:31 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:17.170 13:11:31 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:17.429 13:11:31 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:17.429 13:11:31 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:17.429 13:11:31 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:17.429 13:11:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:17.429 13:11:31 -- common/autotest_common.sh@867 -- # local i 00:07:17.429 13:11:31 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:17.429 13:11:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:17.429 13:11:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:17.429 13:11:31 -- common/autotest_common.sh@871 -- # break 00:07:17.429 13:11:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:17.429 13:11:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:17.429 13:11:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.429 1+0 records in 00:07:17.429 1+0 records out 00:07:17.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479733 s, 8.5 MB/s 00:07:17.429 13:11:31 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.429 13:11:31 -- common/autotest_common.sh@884 -- # size=4096 00:07:17.429 13:11:31 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.429 13:11:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:17.429 13:11:31 -- common/autotest_common.sh@887 -- # return 0 00:07:17.429 13:11:31 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:17.429 13:11:31 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:17.429 13:11:31 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:17.688 13:11:32 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:17.688 13:11:32 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:17.688 13:11:32 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:17.688 13:11:32 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:07:17.688 13:11:32 -- common/autotest_common.sh@867 -- # local i 00:07:17.688 13:11:32 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:17.688 13:11:32 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:17.688 13:11:32 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:07:17.688 13:11:32 -- common/autotest_common.sh@871 -- # break 00:07:17.688 13:11:32 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:17.688 13:11:32 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:17.688 13:11:32 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.688 1+0 records in 00:07:17.688 1+0 records out 00:07:17.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498008 s, 8.2 MB/s 00:07:17.688 13:11:32 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.688 13:11:32 -- common/autotest_common.sh@884 -- # size=4096 00:07:17.688 13:11:32 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.688 13:11:32 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:17.688 13:11:32 -- common/autotest_common.sh@887 -- # return 0 
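Each round of this start/stop phase repeats the pattern just seen for nbd0, nbd1 and nbd2: nbd_start_disk with no device argument prints the /dev/nbdN it picked, waitfornbd polls /proc/partitions until the kernel exposes it, and a single O_DIRECT block is pulled through dd as a smoke test. Condensed into a sketch (the script bounds both polls to 20 tries; the unbounded loop here is a simplification):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  dev=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2)        # e.g. /dev/nbd3
  until grep -q -w "$(basename "$dev")" /proc/partitions; do sleep 0.1; done
  dd if="$dev" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
  [ "$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)" -eq 4096 ]  # full block read back?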
00:07:17.688 13:11:32 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:17.688 13:11:32 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:17.688 13:11:32 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:17.948 13:11:32 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:17.948 13:11:32 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:17.948 13:11:32 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:17.948 13:11:32 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:07:17.948 13:11:32 -- common/autotest_common.sh@867 -- # local i 00:07:17.948 13:11:32 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:17.948 13:11:32 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:17.948 13:11:32 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:07:17.948 13:11:32 -- common/autotest_common.sh@871 -- # break 00:07:17.948 13:11:32 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:17.948 13:11:32 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:17.948 13:11:32 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.948 1+0 records in 00:07:17.948 1+0 records out 00:07:17.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103445 s, 4.0 MB/s 00:07:17.948 13:11:32 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.948 13:11:32 -- common/autotest_common.sh@884 -- # size=4096 00:07:17.948 13:11:32 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.948 13:11:32 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:17.948 13:11:32 -- common/autotest_common.sh@887 -- # return 0 00:07:17.948 13:11:32 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:17.948 13:11:32 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:17.948 13:11:32 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:18.209 13:11:32 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:18.209 13:11:32 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:18.209 13:11:32 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:18.209 13:11:32 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:07:18.209 13:11:32 -- common/autotest_common.sh@867 -- # local i 00:07:18.209 13:11:32 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:18.209 13:11:32 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:18.209 13:11:32 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:07:18.209 13:11:32 -- common/autotest_common.sh@871 -- # break 00:07:18.209 13:11:32 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:18.209 13:11:32 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:18.209 13:11:32 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.209 1+0 records in 00:07:18.209 1+0 records out 00:07:18.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00151881 s, 2.7 MB/s 00:07:18.209 13:11:32 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.209 13:11:32 -- common/autotest_common.sh@884 -- # size=4096 00:07:18.209 13:11:32 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.209 13:11:32 -- common/autotest_common.sh@886 -- # '[' 4096 
'!=' 0 ']' 00:07:18.209 13:11:32 -- common/autotest_common.sh@887 -- # return 0 00:07:18.209 13:11:32 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:18.209 13:11:32 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:18.209 13:11:32 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:18.471 13:11:32 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:18.471 13:11:32 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:18.471 13:11:32 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:18.471 13:11:32 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:07:18.471 13:11:32 -- common/autotest_common.sh@867 -- # local i 00:07:18.471 13:11:32 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:18.471 13:11:32 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:18.471 13:11:32 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:07:18.471 13:11:32 -- common/autotest_common.sh@871 -- # break 00:07:18.471 13:11:32 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:18.471 13:11:32 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:18.471 13:11:32 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.471 1+0 records in 00:07:18.471 1+0 records out 00:07:18.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455158 s, 9.0 MB/s 00:07:18.471 13:11:32 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.471 13:11:32 -- common/autotest_common.sh@884 -- # size=4096 00:07:18.471 13:11:32 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.471 13:11:32 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:18.471 13:11:32 -- common/autotest_common.sh@887 -- # return 0 00:07:18.471 13:11:32 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:18.471 13:11:32 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:18.471 13:11:32 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.471 13:11:33 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:18.471 { 00:07:18.471 "nbd_device": "/dev/nbd0", 00:07:18.471 "bdev_name": "Nvme0n1" 00:07:18.471 }, 00:07:18.471 { 00:07:18.471 "nbd_device": "/dev/nbd1", 00:07:18.471 "bdev_name": "Nvme1n1" 00:07:18.471 }, 00:07:18.471 { 00:07:18.471 "nbd_device": "/dev/nbd2", 00:07:18.471 "bdev_name": "Nvme2n1" 00:07:18.471 }, 00:07:18.471 { 00:07:18.471 "nbd_device": "/dev/nbd3", 00:07:18.471 "bdev_name": "Nvme2n2" 00:07:18.471 }, 00:07:18.471 { 00:07:18.471 "nbd_device": "/dev/nbd4", 00:07:18.471 "bdev_name": "Nvme2n3" 00:07:18.471 }, 00:07:18.471 { 00:07:18.471 "nbd_device": "/dev/nbd5", 00:07:18.471 "bdev_name": "Nvme3n1" 00:07:18.471 } 00:07:18.471 ]' 00:07:18.471 13:11:33 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:18.471 13:11:33 -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:18.471 { 00:07:18.471 "nbd_device": "/dev/nbd0", 00:07:18.471 "bdev_name": "Nvme0n1" 00:07:18.471 }, 00:07:18.471 { 00:07:18.471 "nbd_device": "/dev/nbd1", 00:07:18.471 "bdev_name": "Nvme1n1" 00:07:18.471 }, 00:07:18.471 { 00:07:18.471 "nbd_device": "/dev/nbd2", 00:07:18.471 "bdev_name": "Nvme2n1" 00:07:18.471 }, 00:07:18.471 { 00:07:18.471 "nbd_device": "/dev/nbd3", 00:07:18.471 "bdev_name": "Nvme2n2" 00:07:18.471 }, 00:07:18.471 { 00:07:18.471 "nbd_device": 
"/dev/nbd4", 00:07:18.471 "bdev_name": "Nvme2n3" 00:07:18.471 }, 00:07:18.471 { 00:07:18.471 "nbd_device": "/dev/nbd5", 00:07:18.471 "bdev_name": "Nvme3n1" 00:07:18.471 } 00:07:18.471 ]' 00:07:18.471 13:11:33 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:18.732 13:11:33 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:18.732 13:11:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.732 13:11:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:18.732 13:11:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.732 13:11:33 -- bdev/nbd_common.sh@51 -- # local i 00:07:18.732 13:11:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.732 13:11:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.732 13:11:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.732 13:11:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.732 13:11:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.732 13:11:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.732 13:11:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.732 13:11:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.732 13:11:33 -- bdev/nbd_common.sh@41 -- # break 00:07:18.732 13:11:33 -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.732 13:11:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.732 13:11:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:18.993 13:11:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:18.993 13:11:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:18.993 13:11:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:18.993 13:11:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.993 13:11:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.993 13:11:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:18.993 13:11:33 -- bdev/nbd_common.sh@41 -- # break 00:07:18.993 13:11:33 -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.993 13:11:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.993 13:11:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:19.254 13:11:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:19.254 13:11:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:19.254 13:11:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:19.254 13:11:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.254 13:11:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.254 13:11:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:19.254 13:11:33 -- bdev/nbd_common.sh@41 -- # break 00:07:19.254 13:11:33 -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.254 13:11:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.254 13:11:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:19.254 13:11:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:19.515 13:11:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:19.515 13:11:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:19.515 
13:11:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.515 13:11:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.515 13:11:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:19.515 13:11:33 -- bdev/nbd_common.sh@41 -- # break 00:07:19.515 13:11:33 -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.515 13:11:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.515 13:11:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:19.515 13:11:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:19.515 13:11:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:19.515 13:11:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:19.515 13:11:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.515 13:11:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.515 13:11:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:19.515 13:11:34 -- bdev/nbd_common.sh@41 -- # break 00:07:19.515 13:11:34 -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.515 13:11:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.515 13:11:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:19.775 13:11:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:19.775 13:11:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:19.775 13:11:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:19.775 13:11:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.775 13:11:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.775 13:11:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:19.775 13:11:34 -- bdev/nbd_common.sh@41 -- # break 00:07:19.775 13:11:34 -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.775 13:11:34 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.775 13:11:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.775 13:11:34 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@65 -- # true 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@65 -- # count=0 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@122 -- # count=0 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@127 -- # return 0 00:07:20.034 13:11:34 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@12 -- # local i 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:20.034 13:11:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:20.291 /dev/nbd0 00:07:20.291 13:11:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:20.291 13:11:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:20.291 13:11:34 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:20.291 13:11:34 -- common/autotest_common.sh@867 -- # local i 00:07:20.291 13:11:34 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:20.291 13:11:34 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:20.291 13:11:34 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:20.291 13:11:34 -- common/autotest_common.sh@871 -- # break 00:07:20.291 13:11:34 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:20.291 13:11:34 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:20.291 13:11:34 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.291 1+0 records in 00:07:20.291 1+0 records out 00:07:20.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415815 s, 9.9 MB/s 00:07:20.291 13:11:34 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.291 13:11:34 -- common/autotest_common.sh@884 -- # size=4096 00:07:20.291 13:11:34 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.291 13:11:34 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:20.291 13:11:34 -- common/autotest_common.sh@887 -- # return 0 00:07:20.291 13:11:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.291 13:11:34 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:20.291 13:11:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:20.291 /dev/nbd1 00:07:20.549 13:11:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:20.549 13:11:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:20.549 13:11:34 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:20.549 13:11:34 -- common/autotest_common.sh@867 -- # local i 00:07:20.550 13:11:34 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:20.550 13:11:34 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:20.550 13:11:34 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:20.550 13:11:34 -- common/autotest_common.sh@871 -- # break 
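The data-verify phase starting above differs from start/stop in one detail: each bdev is pinned to an explicit device (Nvme0n1 -> /dev/nbd0, Nvme1n1 -> /dev/nbd1, Nvme2n1 -> /dev/nbd10, ...), so the bdev-to-device mapping is deterministic and nbd_get_disks can confirm it, using the same jq pull seen in the count check earlier. Sketch:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
      nbd_start_disk Nvme0n1 /dev/nbd0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
      nbd_get_disks | jq -r '.[] | .nbd_device'     # -> /dev/nbd0 (plus any others)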
00:07:20.550 13:11:34 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:20.550 13:11:34 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:20.550 13:11:34 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.550 1+0 records in 00:07:20.550 1+0 records out 00:07:20.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347711 s, 11.8 MB/s 00:07:20.550 13:11:34 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.550 13:11:34 -- common/autotest_common.sh@884 -- # size=4096 00:07:20.550 13:11:34 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.550 13:11:34 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:20.550 13:11:34 -- common/autotest_common.sh@887 -- # return 0 00:07:20.550 13:11:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.550 13:11:34 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:20.550 13:11:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:20.550 /dev/nbd10 00:07:20.550 13:11:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:20.550 13:11:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:20.550 13:11:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:07:20.550 13:11:35 -- common/autotest_common.sh@867 -- # local i 00:07:20.550 13:11:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:20.550 13:11:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:20.550 13:11:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:07:20.550 13:11:35 -- common/autotest_common.sh@871 -- # break 00:07:20.550 13:11:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:20.550 13:11:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:20.550 13:11:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.550 1+0 records in 00:07:20.550 1+0 records out 00:07:20.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526737 s, 7.8 MB/s 00:07:20.550 13:11:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.550 13:11:35 -- common/autotest_common.sh@884 -- # size=4096 00:07:20.550 13:11:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.550 13:11:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:20.550 13:11:35 -- common/autotest_common.sh@887 -- # return 0 00:07:20.550 13:11:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.550 13:11:35 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:20.550 13:11:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:20.808 /dev/nbd11 00:07:20.808 13:11:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:20.808 13:11:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:20.808 13:11:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:07:20.808 13:11:35 -- common/autotest_common.sh@867 -- # local i 00:07:20.808 13:11:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:20.808 13:11:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:20.808 13:11:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:07:20.808 13:11:35 -- 
common/autotest_common.sh@871 -- # break 00:07:20.808 13:11:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:20.809 13:11:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:20.809 13:11:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.809 1+0 records in 00:07:20.809 1+0 records out 00:07:20.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00082748 s, 4.9 MB/s 00:07:20.809 13:11:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.809 13:11:35 -- common/autotest_common.sh@884 -- # size=4096 00:07:20.809 13:11:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.809 13:11:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:20.809 13:11:35 -- common/autotest_common.sh@887 -- # return 0 00:07:20.809 13:11:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.809 13:11:35 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:20.809 13:11:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:21.069 /dev/nbd12 00:07:21.069 13:11:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:21.069 13:11:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:21.069 13:11:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:07:21.069 13:11:35 -- common/autotest_common.sh@867 -- # local i 00:07:21.069 13:11:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:21.069 13:11:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:21.069 13:11:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:07:21.069 13:11:35 -- common/autotest_common.sh@871 -- # break 00:07:21.069 13:11:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:21.069 13:11:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:21.069 13:11:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:21.069 1+0 records in 00:07:21.069 1+0 records out 00:07:21.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00069316 s, 5.9 MB/s 00:07:21.069 13:11:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.069 13:11:35 -- common/autotest_common.sh@884 -- # size=4096 00:07:21.069 13:11:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.069 13:11:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:21.069 13:11:35 -- common/autotest_common.sh@887 -- # return 0 00:07:21.069 13:11:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:21.069 13:11:35 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:21.069 13:11:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:21.330 /dev/nbd13 00:07:21.330 13:11:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:21.330 13:11:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:21.330 13:11:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:07:21.330 13:11:35 -- common/autotest_common.sh@867 -- # local i 00:07:21.330 13:11:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:21.330 13:11:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:21.330 13:11:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 
00:07:21.330 13:11:35 -- common/autotest_common.sh@871 -- # break 00:07:21.330 13:11:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:21.330 13:11:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:21.330 13:11:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:21.330 1+0 records in 00:07:21.330 1+0 records out 00:07:21.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326201 s, 12.6 MB/s 00:07:21.330 13:11:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.330 13:11:35 -- common/autotest_common.sh@884 -- # size=4096 00:07:21.330 13:11:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.330 13:11:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:21.330 13:11:35 -- common/autotest_common.sh@887 -- # return 0 00:07:21.330 13:11:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:21.330 13:11:35 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:21.330 13:11:35 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:21.330 13:11:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.330 13:11:35 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:21.588 13:11:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:21.588 { 00:07:21.588 "nbd_device": "/dev/nbd0", 00:07:21.588 "bdev_name": "Nvme0n1" 00:07:21.588 }, 00:07:21.588 { 00:07:21.589 "nbd_device": "/dev/nbd1", 00:07:21.589 "bdev_name": "Nvme1n1" 00:07:21.589 }, 00:07:21.589 { 00:07:21.589 "nbd_device": "/dev/nbd10", 00:07:21.589 "bdev_name": "Nvme2n1" 00:07:21.589 }, 00:07:21.589 { 00:07:21.589 "nbd_device": "/dev/nbd11", 00:07:21.589 "bdev_name": "Nvme2n2" 00:07:21.589 }, 00:07:21.589 { 00:07:21.589 "nbd_device": "/dev/nbd12", 00:07:21.589 "bdev_name": "Nvme2n3" 00:07:21.589 }, 00:07:21.589 { 00:07:21.589 "nbd_device": "/dev/nbd13", 00:07:21.589 "bdev_name": "Nvme3n1" 00:07:21.589 } 00:07:21.589 ]' 00:07:21.589 13:11:35 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:21.589 { 00:07:21.589 "nbd_device": "/dev/nbd0", 00:07:21.589 "bdev_name": "Nvme0n1" 00:07:21.589 }, 00:07:21.589 { 00:07:21.589 "nbd_device": "/dev/nbd1", 00:07:21.589 "bdev_name": "Nvme1n1" 00:07:21.589 }, 00:07:21.589 { 00:07:21.589 "nbd_device": "/dev/nbd10", 00:07:21.589 "bdev_name": "Nvme2n1" 00:07:21.589 }, 00:07:21.589 { 00:07:21.589 "nbd_device": "/dev/nbd11", 00:07:21.589 "bdev_name": "Nvme2n2" 00:07:21.589 }, 00:07:21.589 { 00:07:21.589 "nbd_device": "/dev/nbd12", 00:07:21.589 "bdev_name": "Nvme2n3" 00:07:21.589 }, 00:07:21.589 { 00:07:21.589 "nbd_device": "/dev/nbd13", 00:07:21.589 "bdev_name": "Nvme3n1" 00:07:21.589 } 00:07:21.589 ]' 00:07:21.589 13:11:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:21.589 13:11:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:21.589 /dev/nbd1 00:07:21.589 /dev/nbd10 00:07:21.589 /dev/nbd11 00:07:21.589 /dev/nbd12 00:07:21.589 /dev/nbd13' 00:07:21.589 13:11:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.589 13:11:35 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:21.589 /dev/nbd1 00:07:21.589 /dev/nbd10 00:07:21.589 /dev/nbd11 00:07:21.589 /dev/nbd12 00:07:21.589 /dev/nbd13' 00:07:21.589 13:11:35 -- bdev/nbd_common.sh@65 -- # count=6 00:07:21.589 13:11:35 -- bdev/nbd_common.sh@66 -- # echo 6 00:07:21.589 13:11:35 -- bdev/nbd_common.sh@95 -- # 
count=6 00:07:21.589 13:11:35 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:21.589 13:11:35 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:21.589 13:11:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:21.589 13:11:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:21.589 13:11:35 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:21.589 13:11:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:21.589 13:11:35 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:21.589 13:11:35 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:21.589 256+0 records in 00:07:21.589 256+0 records out 00:07:21.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00560021 s, 187 MB/s 00:07:21.589 13:11:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.589 13:11:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:21.589 256+0 records in 00:07:21.589 256+0 records out 00:07:21.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0848252 s, 12.4 MB/s 00:07:21.589 13:11:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.589 13:11:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:21.849 256+0 records in 00:07:21.849 256+0 records out 00:07:21.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142365 s, 7.4 MB/s 00:07:21.849 13:11:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.849 13:11:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:22.111 256+0 records in 00:07:22.111 256+0 records out 00:07:22.111 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.200778 s, 5.2 MB/s 00:07:22.111 13:11:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.111 13:11:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:22.111 256+0 records in 00:07:22.111 256+0 records out 00:07:22.111 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.225147 s, 4.7 MB/s 00:07:22.111 13:11:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.111 13:11:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:22.371 256+0 records in 00:07:22.371 256+0 records out 00:07:22.371 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127207 s, 8.2 MB/s 00:07:22.371 13:11:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.371 13:11:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:22.632 256+0 records in 00:07:22.632 256+0 records out 00:07:22.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.223111 s, 4.7 MB/s 00:07:22.632 13:11:37 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:22.632 13:11:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:22.632 13:11:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:22.632 13:11:37 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:07:22.632 13:11:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:22.632 13:11:37 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:22.632 13:11:37 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:22.632 13:11:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.632 13:11:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:22.633 13:11:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.633 13:11:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:22.633 13:11:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.633 13:11:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:22.633 13:11:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.633 13:11:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:22.633 13:11:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.633 13:11:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:22.633 13:11:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.633 13:11:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:22.633 13:11:37 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:22.633 13:11:37 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:22.633 13:11:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.633 13:11:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:22.633 13:11:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:22.633 13:11:37 -- bdev/nbd_common.sh@51 -- # local i 00:07:22.633 13:11:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.633 13:11:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:22.894 13:11:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:22.894 13:11:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:22.894 13:11:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:22.894 13:11:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.894 13:11:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.894 13:11:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:22.894 13:11:37 -- bdev/nbd_common.sh@41 -- # break 00:07:22.894 13:11:37 -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.894 13:11:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.894 13:11:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:22.894 13:11:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:22.894 13:11:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:22.894 13:11:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:22.894 13:11:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.894 13:11:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.894 13:11:37 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:22.894 13:11:37 -- bdev/nbd_common.sh@41 -- # break 00:07:22.894 13:11:37 -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.894 13:11:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.894 13:11:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:23.170 13:11:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:23.170 13:11:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:23.170 13:11:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:23.170 13:11:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.170 13:11:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.170 13:11:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:23.170 13:11:37 -- bdev/nbd_common.sh@41 -- # break 00:07:23.170 13:11:37 -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.170 13:11:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.170 13:11:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:23.453 13:11:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:23.454 13:11:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:23.454 13:11:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:23.454 13:11:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.454 13:11:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.454 13:11:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:23.454 13:11:37 -- bdev/nbd_common.sh@41 -- # break 00:07:23.454 13:11:37 -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.454 13:11:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.454 13:11:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:23.454 13:11:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:23.454 13:11:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:23.454 13:11:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:23.454 13:11:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.454 13:11:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.454 13:11:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:23.454 13:11:38 -- bdev/nbd_common.sh@41 -- # break 00:07:23.454 13:11:38 -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.454 13:11:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.454 13:11:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:23.713 13:11:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:23.713 13:11:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:23.713 13:11:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:23.713 13:11:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.713 13:11:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.713 13:11:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:23.713 13:11:38 -- bdev/nbd_common.sh@41 -- # break 00:07:23.713 13:11:38 -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.713 13:11:38 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:23.713 13:11:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.713 13:11:38 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:23.973 13:11:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:23.973 13:11:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:23.973 13:11:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:23.973 13:11:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:23.973 13:11:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:23.973 13:11:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:23.973 13:11:38 -- bdev/nbd_common.sh@65 -- # true 00:07:23.973 13:11:38 -- bdev/nbd_common.sh@65 -- # count=0 00:07:23.973 13:11:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:23.973 13:11:38 -- bdev/nbd_common.sh@104 -- # count=0 00:07:23.973 13:11:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:23.973 13:11:38 -- bdev/nbd_common.sh@109 -- # return 0 00:07:23.973 13:11:38 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:23.973 13:11:38 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.973 13:11:38 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:23.973 13:11:38 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:07:23.973 13:11:38 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:07:23.973 13:11:38 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:24.235 malloc_lvol_verify 00:07:24.235 13:11:38 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:24.494 7eae8d28-b0e7-4739-b791-9649b2790c7b 00:07:24.494 13:11:38 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:24.494 269e0e2d-883e-4f9b-83e1-19681049270a 00:07:24.494 13:11:39 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:24.752 /dev/nbd0 00:07:24.752 13:11:39 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:07:24.752 mke2fs 1.47.0 (5-Feb-2023) 00:07:24.752 Discarding device blocks: 0/4096 done 00:07:24.752 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:24.752 00:07:24.752 Allocating group tables: 0/1 done 00:07:24.752 Writing inode tables: 0/1 done 00:07:24.752 Creating journal (1024 blocks): done 00:07:24.752 Writing superblocks and filesystem accounting information: 0/1 done 00:07:24.752 00:07:24.752 13:11:39 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:07:24.752 13:11:39 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:24.752 13:11:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.752 13:11:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:24.752 13:11:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:24.752 13:11:39 -- bdev/nbd_common.sh@51 -- # local i 00:07:24.752 13:11:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.752 13:11:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:25.010 13:11:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:25.010 13:11:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:25.010 13:11:39 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:07:25.010 13:11:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.010 13:11:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.010 13:11:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:25.010 13:11:39 -- bdev/nbd_common.sh@41 -- # break 00:07:25.010 13:11:39 -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.010 13:11:39 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:07:25.010 13:11:39 -- bdev/nbd_common.sh@147 -- # return 0 00:07:25.010 13:11:39 -- bdev/blockdev.sh@324 -- # killprocess 60326 00:07:25.010 13:11:39 -- common/autotest_common.sh@936 -- # '[' -z 60326 ']' 00:07:25.010 13:11:39 -- common/autotest_common.sh@940 -- # kill -0 60326 00:07:25.010 13:11:39 -- common/autotest_common.sh@941 -- # uname 00:07:25.010 13:11:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:25.010 13:11:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60326 00:07:25.010 13:11:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:25.010 13:11:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:25.010 killing process with pid 60326 00:07:25.010 13:11:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60326' 00:07:25.010 13:11:39 -- common/autotest_common.sh@955 -- # kill 60326 00:07:25.010 13:11:39 -- common/autotest_common.sh@960 -- # wait 60326 00:07:25.578 13:11:40 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:07:25.578 00:07:25.578 real 0m10.189s 00:07:25.578 user 0m14.056s 00:07:25.578 sys 0m3.014s 00:07:25.578 ************************************ 00:07:25.578 END TEST bdev_nbd 00:07:25.578 ************************************ 00:07:25.578 13:11:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.578 13:11:40 -- common/autotest_common.sh@10 -- # set +x 00:07:25.839 skipping fio tests on NVMe due to multi-ns failures. 00:07:25.839 13:11:40 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:07:25.839 13:11:40 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:07:25.839 13:11:40 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:25.839 13:11:40 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:25.839 13:11:40 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:25.839 13:11:40 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:07:25.839 13:11:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.839 13:11:40 -- common/autotest_common.sh@10 -- # set +x 00:07:25.839 ************************************ 00:07:25.839 START TEST bdev_verify 00:07:25.839 ************************************ 00:07:25.839 13:11:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:25.839 [2024-12-16 13:11:40.244270] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
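The data-verify pass that closed out bdev_nbd above is a plain write-then-compare over every exported device; reduced to its essentials (device list, sizes, and file path copied from the trace):

# Write-then-compare essence of nbd_dd_data_verify as traced above.
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256           # 1 MiB of random data
for dev in "${nbd_list[@]}"; do
  dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct  # write it to each device
done
for dev in "${nbd_list[@]}"; do
  cmp -b -n 1M "$tmp" "$dev"                             # read back byte-for-byte
done
rm "$tmp"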
00:07:25.839 [2024-12-16 13:11:40.244377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60707 ] 00:07:25.839 [2024-12-16 13:11:40.390866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:26.100 [2024-12-16 13:11:40.567018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.100 [2024-12-16 13:11:40.567097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.672 Running I/O for 5 seconds... 00:07:31.942 00:07:31.942 Latency(us) 00:07:31.942 [2024-12-16T13:11:46.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:31.942 [2024-12-16T13:11:46.516Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:31.942 Verification LBA range: start 0x0 length 0xbd0bd 00:07:31.942 Nvme0n1 : 5.04 3507.78 13.70 0.00 0.00 36367.42 9527.93 62511.26 00:07:31.942 [2024-12-16T13:11:46.516Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:31.942 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:31.942 Nvme0n1 : 5.04 3416.49 13.35 0.00 0.00 37355.13 8620.50 69770.63 00:07:31.942 [2024-12-16T13:11:46.516Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:31.942 Verification LBA range: start 0x0 length 0xa0000 00:07:31.942 Nvme1n1 : 5.05 3505.26 13.69 0.00 0.00 36354.71 12502.25 60494.77 00:07:31.942 [2024-12-16T13:11:46.516Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:31.942 Verification LBA range: start 0xa0000 length 0xa0000 00:07:31.942 Nvme1n1 : 5.04 3414.88 13.34 0.00 0.00 37338.89 9427.10 67350.84 00:07:31.942 [2024-12-16T13:11:46.516Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:31.942 Verification LBA range: start 0x0 length 0x80000 00:07:31.942 Nvme2n1 : 5.05 3508.75 13.71 0.00 0.00 36242.44 5520.15 53638.70 00:07:31.942 [2024-12-16T13:11:46.516Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:31.942 Verification LBA range: start 0x80000 length 0x80000 00:07:31.942 Nvme2n1 : 5.05 3418.28 13.35 0.00 0.00 37214.72 3037.34 57671.68 00:07:31.942 [2024-12-16T13:11:46.516Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:31.942 Verification LBA range: start 0x0 length 0x80000 00:07:31.942 Nvme2n2 : 5.06 3514.77 13.73 0.00 0.00 36153.78 2671.85 52832.10 00:07:31.942 [2024-12-16T13:11:46.516Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:31.942 Verification LBA range: start 0x80000 length 0x80000 00:07:31.942 Nvme2n2 : 5.05 3424.42 13.38 0.00 0.00 37129.45 3276.80 54848.59 00:07:31.942 [2024-12-16T13:11:46.516Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:31.942 Verification LBA range: start 0x0 length 0x80000 00:07:31.942 Nvme2n3 : 5.06 3513.94 13.73 0.00 0.00 36125.11 3453.24 55655.19 00:07:31.942 [2024-12-16T13:11:46.516Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:31.942 Verification LBA range: start 0x80000 length 0x80000 00:07:31.942 Nvme2n3 : 5.06 3423.35 13.37 0.00 0.00 37091.17 4083.40 52025.50 00:07:31.942 [2024-12-16T13:11:46.516Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:31.942 Verification LBA range: start 0x0 length 0x20000 00:07:31.942 Nvme3n1 : 
5.06 3513.13 13.72 0.00 0.00 36106.31 3856.54 55251.89 00:07:31.942 [2024-12-16T13:11:46.516Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:31.942 Verification LBA range: start 0x20000 length 0x20000 00:07:31.942 Nvme3n1 : 5.06 3428.81 13.39 0.00 0.00 36968.42 3327.21 48395.82 00:07:31.942 [2024-12-16T13:11:46.516Z] =================================================================================================================== 00:07:31.942 [2024-12-16T13:11:46.516Z] Total : 41589.85 162.46 0.00 0.00 36697.34 2671.85 69770.63 00:07:58.514 00:07:58.514 real 0m29.025s 00:07:58.514 user 0m56.773s 00:07:58.514 sys 0m0.329s 00:07:58.514 13:12:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:58.514 ************************************ 00:07:58.514 END TEST bdev_verify 00:07:58.514 ************************************ 00:07:58.514 13:12:09 -- common/autotest_common.sh@10 -- # set +x 00:07:58.514 13:12:09 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:58.514 13:12:09 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:07:58.514 13:12:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.514 13:12:09 -- common/autotest_common.sh@10 -- # set +x 00:07:58.514 ************************************ 00:07:58.514 START TEST bdev_verify_big_io 00:07:58.514 ************************************ 00:07:58.514 13:12:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:58.514 [2024-12-16 13:12:09.306597] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:58.514 [2024-12-16 13:12:09.306720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60875 ] 00:07:58.514 [2024-12-16 13:12:09.453170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:58.514 [2024-12-16 13:12:09.649952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.514 [2024-12-16 13:12:09.650054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.514 Running I/O for 5 seconds... 
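The Total row of the verify table above is internally consistent: at this run's 4 KiB I/O size, 41589.85 IOPS is exactly the reported 162.46 MiB/s. A quick cross-check usable on any row:

# Bandwidth follows from IOPS and I/O size: MiB/s = IOPS * bytes / 2^20.
awk 'BEGIN { printf "%.2f MiB/s\n", 41589.85 * 4096 / 2^20 }'   # prints 162.46 MiB/s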
00:08:01.815 00:08:01.815 Latency(us) 00:08:01.815 [2024-12-16T13:12:16.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.815 [2024-12-16T13:12:16.389Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:01.815 Verification LBA range: start 0x0 length 0xbd0b 00:08:01.815 Nvme0n1 : 5.36 275.51 17.22 0.00 0.00 457582.93 14619.57 793691.37 00:08:01.815 [2024-12-16T13:12:16.389Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:01.815 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:01.815 Nvme0n1 : 5.34 319.75 19.98 0.00 0.00 393151.21 42547.99 593655.34 00:08:01.815 [2024-12-16T13:12:16.389Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:01.815 Verification LBA range: start 0x0 length 0xa000 00:08:01.815 Nvme1n1 : 5.37 275.42 17.21 0.00 0.00 450082.00 15123.69 722710.84 00:08:01.815 [2024-12-16T13:12:16.389Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:01.815 Verification LBA range: start 0xa000 length 0xa000 00:08:01.815 Nvme1n1 : 5.34 319.66 19.98 0.00 0.00 388493.78 43152.94 542033.13 00:08:01.815 [2024-12-16T13:12:16.389Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:01.815 Verification LBA range: start 0x0 length 0x8000 00:08:01.815 Nvme2n1 : 5.37 275.34 17.21 0.00 0.00 442509.86 15728.64 645277.54 00:08:01.815 [2024-12-16T13:12:16.389Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:01.815 Verification LBA range: start 0x8000 length 0x8000 00:08:01.815 Nvme2n1 : 5.35 326.99 20.44 0.00 0.00 378238.41 3453.24 490410.93 00:08:01.815 [2024-12-16T13:12:16.389Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:01.815 Verification LBA range: start 0x0 length 0x8000 00:08:01.815 Nvme2n2 : 5.38 283.16 17.70 0.00 0.00 423854.59 11897.30 567844.23 00:08:01.815 [2024-12-16T13:12:16.389Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:01.815 Verification LBA range: start 0x8000 length 0x8000 00:08:01.815 Nvme2n2 : 5.35 326.89 20.43 0.00 0.00 373718.46 4108.60 442015.11 00:08:01.815 [2024-12-16T13:12:16.389Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:01.815 Verification LBA range: start 0x0 length 0x8000 00:08:01.815 Nvme2n3 : 5.41 297.56 18.60 0.00 0.00 396478.49 15930.29 493637.32 00:08:01.815 [2024-12-16T13:12:16.389Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:01.815 Verification LBA range: start 0x8000 length 0x8000 00:08:01.815 Nvme2n3 : 5.35 333.89 20.87 0.00 0.00 362347.93 2571.03 392006.10 00:08:01.815 [2024-12-16T13:12:16.389Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:01.815 Verification LBA range: start 0x0 length 0x2000 00:08:01.815 Nvme3n1 : 5.46 342.56 21.41 0.00 0.00 340038.46 516.73 425883.18 00:08:01.815 [2024-12-16T13:12:16.389Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:01.815 Verification LBA range: start 0x2000 length 0x2000 00:08:01.815 Nvme3n1 : 5.35 333.80 20.86 0.00 0.00 357985.74 2923.91 341997.10 00:08:01.815 [2024-12-16T13:12:16.389Z] =================================================================================================================== 00:08:01.815 [2024-12-16T13:12:16.389Z] Total : 3710.53 231.91 0.00 0.00 394113.63 516.73 793691.37 00:08:03.720 00:08:03.720 real 0m8.778s 00:08:03.720 user 
0m16.477s 00:08:03.720 sys 0m0.218s 00:08:03.720 13:12:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:03.720 13:12:18 -- common/autotest_common.sh@10 -- # set +x 00:08:03.720 ************************************ 00:08:03.720 END TEST bdev_verify_big_io 00:08:03.720 ************************************ 00:08:03.720 13:12:18 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:03.720 13:12:18 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:03.720 13:12:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.720 13:12:18 -- common/autotest_common.sh@10 -- # set +x 00:08:03.720 ************************************ 00:08:03.720 START TEST bdev_write_zeroes 00:08:03.720 ************************************ 00:08:03.720 13:12:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:03.720 [2024-12-16 13:12:18.150037] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:03.720 [2024-12-16 13:12:18.150143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60995 ] 00:08:03.982 [2024-12-16 13:12:18.299968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.982 [2024-12-16 13:12:18.474352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.552 Running I/O for 1 seconds... 00:08:05.491 00:08:05.491 Latency(us) 00:08:05.491 [2024-12-16T13:12:20.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.491 [2024-12-16T13:12:20.065Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:05.491 Nvme0n1 : 1.01 11922.97 46.57 0.00 0.00 10708.60 5671.38 18955.03 00:08:05.491 [2024-12-16T13:12:20.065Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:05.491 Nvme1n1 : 1.02 11908.22 46.52 0.00 0.00 10709.22 7057.72 19055.85 00:08:05.491 [2024-12-16T13:12:20.065Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:05.491 Nvme2n1 : 1.02 11894.51 46.46 0.00 0.00 10673.45 6956.90 19459.15 00:08:05.491 [2024-12-16T13:12:20.065Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:05.491 Nvme2n2 : 1.02 11881.09 46.41 0.00 0.00 10655.72 6956.90 18955.03 00:08:05.491 [2024-12-16T13:12:20.065Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:05.491 Nvme2n3 : 1.02 11922.45 46.57 0.00 0.00 10617.70 6503.19 18652.55 00:08:05.491 [2024-12-16T13:12:20.065Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:05.491 Nvme3n1 : 1.02 11908.83 46.52 0.00 0.00 10599.73 5318.50 18753.38 00:08:05.491 [2024-12-16T13:12:20.065Z] =================================================================================================================== 00:08:05.491 [2024-12-16T13:12:20.065Z] Total : 71438.07 279.05 0.00 0.00 10660.64 5318.50 19459.15 00:08:06.434 00:08:06.434 real 0m2.780s 00:08:06.434 user 0m2.470s 00:08:06.434 sys 0m0.192s 00:08:06.434 ************************************ 00:08:06.434 END TEST 
bdev_write_zeroes 00:08:06.434 ************************************ 00:08:06.434 13:12:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:06.434 13:12:20 -- common/autotest_common.sh@10 -- # set +x 00:08:06.434 13:12:20 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:06.434 13:12:20 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:06.434 13:12:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:06.434 13:12:20 -- common/autotest_common.sh@10 -- # set +x 00:08:06.434 ************************************ 00:08:06.434 START TEST bdev_json_nonenclosed 00:08:06.434 ************************************ 00:08:06.434 13:12:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:06.434 [2024-12-16 13:12:20.995461] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:06.434 [2024-12-16 13:12:20.995563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61043 ] 00:08:06.696 [2024-12-16 13:12:21.144562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.958 [2024-12-16 13:12:21.319673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.958 [2024-12-16 13:12:21.319816] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:06.958 [2024-12-16 13:12:21.319834] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:07.219 00:08:07.219 real 0m0.663s 00:08:07.219 user 0m0.465s 00:08:07.219 sys 0m0.093s 00:08:07.219 13:12:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:07.219 ************************************ 00:08:07.219 END TEST bdev_json_nonenclosed 00:08:07.219 ************************************ 00:08:07.219 13:12:21 -- common/autotest_common.sh@10 -- # set +x 00:08:07.219 13:12:21 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:07.219 13:12:21 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:07.219 13:12:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:07.219 13:12:21 -- common/autotest_common.sh@10 -- # set +x 00:08:07.219 ************************************ 00:08:07.219 START TEST bdev_json_nonarray 00:08:07.219 ************************************ 00:08:07.219 13:12:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:07.219 [2024-12-16 13:12:21.726860] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
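The two JSON tests here are negative checks: bdevperf is handed a deliberately malformed config and must refuse it with the errors logged above and below. Hypothetical minimal inputs that would trip each check (the file bodies are assumptions; only the error strings come from the log):

# Hypothetical nonenclosed config -- valid-looking body, no outer {}.
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF
# Expected: "Invalid JSON configuration: not enclosed in {}." and a non-zero exit.

# Hypothetical nonarray config -- 'subsystems' present but not an array.
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": {} }
EOF
# Expected: "Invalid JSON configuration: 'subsystems' should be an array."

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 \
    && echo "unexpected success"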
00:08:07.219 [2024-12-16 13:12:21.726966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61074 ] 00:08:07.481 [2024-12-16 13:12:21.875963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.742 [2024-12-16 13:12:22.053361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.742 [2024-12-16 13:12:22.053507] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:08:07.742 [2024-12-16 13:12:22.053524] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.003 00:08:08.003 real 0m0.669s 00:08:08.003 user 0m0.473s 00:08:08.003 sys 0m0.091s 00:08:08.003 13:12:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:08.003 13:12:22 -- common/autotest_common.sh@10 -- # set +x 00:08:08.003 ************************************ 00:08:08.003 END TEST bdev_json_nonarray 00:08:08.003 ************************************ 00:08:08.003 13:12:22 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:08:08.003 13:12:22 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:08:08.003 13:12:22 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:08:08.003 13:12:22 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:08:08.003 13:12:22 -- bdev/blockdev.sh@809 -- # cleanup 00:08:08.003 13:12:22 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:08.003 13:12:22 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:08.003 13:12:22 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:08:08.003 13:12:22 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:08:08.003 13:12:22 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:08:08.003 13:12:22 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:08:08.003 00:08:08.003 real 1m0.660s 00:08:08.003 user 1m42.910s 00:08:08.003 sys 0m5.178s 00:08:08.003 13:12:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:08.003 ************************************ 00:08:08.003 END TEST blockdev_nvme 00:08:08.003 ************************************ 00:08:08.003 13:12:22 -- common/autotest_common.sh@10 -- # set +x 00:08:08.003 13:12:22 -- spdk/autotest.sh@206 -- # uname -s 00:08:08.003 13:12:22 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:08:08.003 13:12:22 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:08.003 13:12:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:08.003 13:12:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.003 13:12:22 -- common/autotest_common.sh@10 -- # set +x 00:08:08.003 ************************************ 00:08:08.003 START TEST blockdev_nvme_gpt 00:08:08.003 ************************************ 00:08:08.003 13:12:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:08.003 * Looking for test storage... 
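Every suite in this log, including the blockdev_nvme run that just ended, is driven by the same run_test wrapper; it is the source of the START/END banners and the real/user/sys summaries. A sketch of its shape (the actual helper lives in autotest_common.sh and may differ in detail):

# Sketch of the run_test shape implied by the banners above; details assumed.
run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"            # assumed source of the real/user/sys lines
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return "$rc"
}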
00:08:08.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:08.003 13:12:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:08.003 13:12:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:08.003 13:12:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:08.264 13:12:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:08.264 13:12:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:08.264 13:12:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:08.264 13:12:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:08.264 13:12:22 -- scripts/common.sh@335 -- # IFS=.-: 00:08:08.264 13:12:22 -- scripts/common.sh@335 -- # read -ra ver1 00:08:08.264 13:12:22 -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.264 13:12:22 -- scripts/common.sh@336 -- # read -ra ver2 00:08:08.264 13:12:22 -- scripts/common.sh@337 -- # local 'op=<' 00:08:08.264 13:12:22 -- scripts/common.sh@339 -- # ver1_l=2 00:08:08.264 13:12:22 -- scripts/common.sh@340 -- # ver2_l=1 00:08:08.264 13:12:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:08.264 13:12:22 -- scripts/common.sh@343 -- # case "$op" in 00:08:08.264 13:12:22 -- scripts/common.sh@344 -- # : 1 00:08:08.264 13:12:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:08.264 13:12:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:08.264 13:12:22 -- scripts/common.sh@364 -- # decimal 1 00:08:08.264 13:12:22 -- scripts/common.sh@352 -- # local d=1 00:08:08.264 13:12:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.264 13:12:22 -- scripts/common.sh@354 -- # echo 1 00:08:08.264 13:12:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:08.264 13:12:22 -- scripts/common.sh@365 -- # decimal 2 00:08:08.264 13:12:22 -- scripts/common.sh@352 -- # local d=2 00:08:08.264 13:12:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.264 13:12:22 -- scripts/common.sh@354 -- # echo 2 00:08:08.265 13:12:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:08.265 13:12:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:08.265 13:12:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:08.265 13:12:22 -- scripts/common.sh@367 -- # return 0 00:08:08.265 13:12:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.265 13:12:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:08.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.265 --rc genhtml_branch_coverage=1 00:08:08.265 --rc genhtml_function_coverage=1 00:08:08.265 --rc genhtml_legend=1 00:08:08.265 --rc geninfo_all_blocks=1 00:08:08.265 --rc geninfo_unexecuted_blocks=1 00:08:08.265 00:08:08.265 ' 00:08:08.265 13:12:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:08.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.265 --rc genhtml_branch_coverage=1 00:08:08.265 --rc genhtml_function_coverage=1 00:08:08.265 --rc genhtml_legend=1 00:08:08.265 --rc geninfo_all_blocks=1 00:08:08.265 --rc geninfo_unexecuted_blocks=1 00:08:08.265 00:08:08.265 ' 00:08:08.265 13:12:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:08.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.265 --rc genhtml_branch_coverage=1 00:08:08.265 --rc genhtml_function_coverage=1 00:08:08.265 --rc genhtml_legend=1 00:08:08.265 --rc geninfo_all_blocks=1 00:08:08.265 --rc geninfo_unexecuted_blocks=1 00:08:08.265 00:08:08.265 ' 00:08:08.265 13:12:22 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:08.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.265 --rc genhtml_branch_coverage=1 00:08:08.265 --rc genhtml_function_coverage=1 00:08:08.265 --rc genhtml_legend=1 00:08:08.265 --rc geninfo_all_blocks=1 00:08:08.265 --rc geninfo_unexecuted_blocks=1 00:08:08.265 00:08:08.265 ' 00:08:08.265 13:12:22 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:08.265 13:12:22 -- bdev/nbd_common.sh@6 -- # set -e 00:08:08.265 13:12:22 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:08.265 13:12:22 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:08.265 13:12:22 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:08.265 13:12:22 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:08.265 13:12:22 -- bdev/blockdev.sh@18 -- # : 00:08:08.265 13:12:22 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:08:08.265 13:12:22 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:08:08.265 13:12:22 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:08:08.265 13:12:22 -- bdev/blockdev.sh@672 -- # uname -s 00:08:08.265 13:12:22 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:08:08.265 13:12:22 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:08:08.265 13:12:22 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:08:08.265 13:12:22 -- bdev/blockdev.sh@681 -- # crypto_device= 00:08:08.265 13:12:22 -- bdev/blockdev.sh@682 -- # dek= 00:08:08.265 13:12:22 -- bdev/blockdev.sh@683 -- # env_ctx= 00:08:08.265 13:12:22 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:08:08.265 13:12:22 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:08:08.265 13:12:22 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:08:08.265 13:12:22 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:08:08.265 13:12:22 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:08:08.265 13:12:22 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=61151 00:08:08.265 13:12:22 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:08.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.265 13:12:22 -- bdev/blockdev.sh@47 -- # waitforlisten 61151 00:08:08.265 13:12:22 -- common/autotest_common.sh@829 -- # '[' -z 61151 ']' 00:08:08.265 13:12:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.265 13:12:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:08.265 13:12:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.265 13:12:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:08.265 13:12:22 -- common/autotest_common.sh@10 -- # set +x 00:08:08.265 13:12:22 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:08.265 [2024-12-16 13:12:22.686669] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
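start_spdk_tgt and waitforlisten above launch a standalone target (pid 61151 here) and block until its RPC socket answers; a condensed sketch, with the polling loop an assumption the trace only implies:

# Condensed start_spdk_tgt + waitforlisten; poll count and interval are assumptions.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
trap 'kill "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
for (( i = 0; i < 100; i++ )); do
  # rpc_get_methods succeeds once the target listens on /var/tmp/spdk.sock.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.1
done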
00:08:08.265 [2024-12-16 13:12:22.686773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61151 ] 00:08:08.265 [2024-12-16 13:12:22.833953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.526 [2024-12-16 13:12:23.009998] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:08.526 [2024-12-16 13:12:23.010205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.913 13:12:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:09.913 13:12:24 -- common/autotest_common.sh@862 -- # return 0 00:08:09.913 13:12:24 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:08:09.913 13:12:24 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:08:09.913 13:12:24 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:10.175 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:10.175 Waiting for block devices as requested 00:08:10.175 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:08:10.434 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:08:10.434 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:08:10.434 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:08:15.833 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:08:15.833 13:12:30 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:08:15.833 13:12:30 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:08:15.833 13:12:30 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:08:15.833 13:12:30 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:08:15.833 13:12:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:15.833 13:12:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:08:15.833 13:12:30 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:08:15.833 13:12:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:08:15.833 13:12:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:15.833 13:12:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:15.833 13:12:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:08:15.833 13:12:30 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:08:15.833 13:12:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:15.833 13:12:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:15.833 13:12:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:15.833 13:12:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:08:15.833 13:12:30 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:08:15.833 13:12:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:15.834 13:12:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:15.834 13:12:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:15.834 13:12:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:08:15.834 13:12:30 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:08:15.834 13:12:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:15.834 13:12:30 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:15.834 13:12:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:15.834 13:12:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:08:15.834 13:12:30 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:08:15.834 13:12:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:15.834 13:12:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:15.834 13:12:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:15.834 13:12:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:08:15.834 13:12:30 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:08:15.834 13:12:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:15.834 13:12:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:15.834 13:12:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:15.834 13:12:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:08:15.834 13:12:30 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:08:15.834 13:12:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:15.834 13:12:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:15.834 13:12:30 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:08:15.834 13:12:30 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:08:15.834 13:12:30 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:08:15.834 13:12:30 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:15.834 13:12:30 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:08:15.834 13:12:30 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:08:15.834 13:12:30 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:08:15.834 13:12:30 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:08:15.834 BYT; 00:08:15.834 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:15.834 13:12:30 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:08:15.834 BYT; 00:08:15.834 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:15.834 13:12:30 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:08:15.834 13:12:30 -- bdev/blockdev.sh@114 -- # break 00:08:15.834 13:12:30 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:08:15.834 13:12:30 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:15.834 13:12:30 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:15.834 13:12:30 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:15.834 13:12:30 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:08:15.834 13:12:30 -- scripts/common.sh@410 -- # local spdk_guid 00:08:15.834 13:12:30 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:15.834 13:12:30 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:15.834 13:12:30 -- scripts/common.sh@415 -- # IFS='()' 00:08:15.834 13:12:30 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:08:15.834 13:12:30 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:15.834 13:12:30 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:15.834 13:12:30 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:15.834 13:12:30 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:15.834 13:12:30 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:15.834 13:12:30 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:08:15.834 13:12:30 -- scripts/common.sh@422 -- # local spdk_guid 00:08:15.834 13:12:30 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:15.834 13:12:30 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:15.834 13:12:30 -- scripts/common.sh@427 -- # IFS='()' 00:08:15.834 13:12:30 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:08:15.834 13:12:30 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:15.834 13:12:30 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:15.834 13:12:30 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:15.834 13:12:30 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:15.834 13:12:30 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:15.834 13:12:30 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:08:16.766 The operation has completed successfully. 00:08:16.766 13:12:31 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:08:17.700 The operation has completed successfully. 
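Condensed from the xtrace above, the GPT setup boils down to three steps: read the SPDK partition-type GUID out of module/bdev/gpt/gpt.h, strip the 0x prefixes, and stamp the two freshly created partitions with sgdisk. A minimal sketch of that sequence (variable handling illustrative, not the verbatim blockdev.sh body):

    GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
    # gpt.h defines the GUID as MACRO(0x6527994e, 0x2c5a, ...); split on the parens
    IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
    spdk_guid=${spdk_guid//0x/}        # -> 6527994e-2c5a-4eec-9613-8f5944074e8b
    # partition 1 gets the current SPDK GUID; the legacy GUID goes on partition 2
    sgdisk -t 1:"$spdk_guid" -u 1:"$g_unique_partguid" /dev/nvme2n1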
00:08:17.700 13:12:32 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:18.634 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:18.634 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:08:18.634 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:08:18.634 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:18.634 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:08:18.634 13:12:33 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:08:18.634 13:12:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.634 13:12:33 -- common/autotest_common.sh@10 -- # set +x 00:08:18.634 [] 00:08:18.634 13:12:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.634 13:12:33 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:08:18.634 13:12:33 -- bdev/blockdev.sh@79 -- # local json 00:08:18.634 13:12:33 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:08:18.634 13:12:33 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:18.892 13:12:33 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:08:18.892 13:12:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.892 13:12:33 -- common/autotest_common.sh@10 -- # set +x 00:08:19.151 13:12:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.151 13:12:33 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:08:19.151 13:12:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.151 13:12:33 -- common/autotest_common.sh@10 -- # set +x 00:08:19.151 13:12:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.151 13:12:33 -- bdev/blockdev.sh@738 -- # cat 00:08:19.151 13:12:33 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:08:19.151 13:12:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.151 13:12:33 -- common/autotest_common.sh@10 -- # set +x 00:08:19.151 13:12:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.151 13:12:33 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:08:19.151 13:12:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.151 13:12:33 -- common/autotest_common.sh@10 -- # set +x 00:08:19.151 13:12:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.151 13:12:33 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:19.151 13:12:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.151 13:12:33 -- common/autotest_common.sh@10 -- # set +x 00:08:19.151 13:12:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.151 13:12:33 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:08:19.151 13:12:33 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:08:19.151 13:12:33 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:08:19.151 13:12:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.151 13:12:33 -- common/autotest_common.sh@10 -- # set +x 00:08:19.151 13:12:33 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.151 13:12:33 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:08:19.151 13:12:33 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "592b99f2-58ee-48d1-9811-9efc8fc219c3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "592b99f2-58ee-48d1-9811-9efc8fc219c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' 
"823cc959-c395-4a0c-979e-77b4c260d059"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "823cc959-c395-4a0c-979e-77b4c260d059",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "c624eeaf-fe78-4022-820f-400c53d4feab"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c624eeaf-fe78-4022-820f-400c53d4feab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "77a61311-ea68-4b3c-bc3d-682f78388f4b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "77a61311-ea68-4b3c-bc3d-682f78388f4b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "3b34892a-d268-4000-95bd-46984d7fb7e2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3b34892a-d268-4000-95bd-46984d7fb7e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:19.151 13:12:33 -- bdev/blockdev.sh@747 -- # jq -r .name 00:08:19.151 13:12:33 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:08:19.151 13:12:33 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:08:19.151 13:12:33 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:08:19.151 13:12:33 -- bdev/blockdev.sh@752 -- # killprocess 61151 00:08:19.151 13:12:33 -- common/autotest_common.sh@936 -- # '[' -z 61151 ']' 00:08:19.151 13:12:33 -- common/autotest_common.sh@940 -- # kill -0 61151 00:08:19.151 13:12:33 -- common/autotest_common.sh@941 -- # uname 00:08:19.151 13:12:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:19.151 13:12:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61151 00:08:19.151 13:12:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:19.151 13:12:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:19.151 killing process with pid 61151 00:08:19.151 13:12:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61151' 00:08:19.151 13:12:33 -- common/autotest_common.sh@955 -- # kill 61151 00:08:19.152 13:12:33 -- common/autotest_common.sh@960 -- # wait 61151 00:08:20.527 13:12:34 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:20.528 13:12:34 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:20.528 13:12:34 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:20.528 13:12:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.528 13:12:34 -- common/autotest_common.sh@10 -- # set +x 00:08:20.528 ************************************ 00:08:20.528 START TEST bdev_hello_world 00:08:20.528 ************************************ 00:08:20.528 13:12:34 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:20.528 [2024-12-16 13:12:34.890619] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:20.528 [2024-12-16 13:12:34.890714] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61805 ] 00:08:20.528 [2024-12-16 13:12:35.033524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.785 [2024-12-16 13:12:35.170372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.351 [2024-12-16 13:12:35.636661] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:21.351 [2024-12-16 13:12:35.636700] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:08:21.351 [2024-12-16 13:12:35.636714] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:21.351 [2024-12-16 13:12:35.638618] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:21.351 [2024-12-16 13:12:35.639051] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:21.351 [2024-12-16 13:12:35.639076] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:21.351 [2024-12-16 13:12:35.639237] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:21.351 00:08:21.351 [2024-12-16 13:12:35.639256] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:21.920 00:08:21.920 real 0m1.418s 00:08:21.920 user 0m1.150s 00:08:21.920 sys 0m0.161s 00:08:21.920 13:12:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:21.920 ************************************ 00:08:21.920 END TEST bdev_hello_world 00:08:21.920 ************************************ 00:08:21.920 13:12:36 -- common/autotest_common.sh@10 -- # set +x 00:08:21.920 13:12:36 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:08:21.920 13:12:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:21.920 13:12:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.920 13:12:36 -- common/autotest_common.sh@10 -- # set +x 00:08:21.920 ************************************ 00:08:21.920 START TEST bdev_bounds 00:08:21.920 ************************************ 00:08:21.920 13:12:36 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:08:21.920 13:12:36 -- bdev/blockdev.sh@288 -- # bdevio_pid=61842 00:08:21.920 13:12:36 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:21.920 Process bdevio pid: 61842 00:08:21.920 13:12:36 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 61842' 00:08:21.920 13:12:36 -- bdev/blockdev.sh@291 -- # waitforlisten 61842 00:08:21.920 13:12:36 -- common/autotest_common.sh@829 -- # '[' -z 61842 ']' 00:08:21.920 13:12:36 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:21.920 13:12:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.920 13:12:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
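Both the spdk_tgt launch earlier and the hello_bdev/bdevio runs here follow the same start-and-wait pattern: fork the SPDK app, then poll its UNIX-domain RPC socket until it answers before issuing any rpc_cmd. A hypothetical condensation of that waitforlisten loop (the real autotest_common.sh body differs in detail):

    waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 1; i <= 100; i++)); do     # max_retries=100, as traced above
        # rpc.py fails until the socket exists and the app is serving
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
        sleep 0.1
      done
      return 1
    }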
00:08:21.920 13:12:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.920 13:12:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.920 13:12:36 -- common/autotest_common.sh@10 -- # set +x 00:08:21.920 [2024-12-16 13:12:36.375855] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:21.920 [2024-12-16 13:12:36.376279] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61842 ] 00:08:22.179 [2024-12-16 13:12:36.523556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:22.179 [2024-12-16 13:12:36.674558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.179 [2024-12-16 13:12:36.674858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.179 [2024-12-16 13:12:36.674939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.747 13:12:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.747 13:12:37 -- common/autotest_common.sh@862 -- # return 0 00:08:22.747 13:12:37 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:22.747 I/O targets: 00:08:22.747 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:08:22.747 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:08:22.747 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:22.747 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:22.747 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:22.747 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:22.747 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:22.747 00:08:22.747 00:08:22.747 CUnit - A unit testing framework for C - Version 2.1-3 00:08:22.747 http://cunit.sourceforge.net/ 00:08:22.747 00:08:22.747 00:08:22.747 Suite: bdevio tests on: Nvme3n1 00:08:22.748 Test: blockdev write read block ...passed 00:08:22.748 Test: blockdev write zeroes read block ...passed 00:08:22.748 Test: blockdev write zeroes read no split ...passed 00:08:23.010 Test: blockdev write zeroes read split ...passed 00:08:23.010 Test: blockdev write zeroes read split partial ...passed 00:08:23.010 Test: blockdev reset ...[2024-12-16 13:12:37.349593] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:08:23.010 [2024-12-16 13:12:37.354219] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:23.010 passed 00:08:23.010 Test: blockdev write read 8 blocks ...passed 00:08:23.010 Test: blockdev write read size > 128k ...passed 00:08:23.010 Test: blockdev write read invalid size ...passed 00:08:23.010 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:23.010 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:23.010 Test: blockdev write read max offset ...passed 00:08:23.010 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:23.010 Test: blockdev writev readv 8 blocks ...passed 00:08:23.010 Test: blockdev writev readv 30 x 1block ...passed 00:08:23.010 Test: blockdev writev readv block ...passed 00:08:23.010 Test: blockdev writev readv size > 128k ...passed 00:08:23.010 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:23.010 Test: blockdev comparev and writev ...[2024-12-16 13:12:37.368804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27300a000 len:0x1000 00:08:23.010 [2024-12-16 13:12:37.368866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:23.010 passed 00:08:23.010 Test: blockdev nvme passthru rw ...passed 00:08:23.010 Test: blockdev nvme passthru vendor specific ...[2024-12-16 13:12:37.370204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:23.010 [2024-12-16 13:12:37.370237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:23.010 passed 00:08:23.010 Test: blockdev nvme admin passthru ...passed 00:08:23.010 Test: blockdev copy ...passed 00:08:23.010 Suite: bdevio tests on: Nvme2n3 00:08:23.010 Test: blockdev write read block ...passed 00:08:23.010 Test: blockdev write zeroes read block ...passed 00:08:23.010 Test: blockdev write zeroes read no split ...passed 00:08:23.010 Test: blockdev write zeroes read split ...passed 00:08:23.010 Test: blockdev write zeroes read split partial ...passed 00:08:23.010 Test: blockdev reset ...[2024-12-16 13:12:37.436273] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:23.010 [2024-12-16 13:12:37.440864] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:23.010 passed 00:08:23.010 Test: blockdev write read 8 blocks ...passed 00:08:23.010 Test: blockdev write read size > 128k ...passed 00:08:23.010 Test: blockdev write read invalid size ...passed 00:08:23.010 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:23.010 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:23.010 Test: blockdev write read max offset ...passed 00:08:23.010 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:23.010 Test: blockdev writev readv 8 blocks ...passed 00:08:23.010 Test: blockdev writev readv 30 x 1block ...passed 00:08:23.010 Test: blockdev writev readv block ...passed 00:08:23.010 Test: blockdev writev readv size > 128k ...passed 00:08:23.010 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:23.010 Test: blockdev comparev and writev ...[2024-12-16 13:12:37.460955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26ef04000 len:0x1000 00:08:23.010 [2024-12-16 13:12:37.461011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:23.010 passed 00:08:23.010 Test: blockdev nvme passthru rw ...passed 00:08:23.010 Test: blockdev nvme passthru vendor specific ...passed 00:08:23.010 Test: blockdev nvme admin passthru ...[2024-12-16 13:12:37.463415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:23.010 [2024-12-16 13:12:37.463455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:23.010 passed 00:08:23.010 Test: blockdev copy ...passed 00:08:23.010 Suite: bdevio tests on: Nvme2n2 00:08:23.010 Test: blockdev write read block ...passed 00:08:23.010 Test: blockdev write zeroes read block ...passed 00:08:23.010 Test: blockdev write zeroes read no split ...passed 00:08:23.010 Test: blockdev write zeroes read split ...passed 00:08:23.010 Test: blockdev write zeroes read split partial ...passed 00:08:23.010 Test: blockdev reset ...[2024-12-16 13:12:37.523620] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:23.010 [2024-12-16 13:12:37.526909] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:23.010 passed 00:08:23.010 Test: blockdev write read 8 blocks ...passed 00:08:23.010 Test: blockdev write read size > 128k ...passed 00:08:23.010 Test: blockdev write read invalid size ...passed 00:08:23.010 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:23.010 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:23.010 Test: blockdev write read max offset ...passed 00:08:23.010 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:23.010 Test: blockdev writev readv 8 blocks ...passed 00:08:23.010 Test: blockdev writev readv 30 x 1block ...passed 00:08:23.010 Test: blockdev writev readv block ...passed 00:08:23.010 Test: blockdev writev readv size > 128k ...passed 00:08:23.010 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:23.010 Test: blockdev comparev and writev ...[2024-12-16 13:12:37.542642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26ef04000 len:0x1000 00:08:23.010 [2024-12-16 13:12:37.542695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:23.010 passed 00:08:23.010 Test: blockdev nvme passthru rw ...passed 00:08:23.010 Test: blockdev nvme passthru vendor specific ...passed 00:08:23.010 Test: blockdev nvme admin passthru ...[2024-12-16 13:12:37.545006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:23.010 [2024-12-16 13:12:37.545046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:23.010 passed 00:08:23.010 Test: blockdev copy ...passed 00:08:23.010 Suite: bdevio tests on: Nvme2n1 00:08:23.010 Test: blockdev write read block ...passed 00:08:23.010 Test: blockdev write zeroes read block ...passed 00:08:23.010 Test: blockdev write zeroes read no split ...passed 00:08:23.271 Test: blockdev write zeroes read split ...passed 00:08:23.271 Test: blockdev write zeroes read split partial ...passed 00:08:23.271 Test: blockdev reset ...[2024-12-16 13:12:37.609807] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:23.271 [2024-12-16 13:12:37.612931] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:23.271 passed 00:08:23.271 Test: blockdev write read 8 blocks ...passed 00:08:23.271 Test: blockdev write read size > 128k ...passed 00:08:23.271 Test: blockdev write read invalid size ...passed 00:08:23.271 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:23.271 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:23.271 Test: blockdev write read max offset ...passed 00:08:23.271 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:23.271 Test: blockdev writev readv 8 blocks ...passed 00:08:23.271 Test: blockdev writev readv 30 x 1block ...passed 00:08:23.271 Test: blockdev writev readv block ...passed 00:08:23.271 Test: blockdev writev readv size > 128k ...passed 00:08:23.271 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:23.271 Test: blockdev comparev and writev ...[2024-12-16 13:12:37.631321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x286c3c000 len:0x1000 00:08:23.271 [2024-12-16 13:12:37.631373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:23.271 passed 00:08:23.271 Test: blockdev nvme passthru rw ...passed 00:08:23.271 Test: blockdev nvme passthru vendor specific ...[2024-12-16 13:12:37.633151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:23.271 passed 00:08:23.271 Test: blockdev nvme admin passthru ...[2024-12-16 13:12:37.633188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:23.271 passed 00:08:23.271 Test: blockdev copy ...passed 00:08:23.271 Suite: bdevio tests on: Nvme1n1 00:08:23.271 Test: blockdev write read block ...passed 00:08:23.271 Test: blockdev write zeroes read block ...passed 00:08:23.271 Test: blockdev write zeroes read no split ...passed 00:08:23.271 Test: blockdev write zeroes read split ...passed 00:08:23.271 Test: blockdev write zeroes read split partial ...passed 00:08:23.271 Test: blockdev reset ...[2024-12-16 13:12:37.696341] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:08:23.271 [2024-12-16 13:12:37.699774] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:23.271 passed 00:08:23.271 Test: blockdev write read 8 blocks ...passed 00:08:23.271 Test: blockdev write read size > 128k ...passed 00:08:23.271 Test: blockdev write read invalid size ...passed 00:08:23.271 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:23.271 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:23.271 Test: blockdev write read max offset ...passed 00:08:23.271 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:23.271 Test: blockdev writev readv 8 blocks ...passed 00:08:23.271 Test: blockdev writev readv 30 x 1block ...passed 00:08:23.271 Test: blockdev writev readv block ...passed 00:08:23.271 Test: blockdev writev readv size > 128k ...passed 00:08:23.271 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:23.271 Test: blockdev comparev and writev ...[2024-12-16 13:12:37.715464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x286c38000 len:0x1000 00:08:23.271 [2024-12-16 13:12:37.715517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:23.271 passed 00:08:23.271 Test: blockdev nvme passthru rw ...passed 00:08:23.271 Test: blockdev nvme passthru vendor specific ...passed 00:08:23.271 Test: blockdev nvme admin passthru ...[2024-12-16 13:12:37.716564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:23.271 [2024-12-16 13:12:37.716599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:23.271 passed 00:08:23.271 Test: blockdev copy ...passed 00:08:23.271 Suite: bdevio tests on: Nvme0n1p2 00:08:23.271 Test: blockdev write read block ...passed 00:08:23.271 Test: blockdev write zeroes read block ...passed 00:08:23.271 Test: blockdev write zeroes read no split ...passed 00:08:23.271 Test: blockdev write zeroes read split ...passed 00:08:23.271 Test: blockdev write zeroes read split partial ...passed 00:08:23.271 Test: blockdev reset ...[2024-12-16 13:12:37.784416] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:23.271 [2024-12-16 13:12:37.787368] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:23.271 passed 00:08:23.271 Test: blockdev write read 8 blocks ...passed 00:08:23.271 Test: blockdev write read size > 128k ...passed 00:08:23.271 Test: blockdev write read invalid size ...passed 00:08:23.271 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:23.271 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:23.271 Test: blockdev write read max offset ...passed 00:08:23.271 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:23.271 Test: blockdev writev readv 8 blocks ...passed 00:08:23.271 Test: blockdev writev readv 30 x 1block ...passed 00:08:23.271 Test: blockdev writev readv block ...passed 00:08:23.271 Test: blockdev writev readv size > 128k ...passed 00:08:23.271 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:23.271 Test: blockdev comparev and writev ...passed 00:08:23.271 Test: blockdev nvme passthru rw ...passed 00:08:23.271 Test: blockdev nvme passthru vendor specific ...passed 00:08:23.271 Test: blockdev nvme admin passthru ...passed 00:08:23.271 Test: blockdev copy ...[2024-12-16 13:12:37.794871] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:08:23.271 separate metadata which is not supported yet. 00:08:23.271 passed 00:08:23.271 Suite: bdevio tests on: Nvme0n1p1 00:08:23.271 Test: blockdev write read block ...passed 00:08:23.271 Test: blockdev write zeroes read block ...passed 00:08:23.271 Test: blockdev write zeroes read no split ...passed 00:08:23.271 Test: blockdev write zeroes read split ...passed 00:08:23.533 Test: blockdev write zeroes read split partial ...passed 00:08:23.533 Test: blockdev reset ...[2024-12-16 13:12:37.848533] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:23.533 [2024-12-16 13:12:37.852364] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:23.533 passed 00:08:23.533 Test: blockdev write read 8 blocks ...passed 00:08:23.533 Test: blockdev write read size > 128k ...passed 00:08:23.533 Test: blockdev write read invalid size ...passed 00:08:23.533 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:23.533 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:23.533 Test: blockdev write read max offset ...passed 00:08:23.533 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:23.533 Test: blockdev writev readv 8 blocks ...passed 00:08:23.533 Test: blockdev writev readv 30 x 1block ...passed 00:08:23.533 Test: blockdev writev readv block ...passed 00:08:23.533 Test: blockdev writev readv size > 128k ...passed 00:08:23.533 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:23.533 Test: blockdev comparev and writev ...passed 00:08:23.533 Test: blockdev nvme passthru rw ...passed 00:08:23.533 Test: blockdev nvme passthru vendor specific ...passed 00:08:23.533 Test: blockdev nvme admin passthru ...passed 00:08:23.533 Test: blockdev copy ...[2024-12-16 13:12:37.870215] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:08:23.533 separate metadata which is not supported yet. 
00:08:23.533 passed 00:08:23.533 00:08:23.533 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.533 suites 7 7 n/a 0 0 00:08:23.533 tests 161 161 161 0 0 00:08:23.533 asserts 1006 1006 1006 0 n/a 00:08:23.533 00:08:23.533 Elapsed time = 1.472 seconds 00:08:23.533 0 00:08:23.533 13:12:37 -- bdev/blockdev.sh@293 -- # killprocess 61842 00:08:23.533 13:12:37 -- common/autotest_common.sh@936 -- # '[' -z 61842 ']' 00:08:23.533 13:12:37 -- common/autotest_common.sh@940 -- # kill -0 61842 00:08:23.533 13:12:37 -- common/autotest_common.sh@941 -- # uname 00:08:23.533 13:12:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:23.533 13:12:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61842 00:08:23.533 13:12:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:23.533 13:12:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:23.533 13:12:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61842' 00:08:23.533 killing process with pid 61842 00:08:23.533 13:12:37 -- common/autotest_common.sh@955 -- # kill 61842 00:08:23.533 13:12:37 -- common/autotest_common.sh@960 -- # wait 61842 00:08:24.475 13:12:38 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:08:24.475 00:08:24.475 real 0m2.367s 00:08:24.475 user 0m5.724s 00:08:24.475 sys 0m0.307s 00:08:24.475 13:12:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:24.475 13:12:38 -- common/autotest_common.sh@10 -- # set +x 00:08:24.475 ************************************ 00:08:24.475 END TEST bdev_bounds 00:08:24.475 ************************************ 00:08:24.475 13:12:38 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:24.475 13:12:38 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:08:24.475 13:12:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:24.475 13:12:38 -- common/autotest_common.sh@10 -- # set +x 00:08:24.475 ************************************ 00:08:24.475 START TEST bdev_nbd 00:08:24.475 ************************************ 00:08:24.475 13:12:38 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:24.475 13:12:38 -- bdev/blockdev.sh@298 -- # uname -s 00:08:24.475 13:12:38 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:08:24.475 13:12:38 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.475 13:12:38 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:24.475 13:12:38 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:24.475 13:12:38 -- bdev/blockdev.sh@302 -- # local bdev_all 00:08:24.475 13:12:38 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:08:24.475 13:12:38 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:08:24.475 13:12:38 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:24.475 13:12:38 -- bdev/blockdev.sh@309 -- # local nbd_all 00:08:24.475 13:12:38 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:08:24.475 13:12:38 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:24.475 13:12:38 -- bdev/blockdev.sh@312 -- # local nbd_list 00:08:24.475 13:12:38 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:24.475 13:12:38 -- bdev/blockdev.sh@313 -- # local bdev_list 00:08:24.475 13:12:38 -- bdev/blockdev.sh@316 -- # nbd_pid=61896 00:08:24.475 13:12:38 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:24.475 13:12:38 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:24.475 13:12:38 -- bdev/blockdev.sh@318 -- # waitforlisten 61896 /var/tmp/spdk-nbd.sock 00:08:24.475 13:12:38 -- common/autotest_common.sh@829 -- # '[' -z 61896 ']' 00:08:24.475 13:12:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:24.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:24.475 13:12:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:24.475 13:12:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:24.475 13:12:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:24.475 13:12:38 -- common/autotest_common.sh@10 -- # set +x 00:08:24.475 [2024-12-16 13:12:38.815514] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:24.475 [2024-12-16 13:12:38.815619] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.475 [2024-12-16 13:12:38.965357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.736 [2024-12-16 13:12:39.151357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.117 13:12:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.117 13:12:40 -- common/autotest_common.sh@862 -- # return 0 00:08:26.117 13:12:40 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:26.117 13:12:40 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.117 13:12:40 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:26.117 13:12:40 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:26.117 13:12:40 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:26.117 13:12:40 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.117 13:12:40 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:26.117 13:12:40 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:26.117 13:12:40 -- bdev/nbd_common.sh@24 -- # local i 00:08:26.117 13:12:40 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:26.117 13:12:40 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:26.117 13:12:40 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:26.117 13:12:40 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:08:26.117 13:12:40 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:26.117 13:12:40 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:26.117 13:12:40 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:26.117 13:12:40 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:26.117 13:12:40 -- common/autotest_common.sh@867 -- # local i 00:08:26.117 13:12:40 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:26.117 13:12:40 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:26.117 13:12:40 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:26.117 13:12:40 -- common/autotest_common.sh@871 -- # break 00:08:26.117 13:12:40 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:26.117 13:12:40 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:26.117 13:12:40 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:26.117 1+0 records in 00:08:26.117 1+0 records out 00:08:26.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119848 s, 3.4 MB/s 00:08:26.117 13:12:40 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.117 13:12:40 -- common/autotest_common.sh@884 -- # size=4096 00:08:26.117 13:12:40 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.117 13:12:40 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:26.117 13:12:40 -- common/autotest_common.sh@887 -- # return 0 00:08:26.117 13:12:40 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:26.117 13:12:40 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:26.117 13:12:40 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:08:26.378 13:12:40 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:26.378 13:12:40 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:26.378 13:12:40 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:26.378 13:12:40 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:26.378 13:12:40 -- common/autotest_common.sh@867 -- # local i 00:08:26.378 13:12:40 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:26.378 13:12:40 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:26.378 13:12:40 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:26.378 13:12:40 -- common/autotest_common.sh@871 -- # break 00:08:26.378 13:12:40 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:26.378 13:12:40 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:26.378 13:12:40 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:26.378 1+0 records in 00:08:26.378 1+0 records out 00:08:26.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615092 s, 6.7 MB/s 00:08:26.378 13:12:40 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.378 13:12:40 -- common/autotest_common.sh@884 -- # size=4096 00:08:26.378 13:12:40 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.378 13:12:40 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:26.378 13:12:40 -- common/autotest_common.sh@887 -- # return 0 00:08:26.378 13:12:40 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:26.378 13:12:40 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:26.378 13:12:40 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:26.637 13:12:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:26.637 13:12:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:26.637 13:12:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:26.637 13:12:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:26.637 13:12:41 -- common/autotest_common.sh@867 -- # local i 00:08:26.637 13:12:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:26.637 13:12:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:26.637 13:12:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:26.637 13:12:41 -- common/autotest_common.sh@871 -- # break 00:08:26.637 13:12:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:26.637 13:12:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:26.637 13:12:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:26.637 1+0 records in 00:08:26.637 1+0 records out 00:08:26.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000971627 s, 4.2 MB/s 00:08:26.637 13:12:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.637 13:12:41 -- common/autotest_common.sh@884 -- # size=4096 00:08:26.637 13:12:41 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.637 13:12:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:26.637 13:12:41 -- common/autotest_common.sh@887 -- # return 0 00:08:26.637 13:12:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:26.637 13:12:41 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:26.637 13:12:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:26.897 13:12:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:26.897 13:12:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:26.897 13:12:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:26.897 13:12:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:26.897 13:12:41 -- common/autotest_common.sh@867 -- # local i 00:08:26.897 13:12:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:26.897 13:12:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:26.897 13:12:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:26.897 13:12:41 -- common/autotest_common.sh@871 -- # break 00:08:26.897 13:12:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:26.897 13:12:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:26.897 13:12:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:26.897 1+0 records in 00:08:26.897 1+0 records out 00:08:26.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00132045 s, 3.1 MB/s 00:08:26.897 13:12:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.897 13:12:41 -- common/autotest_common.sh@884 -- # size=4096 00:08:26.897 13:12:41 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.897 13:12:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:26.897 13:12:41 -- common/autotest_common.sh@887 -- # return 0 00:08:26.897 13:12:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:26.897 13:12:41 -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:26.897 13:12:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:27.158 13:12:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:27.158 13:12:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:27.158 13:12:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:27.158 13:12:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:27.158 13:12:41 -- common/autotest_common.sh@867 -- # local i 00:08:27.158 13:12:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:27.158 13:12:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:27.158 13:12:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:27.158 13:12:41 -- common/autotest_common.sh@871 -- # break 00:08:27.158 13:12:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:27.158 13:12:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:27.158 13:12:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.158 1+0 records in 00:08:27.158 1+0 records out 00:08:27.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112183 s, 3.7 MB/s 00:08:27.158 13:12:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.158 13:12:41 -- common/autotest_common.sh@884 -- # size=4096 00:08:27.158 13:12:41 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.158 13:12:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:27.158 13:12:41 -- common/autotest_common.sh@887 -- # return 0 00:08:27.158 13:12:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:27.158 13:12:41 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:27.158 13:12:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:27.418 13:12:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:27.418 13:12:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:27.418 13:12:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:27.418 13:12:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:27.418 13:12:41 -- common/autotest_common.sh@867 -- # local i 00:08:27.418 13:12:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:27.418 13:12:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:27.418 13:12:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:27.418 13:12:41 -- common/autotest_common.sh@871 -- # break 00:08:27.418 13:12:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:27.418 13:12:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:27.418 13:12:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.418 1+0 records in 00:08:27.418 1+0 records out 00:08:27.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00128681 s, 3.2 MB/s 00:08:27.418 13:12:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.418 13:12:41 -- common/autotest_common.sh@884 -- # size=4096 00:08:27.418 13:12:41 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.418 13:12:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:27.418 13:12:41 -- common/autotest_common.sh@887 -- # return 0 
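Each of the seven attach steps above is the same recipe: ask the target to export the bdev over NBD, wait for the node to appear in /proc/partitions, then prove it works with a single 4 KiB O_DIRECT read. Condensed from the nbd_common.sh and autotest_common.sh xtrace (bdev name illustrative):

    nbd=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3)  # e.g. /dev/nbd5
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "${nbd#/dev/}" /proc/partitions && break
    done
    dd if="$nbd" of=test/bdev/nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s test/bdev/nbdtest)" != 0 ]   # a zero-byte copy means the read failed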
00:08:27.418 13:12:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:27.418 13:12:41 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:27.418 13:12:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:27.679 13:12:42 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:27.679 13:12:42 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:27.679 13:12:42 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:27.679 13:12:42 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:08:27.679 13:12:42 -- common/autotest_common.sh@867 -- # local i 00:08:27.679 13:12:42 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:27.679 13:12:42 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:27.679 13:12:42 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:08:27.679 13:12:42 -- common/autotest_common.sh@871 -- # break 00:08:27.679 13:12:42 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:27.679 13:12:42 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:27.679 13:12:42 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.679 1+0 records in 00:08:27.679 1+0 records out 00:08:27.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127206 s, 3.2 MB/s 00:08:27.679 13:12:42 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.679 13:12:42 -- common/autotest_common.sh@884 -- # size=4096 00:08:27.679 13:12:42 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.679 13:12:42 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:27.679 13:12:42 -- common/autotest_common.sh@887 -- # return 0 00:08:27.679 13:12:42 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:27.679 13:12:42 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:27.679 13:12:42 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:27.939 13:12:42 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:27.939 { 00:08:27.939 "nbd_device": "/dev/nbd0", 00:08:27.939 "bdev_name": "Nvme0n1p1" 00:08:27.939 }, 00:08:27.939 { 00:08:27.939 "nbd_device": "/dev/nbd1", 00:08:27.939 "bdev_name": "Nvme0n1p2" 00:08:27.939 }, 00:08:27.939 { 00:08:27.939 "nbd_device": "/dev/nbd2", 00:08:27.939 "bdev_name": "Nvme1n1" 00:08:27.939 }, 00:08:27.939 { 00:08:27.939 "nbd_device": "/dev/nbd3", 00:08:27.939 "bdev_name": "Nvme2n1" 00:08:27.939 }, 00:08:27.939 { 00:08:27.939 "nbd_device": "/dev/nbd4", 00:08:27.939 "bdev_name": "Nvme2n2" 00:08:27.939 }, 00:08:27.939 { 00:08:27.939 "nbd_device": "/dev/nbd5", 00:08:27.939 "bdev_name": "Nvme2n3" 00:08:27.939 }, 00:08:27.939 { 00:08:27.939 "nbd_device": "/dev/nbd6", 00:08:27.939 "bdev_name": "Nvme3n1" 00:08:27.939 } 00:08:27.939 ]' 00:08:27.939 13:12:42 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:27.939 13:12:42 -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:27.939 { 00:08:27.939 "nbd_device": "/dev/nbd0", 00:08:27.939 "bdev_name": "Nvme0n1p1" 00:08:27.939 }, 00:08:27.939 { 00:08:27.939 "nbd_device": "/dev/nbd1", 00:08:27.939 "bdev_name": "Nvme0n1p2" 00:08:27.939 }, 00:08:27.939 { 00:08:27.939 "nbd_device": "/dev/nbd2", 00:08:27.939 "bdev_name": "Nvme1n1" 00:08:27.939 }, 00:08:27.939 { 00:08:27.939 "nbd_device": "/dev/nbd3", 00:08:27.939 "bdev_name": "Nvme2n1" 00:08:27.939 }, 00:08:27.939 { 
00:08:27.939 "nbd_device": "/dev/nbd4", 00:08:27.939 "bdev_name": "Nvme2n2" 00:08:27.939 }, 00:08:27.939 { 00:08:27.939 "nbd_device": "/dev/nbd5", 00:08:27.939 "bdev_name": "Nvme2n3" 00:08:27.939 }, 00:08:27.939 { 00:08:27.939 "nbd_device": "/dev/nbd6", 00:08:27.939 "bdev_name": "Nvme3n1" 00:08:27.939 } 00:08:27.939 ]' 00:08:27.939 13:12:42 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:27.939 13:12:42 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:27.939 13:12:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:27.939 13:12:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:27.939 13:12:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:27.939 13:12:42 -- bdev/nbd_common.sh@51 -- # local i 00:08:27.939 13:12:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:27.939 13:12:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:28.199 13:12:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:28.199 13:12:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:28.199 13:12:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:28.199 13:12:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:28.199 13:12:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:28.199 13:12:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:28.199 13:12:42 -- bdev/nbd_common.sh@41 -- # break 00:08:28.199 13:12:42 -- bdev/nbd_common.sh@45 -- # return 0 00:08:28.199 13:12:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:28.199 13:12:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:28.199 13:12:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:28.199 13:12:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:28.199 13:12:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:28.199 13:12:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:28.199 13:12:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:28.199 13:12:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:28.199 13:12:42 -- bdev/nbd_common.sh@41 -- # break 00:08:28.199 13:12:42 -- bdev/nbd_common.sh@45 -- # return 0 00:08:28.199 13:12:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:28.199 13:12:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:28.458 13:12:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:28.458 13:12:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:28.458 13:12:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:28.458 13:12:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:28.458 13:12:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:28.458 13:12:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:28.458 13:12:42 -- bdev/nbd_common.sh@41 -- # break 00:08:28.458 13:12:42 -- bdev/nbd_common.sh@45 -- # return 0 00:08:28.458 13:12:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:28.458 13:12:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:28.717 13:12:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 
00:08:28.717 13:12:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:28.717 13:12:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:28.717 13:12:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:28.717 13:12:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:28.717 13:12:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:28.717 13:12:43 -- bdev/nbd_common.sh@41 -- # break 00:08:28.717 13:12:43 -- bdev/nbd_common.sh@45 -- # return 0 00:08:28.717 13:12:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:28.717 13:12:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:28.977 13:12:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:28.977 13:12:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:28.977 13:12:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:28.977 13:12:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:28.977 13:12:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:28.977 13:12:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:28.977 13:12:43 -- bdev/nbd_common.sh@41 -- # break 00:08:28.977 13:12:43 -- bdev/nbd_common.sh@45 -- # return 0 00:08:28.977 13:12:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:28.977 13:12:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:29.238 13:12:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:29.238 13:12:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:29.238 13:12:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:29.238 13:12:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:29.238 13:12:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:29.238 13:12:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:29.238 13:12:43 -- bdev/nbd_common.sh@41 -- # break 00:08:29.238 13:12:43 -- bdev/nbd_common.sh@45 -- # return 0 00:08:29.238 13:12:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:29.239 13:12:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:29.500 13:12:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:29.500 13:12:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:29.500 13:12:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:29.500 13:12:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:29.500 13:12:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:29.500 13:12:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:29.500 13:12:43 -- bdev/nbd_common.sh@41 -- # break 00:08:29.500 13:12:43 -- bdev/nbd_common.sh@45 -- # return 0 00:08:29.500 13:12:43 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:29.500 13:12:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:29.500 13:12:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:29.500 13:12:44 -- 
bdev/nbd_common.sh@65 -- # true 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@65 -- # count=0 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@122 -- # count=0 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@127 -- # return 0 00:08:29.500 13:12:44 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@12 -- # local i 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:29.500 13:12:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:08:29.762 /dev/nbd0 00:08:29.762 13:12:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:29.762 13:12:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:29.762 13:12:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:29.762 13:12:44 -- common/autotest_common.sh@867 -- # local i 00:08:29.762 13:12:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:29.762 13:12:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:29.762 13:12:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:29.762 13:12:44 -- common/autotest_common.sh@871 -- # break 00:08:29.762 13:12:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:29.762 13:12:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:29.762 13:12:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:29.762 1+0 records in 00:08:29.762 1+0 records out 00:08:29.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107242 s, 3.8 MB/s 00:08:29.762 13:12:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.762 13:12:44 -- common/autotest_common.sh@884 -- # size=4096 00:08:29.762 13:12:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.762 13:12:44 
-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:29.762 13:12:44 -- common/autotest_common.sh@887 -- # return 0 00:08:29.762 13:12:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:29.762 13:12:44 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:29.762 13:12:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:08:30.023 /dev/nbd1 00:08:30.023 13:12:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:30.023 13:12:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:30.023 13:12:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:30.023 13:12:44 -- common/autotest_common.sh@867 -- # local i 00:08:30.023 13:12:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:30.023 13:12:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:30.023 13:12:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:30.023 13:12:44 -- common/autotest_common.sh@871 -- # break 00:08:30.023 13:12:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:30.023 13:12:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:30.023 13:12:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:30.023 1+0 records in 00:08:30.023 1+0 records out 00:08:30.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000937943 s, 4.4 MB/s 00:08:30.023 13:12:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:30.023 13:12:44 -- common/autotest_common.sh@884 -- # size=4096 00:08:30.023 13:12:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:30.023 13:12:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:30.023 13:12:44 -- common/autotest_common.sh@887 -- # return 0 00:08:30.023 13:12:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:30.023 13:12:44 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:30.023 13:12:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:08:30.285 /dev/nbd10 00:08:30.285 13:12:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:30.285 13:12:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:30.285 13:12:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:08:30.285 13:12:44 -- common/autotest_common.sh@867 -- # local i 00:08:30.285 13:12:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:30.285 13:12:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:30.285 13:12:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:08:30.285 13:12:44 -- common/autotest_common.sh@871 -- # break 00:08:30.285 13:12:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:30.285 13:12:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:30.285 13:12:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:30.285 1+0 records in 00:08:30.285 1+0 records out 00:08:30.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109786 s, 3.7 MB/s 00:08:30.285 13:12:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:30.285 13:12:44 -- common/autotest_common.sh@884 -- # size=4096 00:08:30.285 13:12:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:30.285 
13:12:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:30.285 13:12:44 -- common/autotest_common.sh@887 -- # return 0 00:08:30.285 13:12:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:30.285 13:12:44 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:30.285 13:12:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:30.546 /dev/nbd11 00:08:30.546 13:12:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:30.546 13:12:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:30.546 13:12:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:08:30.546 13:12:45 -- common/autotest_common.sh@867 -- # local i 00:08:30.546 13:12:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:30.546 13:12:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:30.546 13:12:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:08:30.546 13:12:45 -- common/autotest_common.sh@871 -- # break 00:08:30.546 13:12:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:30.546 13:12:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:30.546 13:12:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:30.546 1+0 records in 00:08:30.546 1+0 records out 00:08:30.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000986409 s, 4.2 MB/s 00:08:30.546 13:12:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:30.546 13:12:45 -- common/autotest_common.sh@884 -- # size=4096 00:08:30.546 13:12:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:30.546 13:12:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:30.546 13:12:45 -- common/autotest_common.sh@887 -- # return 0 00:08:30.546 13:12:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:30.546 13:12:45 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:30.546 13:12:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:30.807 /dev/nbd12 00:08:30.807 13:12:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:30.807 13:12:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:30.807 13:12:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:08:30.807 13:12:45 -- common/autotest_common.sh@867 -- # local i 00:08:30.807 13:12:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:30.807 13:12:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:30.807 13:12:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:08:30.807 13:12:45 -- common/autotest_common.sh@871 -- # break 00:08:30.807 13:12:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:30.807 13:12:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:30.807 13:12:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:30.807 1+0 records in 00:08:30.807 1+0 records out 00:08:30.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117685 s, 3.5 MB/s 00:08:30.807 13:12:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:30.807 13:12:45 -- common/autotest_common.sh@884 -- # size=4096 00:08:30.807 13:12:45 -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:30.807 13:12:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:30.807 13:12:45 -- common/autotest_common.sh@887 -- # return 0 00:08:30.807 13:12:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:30.807 13:12:45 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:30.807 13:12:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:31.067 /dev/nbd13 00:08:31.067 13:12:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:31.067 13:12:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:31.067 13:12:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:08:31.067 13:12:45 -- common/autotest_common.sh@867 -- # local i 00:08:31.067 13:12:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:31.067 13:12:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:31.067 13:12:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:08:31.067 13:12:45 -- common/autotest_common.sh@871 -- # break 00:08:31.067 13:12:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:31.067 13:12:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:31.067 13:12:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:31.067 1+0 records in 00:08:31.067 1+0 records out 00:08:31.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00124527 s, 3.3 MB/s 00:08:31.067 13:12:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:31.067 13:12:45 -- common/autotest_common.sh@884 -- # size=4096 00:08:31.067 13:12:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:31.067 13:12:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:31.067 13:12:45 -- common/autotest_common.sh@887 -- # return 0 00:08:31.067 13:12:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:31.067 13:12:45 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:31.067 13:12:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:31.366 /dev/nbd14 00:08:31.366 13:12:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:31.366 13:12:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:31.366 13:12:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:08:31.366 13:12:45 -- common/autotest_common.sh@867 -- # local i 00:08:31.366 13:12:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:31.366 13:12:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:31.366 13:12:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:08:31.366 13:12:45 -- common/autotest_common.sh@871 -- # break 00:08:31.366 13:12:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:31.366 13:12:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:31.366 13:12:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:31.366 1+0 records in 00:08:31.366 1+0 records out 00:08:31.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00094838 s, 4.3 MB/s 00:08:31.366 13:12:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:31.366 13:12:45 -- common/autotest_common.sh@884 -- # size=4096 00:08:31.366 13:12:45 -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:31.366 13:12:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:31.366 13:12:45 -- common/autotest_common.sh@887 -- # return 0 00:08:31.366 13:12:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:31.366 13:12:45 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:31.366 13:12:45 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:31.366 13:12:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.366 13:12:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:31.628 13:12:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:31.628 { 00:08:31.628 "nbd_device": "/dev/nbd0", 00:08:31.628 "bdev_name": "Nvme0n1p1" 00:08:31.628 }, 00:08:31.628 { 00:08:31.628 "nbd_device": "/dev/nbd1", 00:08:31.628 "bdev_name": "Nvme0n1p2" 00:08:31.628 }, 00:08:31.628 { 00:08:31.628 "nbd_device": "/dev/nbd10", 00:08:31.628 "bdev_name": "Nvme1n1" 00:08:31.628 }, 00:08:31.628 { 00:08:31.628 "nbd_device": "/dev/nbd11", 00:08:31.628 "bdev_name": "Nvme2n1" 00:08:31.628 }, 00:08:31.628 { 00:08:31.628 "nbd_device": "/dev/nbd12", 00:08:31.628 "bdev_name": "Nvme2n2" 00:08:31.628 }, 00:08:31.628 { 00:08:31.628 "nbd_device": "/dev/nbd13", 00:08:31.628 "bdev_name": "Nvme2n3" 00:08:31.628 }, 00:08:31.628 { 00:08:31.628 "nbd_device": "/dev/nbd14", 00:08:31.628 "bdev_name": "Nvme3n1" 00:08:31.628 } 00:08:31.628 ]' 00:08:31.628 13:12:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:31.628 13:12:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:31.628 { 00:08:31.628 "nbd_device": "/dev/nbd0", 00:08:31.628 "bdev_name": "Nvme0n1p1" 00:08:31.628 }, 00:08:31.628 { 00:08:31.628 "nbd_device": "/dev/nbd1", 00:08:31.628 "bdev_name": "Nvme0n1p2" 00:08:31.628 }, 00:08:31.628 { 00:08:31.628 "nbd_device": "/dev/nbd10", 00:08:31.628 "bdev_name": "Nvme1n1" 00:08:31.628 }, 00:08:31.628 { 00:08:31.628 "nbd_device": "/dev/nbd11", 00:08:31.628 "bdev_name": "Nvme2n1" 00:08:31.628 }, 00:08:31.628 { 00:08:31.628 "nbd_device": "/dev/nbd12", 00:08:31.628 "bdev_name": "Nvme2n2" 00:08:31.628 }, 00:08:31.628 { 00:08:31.628 "nbd_device": "/dev/nbd13", 00:08:31.628 "bdev_name": "Nvme2n3" 00:08:31.628 }, 00:08:31.628 { 00:08:31.628 "nbd_device": "/dev/nbd14", 00:08:31.628 "bdev_name": "Nvme3n1" 00:08:31.628 } 00:08:31.628 ]' 00:08:31.628 13:12:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:31.628 /dev/nbd1 00:08:31.628 /dev/nbd10 00:08:31.628 /dev/nbd11 00:08:31.628 /dev/nbd12 00:08:31.628 /dev/nbd13 00:08:31.628 /dev/nbd14' 00:08:31.628 13:12:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:31.628 13:12:45 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:31.628 /dev/nbd1 00:08:31.628 /dev/nbd10 00:08:31.628 /dev/nbd11 00:08:31.628 /dev/nbd12 00:08:31.628 /dev/nbd13 00:08:31.628 /dev/nbd14' 00:08:31.628 13:12:45 -- bdev/nbd_common.sh@65 -- # count=7 00:08:31.628 13:12:45 -- bdev/nbd_common.sh@66 -- # echo 7 00:08:31.628 13:12:45 -- bdev/nbd_common.sh@95 -- # count=7 00:08:31.628 13:12:45 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:31.628 13:12:45 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:31.628 13:12:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:31.628 13:12:45 -- bdev/nbd_common.sh@70 -- # local 
nbd_list 00:08:31.628 13:12:45 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:31.628 13:12:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:31.628 13:12:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:31.628 13:12:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:31.628 256+0 records in 00:08:31.628 256+0 records out 00:08:31.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121096 s, 86.6 MB/s 00:08:31.628 13:12:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:31.628 13:12:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:31.890 256+0 records in 00:08:31.890 256+0 records out 00:08:31.890 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.244399 s, 4.3 MB/s 00:08:31.890 13:12:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:31.890 13:12:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:31.890 256+0 records in 00:08:31.890 256+0 records out 00:08:31.890 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.206964 s, 5.1 MB/s 00:08:31.890 13:12:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:31.890 13:12:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:32.152 256+0 records in 00:08:32.152 256+0 records out 00:08:32.152 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.251863 s, 4.2 MB/s 00:08:32.152 13:12:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:32.152 13:12:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:32.413 256+0 records in 00:08:32.413 256+0 records out 00:08:32.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.243038 s, 4.3 MB/s 00:08:32.413 13:12:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:32.413 13:12:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:32.674 256+0 records in 00:08:32.674 256+0 records out 00:08:32.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.213711 s, 4.9 MB/s 00:08:32.674 13:12:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:32.674 13:12:47 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:32.936 256+0 records in 00:08:32.936 256+0 records out 00:08:32.936 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.263851 s, 4.0 MB/s 00:08:32.936 13:12:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:32.936 13:12:47 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:33.198 256+0 records in 00:08:33.198 256+0 records out 00:08:33.198 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.246766 s, 4.2 MB/s 00:08:33.198 13:12:47 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:33.198 13:12:47 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:33.198 13:12:47 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:33.198 13:12:47 -- bdev/nbd_common.sh@71 -- # local 
operation=verify 00:08:33.198 13:12:47 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:33.198 13:12:47 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:33.198 13:12:47 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:33.198 13:12:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:33.198 13:12:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:33.198 13:12:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:33.198 13:12:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:33.198 13:12:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:33.198 13:12:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:33.198 13:12:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:33.198 13:12:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:33.198 13:12:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:33.198 13:12:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:33.198 13:12:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:33.198 13:12:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:33.198 13:12:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:33.198 13:12:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:33.198 13:12:47 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:33.459 13:12:47 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:33.459 13:12:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:33.459 13:12:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:33.459 13:12:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:33.459 13:12:47 -- bdev/nbd_common.sh@51 -- # local i 00:08:33.459 13:12:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:33.459 13:12:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:33.459 13:12:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:33.460 13:12:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:33.460 13:12:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:33.460 13:12:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:33.460 13:12:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:33.460 13:12:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:33.460 13:12:47 -- bdev/nbd_common.sh@41 -- # break 00:08:33.460 13:12:47 -- bdev/nbd_common.sh@45 -- # return 0 00:08:33.460 13:12:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:33.460 13:12:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:33.721 13:12:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:33.721 13:12:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:33.721 13:12:48 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:08:33.721 13:12:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:33.721 13:12:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:33.721 13:12:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:33.721 13:12:48 -- bdev/nbd_common.sh@41 -- # break 00:08:33.721 13:12:48 -- bdev/nbd_common.sh@45 -- # return 0 00:08:33.721 13:12:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:33.721 13:12:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:33.983 13:12:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:33.983 13:12:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:33.983 13:12:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:33.983 13:12:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:33.983 13:12:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:33.983 13:12:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:33.983 13:12:48 -- bdev/nbd_common.sh@41 -- # break 00:08:33.983 13:12:48 -- bdev/nbd_common.sh@45 -- # return 0 00:08:33.983 13:12:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:33.983 13:12:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:34.244 13:12:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:34.244 13:12:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:34.244 13:12:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:34.244 13:12:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:34.244 13:12:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:34.244 13:12:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:34.244 13:12:48 -- bdev/nbd_common.sh@41 -- # break 00:08:34.244 13:12:48 -- bdev/nbd_common.sh@45 -- # return 0 00:08:34.244 13:12:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:34.244 13:12:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:34.506 13:12:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:34.506 13:12:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:34.506 13:12:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:34.506 13:12:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:34.506 13:12:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:34.506 13:12:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:34.506 13:12:48 -- bdev/nbd_common.sh@41 -- # break 00:08:34.506 13:12:48 -- bdev/nbd_common.sh@45 -- # return 0 00:08:34.506 13:12:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:34.506 13:12:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:34.506 13:12:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:34.506 13:12:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:34.506 13:12:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:34.506 13:12:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:34.506 13:12:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:34.506 13:12:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:34.506 13:12:49 -- bdev/nbd_common.sh@41 -- # break 00:08:34.506 13:12:49 -- bdev/nbd_common.sh@45 -- # return 0 00:08:34.506 13:12:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
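Before the stop sequence above, the data pass on the seven devices is symmetric: nbd_dd_data_verify writes one random MiB through each /dev/nbdN with O_DIRECT, then byte-compares the same range back with cmp. A sketch of that write/verify round trip, with the device list, paths, and sizes as traced:

# Write one random MiB through every nbd device, then cmp it back.
tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
for dev in "${nbd_list[@]}"; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct  # write pass
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp" "$dev"                             # byte-compare the first MiB
done
rm "$tmp"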
00:08:34.506 13:12:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:34.768 13:12:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:34.768 13:12:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:34.768 13:12:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:34.768 13:12:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:34.768 13:12:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:34.768 13:12:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:34.768 13:12:49 -- bdev/nbd_common.sh@41 -- # break 00:08:34.768 13:12:49 -- bdev/nbd_common.sh@45 -- # return 0 00:08:34.768 13:12:49 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:34.768 13:12:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:34.768 13:12:49 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:35.030 13:12:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:35.030 13:12:49 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:35.030 13:12:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:35.030 13:12:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:35.030 13:12:49 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:35.030 13:12:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:35.030 13:12:49 -- bdev/nbd_common.sh@65 -- # true 00:08:35.030 13:12:49 -- bdev/nbd_common.sh@65 -- # count=0 00:08:35.030 13:12:49 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:35.030 13:12:49 -- bdev/nbd_common.sh@104 -- # count=0 00:08:35.030 13:12:49 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:35.030 13:12:49 -- bdev/nbd_common.sh@109 -- # return 0 00:08:35.030 13:12:49 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:35.030 13:12:49 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.030 13:12:49 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:35.030 13:12:49 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:35.030 13:12:49 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:35.030 13:12:49 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:35.291 malloc_lvol_verify 00:08:35.291 13:12:49 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:35.550 af5648ad-444c-4885-8aca-705cf001f028 00:08:35.550 13:12:49 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:35.808 ae033c3b-7d6b-4257-8ccc-87b77109ea15 00:08:35.808 13:12:50 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:36.066 /dev/nbd0 00:08:36.066 13:12:50 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:36.066 mke2fs 1.47.0 (5-Feb-2023) 00:08:36.066 Discarding device blocks: 0/4096 done 00:08:36.066 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:36.066 00:08:36.066 Allocating group tables: 0/1 done 00:08:36.066 Writing inode tables: 0/1 done 00:08:36.066 Creating journal (1024 blocks): done 
00:08:36.067 Writing superblocks and filesystem accounting information: 0/1 done 00:08:36.067 00:08:36.067 13:12:50 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:36.067 13:12:50 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:36.067 13:12:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.067 13:12:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:36.067 13:12:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:36.067 13:12:50 -- bdev/nbd_common.sh@51 -- # local i 00:08:36.067 13:12:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.067 13:12:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:36.067 13:12:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:36.067 13:12:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:36.067 13:12:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:36.067 13:12:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.067 13:12:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.067 13:12:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:36.067 13:12:50 -- bdev/nbd_common.sh@41 -- # break 00:08:36.067 13:12:50 -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.067 13:12:50 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:36.067 13:12:50 -- bdev/nbd_common.sh@147 -- # return 0 00:08:36.067 13:12:50 -- bdev/blockdev.sh@324 -- # killprocess 61896 00:08:36.067 13:12:50 -- common/autotest_common.sh@936 -- # '[' -z 61896 ']' 00:08:36.067 13:12:50 -- common/autotest_common.sh@940 -- # kill -0 61896 00:08:36.067 13:12:50 -- common/autotest_common.sh@941 -- # uname 00:08:36.325 13:12:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:36.325 13:12:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61896 00:08:36.325 killing process with pid 61896 00:08:36.325 13:12:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:36.325 13:12:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:36.325 13:12:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61896' 00:08:36.325 13:12:50 -- common/autotest_common.sh@955 -- # kill 61896 00:08:36.325 13:12:50 -- common/autotest_common.sh@960 -- # wait 61896 00:08:36.894 ************************************ 00:08:36.894 END TEST bdev_nbd 00:08:36.894 ************************************ 00:08:36.894 13:12:51 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:08:36.894 00:08:36.894 real 0m12.584s 00:08:36.894 user 0m16.932s 00:08:36.894 sys 0m4.071s 00:08:36.894 13:12:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:36.894 13:12:51 -- common/autotest_common.sh@10 -- # set +x 00:08:36.894 13:12:51 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:08:36.894 13:12:51 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:08:36.894 13:12:51 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:08:36.894 skipping fio tests on NVMe due to multi-ns failures. 00:08:36.894 13:12:51 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
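The mke2fs output just above is the tail of nbd_with_lvol_verify, which exercises the whole stack end to end: a malloc bdev becomes an lvstore, a small lvol is carved from it, the lvol is exported as /dev/nbd0, and a real ext4 filesystem is created on it before teardown. The RPC sequence, with arguments exactly as traced (the comments interpreting the size arguments are assumptions):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # backing bdev: size 16 (MiB assumed), 512 B blocks
$rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # volume store on the malloc bdev
$rpc bdev_lvol_create lvol 4 -l lvs                    # lvol of size 4 (MiB assumed) in store lvs
$rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol to the kernel
mkfs.ext4 /dev/nbd0                                    # format it to verify real writes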
00:08:36.894 13:12:51 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:36.894 13:12:51 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:36.894 13:12:51 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:36.894 13:12:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.895 13:12:51 -- common/autotest_common.sh@10 -- # set +x 00:08:36.895 ************************************ 00:08:36.895 START TEST bdev_verify 00:08:36.895 ************************************ 00:08:36.895 13:12:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:36.895 [2024-12-16 13:12:51.459957] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:36.895 [2024-12-16 13:12:51.460081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62332 ] 00:08:37.154 [2024-12-16 13:12:51.618670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:37.412 [2024-12-16 13:12:51.771257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.412 [2024-12-16 13:12:51.771353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.983 Running I/O for 5 seconds... 00:08:43.278 00:08:43.278 Latency(us) 00:08:43.278 [2024-12-16T13:12:57.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.278 [2024-12-16T13:12:57.852Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:43.278 Verification LBA range: start 0x0 length 0x5e800 00:08:43.278 Nvme0n1p1 : 5.06 1878.65 7.34 0.00 0.00 67873.13 13107.20 92355.35 00:08:43.278 [2024-12-16T13:12:57.852Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:43.278 Verification LBA range: start 0x5e800 length 0x5e800 00:08:43.278 Nvme0n1p1 : 5.06 1975.61 7.72 0.00 0.00 64586.00 11393.18 74206.92 00:08:43.278 [2024-12-16T13:12:57.852Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:43.278 Verification LBA range: start 0x0 length 0x5e7ff 00:08:43.278 Nvme0n1p2 : 5.06 1878.12 7.34 0.00 0.00 67818.42 13611.32 90742.15 00:08:43.278 [2024-12-16T13:12:57.852Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:43.278 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:08:43.278 Nvme0n1p2 : 5.07 1975.04 7.71 0.00 0.00 64540.45 11040.30 71787.13 00:08:43.278 [2024-12-16T13:12:57.852Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:43.278 Verification LBA range: start 0x0 length 0xa0000 00:08:43.278 Nvme1n1 : 5.07 1884.49 7.36 0.00 0.00 67372.88 3327.21 73803.62 00:08:43.278 [2024-12-16T13:12:57.852Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:43.278 Verification LBA range: start 0xa0000 length 0xa0000 00:08:43.278 Nvme1n1 : 5.07 1974.30 7.71 0.00 0.00 64444.19 10284.11 63721.16 00:08:43.278 [2024-12-16T13:12:57.852Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:43.278 Verification LBA range: start 0x0 length 0x80000 00:08:43.278 Nvme2n1 : 5.07 
1890.85 7.39 0.00 0.00 67070.58 2495.41 71383.83 00:08:43.278 [2024-12-16T13:12:57.852Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:43.278 Verification LBA range: start 0x80000 length 0x80000 00:08:43.278 Nvme2n1 : 5.07 1973.37 7.71 0.00 0.00 64342.03 9981.64 62107.96 00:08:43.278 [2024-12-16T13:12:57.852Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:43.278 Verification LBA range: start 0x0 length 0x80000 00:08:43.278 Nvme2n2 : 5.08 1889.98 7.38 0.00 0.00 67033.50 4159.02 71787.13 00:08:43.278 [2024-12-16T13:12:57.852Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:43.278 Verification LBA range: start 0x80000 length 0x80000 00:08:43.278 Nvme2n2 : 5.07 1972.63 7.71 0.00 0.00 64303.60 9779.99 62107.96 00:08:43.278 [2024-12-16T13:12:57.852Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:43.278 Verification LBA range: start 0x0 length 0x80000 00:08:43.278 Nvme2n3 : 5.08 1889.03 7.38 0.00 0.00 66989.70 6024.27 70173.93 00:08:43.278 [2024-12-16T13:12:57.852Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:43.278 Verification LBA range: start 0x80000 length 0x80000 00:08:43.278 Nvme2n3 : 5.07 1971.71 7.70 0.00 0.00 64263.56 11544.42 61301.37 00:08:43.278 [2024-12-16T13:12:57.852Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:43.278 Verification LBA range: start 0x0 length 0x20000 00:08:43.278 Nvme3n1 : 5.08 1888.10 7.38 0.00 0.00 66951.63 7813.91 72997.02 00:08:43.278 [2024-12-16T13:12:57.852Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:43.278 Verification LBA range: start 0x20000 length 0x20000 00:08:43.278 Nvme3n1 : 5.08 1970.69 7.70 0.00 0.00 64226.68 13107.20 60091.47 00:08:43.278 [2024-12-16T13:12:57.852Z] =================================================================================================================== 00:08:43.278 [2024-12-16T13:12:57.852Z] Total : 27012.57 105.52 0.00 0.00 65810.24 2495.41 92355.35 00:08:46.581 00:08:46.581 real 0m9.676s 00:08:46.581 user 0m18.120s 00:08:46.581 sys 0m0.290s 00:08:46.581 ************************************ 00:08:46.581 END TEST bdev_verify 00:08:46.581 ************************************ 00:08:46.581 13:13:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:46.581 13:13:01 -- common/autotest_common.sh@10 -- # set +x 00:08:46.581 13:13:01 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:46.581 13:13:01 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:46.581 13:13:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.581 13:13:01 -- common/autotest_common.sh@10 -- # set +x 00:08:46.581 ************************************ 00:08:46.581 START TEST bdev_verify_big_io 00:08:46.581 ************************************ 00:08:46.581 13:13:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:46.842 [2024-12-16 13:13:01.203062] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
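Both verify stages drive the same bdevperf binary against the generated bdev.json; the only differences between bdev_verify above and bdev_verify_big_io starting here are the I/O size (-o 4096 vs -o 65536) and the timing of the run. The invocation, with flags exactly as traced (the flag glosses in the comments are assumptions from common bdevperf usage; -C is kept as traced, its meaning is not shown in this log):

# -q queue depth, -o I/O size in bytes, -w workload, -t run time in
# seconds, -m reactor core mask (0x3 = the two cores started above).
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
json=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
"$bdevperf" --json "$json" -q 128 -o 65536 -w verify -t 5 -C -m 0x3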
00:08:46.842 [2024-12-16 13:13:01.203176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62454 ] 00:08:46.842 [2024-12-16 13:13:01.351258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:47.102 [2024-12-16 13:13:01.528405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.102 [2024-12-16 13:13:01.528481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.673 Running I/O for 5 seconds... 00:08:54.246 00:08:54.246 Latency(us) 00:08:54.246 [2024-12-16T13:13:08.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.246 [2024-12-16T13:13:08.820Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:54.246 Verification LBA range: start 0x0 length 0x5e80 00:08:54.246 Nvme0n1p1 : 5.41 224.25 14.02 0.00 0.00 560719.41 49807.36 793691.37 00:08:54.246 [2024-12-16T13:13:08.820Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:54.246 Verification LBA range: start 0x5e80 length 0x5e80 00:08:54.246 Nvme0n1p1 : 5.50 205.23 12.83 0.00 0.00 609301.19 45976.02 832408.02 00:08:54.246 [2024-12-16T13:13:08.820Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:54.246 Verification LBA range: start 0x0 length 0x5e7f 00:08:54.246 Nvme0n1p2 : 5.42 224.18 14.01 0.00 0.00 553602.72 50210.66 725937.23 00:08:54.246 [2024-12-16T13:13:08.820Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:54.247 Verification LBA range: start 0x5e7f length 0x5e7f 00:08:54.247 Nvme0n1p2 : 5.50 205.17 12.82 0.00 0.00 598881.52 46580.97 751748.33 00:08:54.247 [2024-12-16T13:13:08.821Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:54.247 Verification LBA range: start 0x0 length 0xa000 00:08:54.247 Nvme1n1 : 5.42 224.13 14.01 0.00 0.00 544822.25 50815.61 677541.42 00:08:54.247 [2024-12-16T13:13:08.821Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:54.247 Verification LBA range: start 0xa000 length 0xa000 00:08:54.247 Nvme1n1 : 5.52 213.01 13.31 0.00 0.00 573260.60 11040.30 680767.80 00:08:54.247 [2024-12-16T13:13:08.821Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:54.247 Verification LBA range: start 0x0 length 0x8000 00:08:54.247 Nvme2n1 : 5.46 229.54 14.35 0.00 0.00 523455.49 40329.85 600108.11 00:08:54.247 [2024-12-16T13:13:08.821Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:54.247 Verification LBA range: start 0x8000 length 0x8000 00:08:54.247 Nvme2n1 : 5.52 212.94 13.31 0.00 0.00 564065.13 11846.89 648503.93 00:08:54.247 [2024-12-16T13:13:08.821Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:54.247 Verification LBA range: start 0x0 length 0x8000 00:08:54.247 Nvme2n2 : 5.46 229.48 14.34 0.00 0.00 515232.61 40733.14 577523.40 00:08:54.247 [2024-12-16T13:13:08.821Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:54.247 Verification LBA range: start 0x8000 length 0x8000 00:08:54.247 Nvme2n2 : 5.52 212.87 13.30 0.00 0.00 555250.75 12653.49 600108.11 00:08:54.247 [2024-12-16T13:13:08.821Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:54.247 Verification LBA range: start 0x0 length 
0x8000 00:08:54.247 Nvme2n3 : 5.50 245.18 15.32 0.00 0.00 477697.57 21677.29 583976.17 00:08:54.247 [2024-12-16T13:13:08.821Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:54.247 Verification LBA range: start 0x8000 length 0x8000 00:08:54.247 Nvme2n3 : 5.53 219.53 13.72 0.00 0.00 531356.60 8015.56 858219.13 00:08:54.247 [2024-12-16T13:13:08.821Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:54.247 Verification LBA range: start 0x0 length 0x2000 00:08:54.247 Nvme3n1 : 5.52 267.70 16.73 0.00 0.00 432703.95 1159.48 642051.15 00:08:54.247 [2024-12-16T13:13:08.821Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:54.247 Verification LBA range: start 0x2000 length 0x2000 00:08:54.247 Nvme3n1 : 5.54 234.11 14.63 0.00 0.00 491394.71 4285.05 877577.45 00:08:54.247 [2024-12-16T13:13:08.821Z] =================================================================================================================== 00:08:54.247 [2024-12-16T13:13:08.821Z] Total : 3147.33 196.71 0.00 0.00 534764.57 1159.48 877577.45 00:08:55.624 00:08:55.624 real 0m8.645s 00:08:55.624 user 0m16.175s 00:08:55.624 sys 0m0.243s 00:08:55.624 13:13:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:55.624 13:13:09 -- common/autotest_common.sh@10 -- # set +x 00:08:55.624 ************************************ 00:08:55.624 END TEST bdev_verify_big_io 00:08:55.624 ************************************ 00:08:55.624 13:13:09 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:55.624 13:13:09 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:55.624 13:13:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.624 13:13:09 -- common/autotest_common.sh@10 -- # set +x 00:08:55.624 ************************************ 00:08:55.624 START TEST bdev_write_zeroes 00:08:55.624 ************************************ 00:08:55.624 13:13:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:55.624 [2024-12-16 13:13:09.900451] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:55.624 [2024-12-16 13:13:09.900560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62569 ] 00:08:55.624 [2024-12-16 13:13:10.048241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.885 [2024-12-16 13:13:10.230341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.452 Running I/O for 1 seconds... 
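Every stage in this log runs through the same run_test wrapper, which prints the START TEST banner seen above, times the test body, and prints the END TEST banner alongside the real/user/sys figures that follow each stage. A paraphrased sketch of that wrapper, reconstructed from the banners and timing lines in the trace rather than copied from autotest_common.sh (banner width and exact output ordering are approximations):

run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                  # the traced real/user/sys lines come from this
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return "$rc"
}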
00:08:57.387 00:08:57.387 Latency(us) 00:08:57.387 [2024-12-16T13:13:11.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.387 [2024-12-16T13:13:11.961Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:57.387 Nvme0n1p1 : 1.01 9653.49 37.71 0.00 0.00 13217.88 6175.51 25206.15 00:08:57.387 [2024-12-16T13:13:11.961Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:57.387 Nvme0n1p2 : 1.02 9641.73 37.66 0.00 0.00 13212.03 6074.68 26819.35 00:08:57.387 [2024-12-16T13:13:11.961Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:57.387 Nvme1n1 : 1.02 9630.89 37.62 0.00 0.00 13183.04 9275.86 22181.42 00:08:57.387 [2024-12-16T13:13:11.961Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:57.387 Nvme2n1 : 1.02 9659.28 37.73 0.00 0.00 13121.20 8519.68 21475.64 00:08:57.387 [2024-12-16T13:13:11.961Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:57.387 Nvme2n2 : 1.02 9647.70 37.69 0.00 0.00 13086.19 8922.98 20467.40 00:08:57.387 [2024-12-16T13:13:11.961Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:57.387 Nvme2n3 : 1.02 9689.68 37.85 0.00 0.00 13016.01 4738.76 20669.05 00:08:57.387 [2024-12-16T13:13:11.961Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:57.387 Nvme3n1 : 1.02 9678.87 37.81 0.00 0.00 13010.27 5016.02 20265.75 00:08:57.387 [2024-12-16T13:13:11.961Z] =================================================================================================================== 00:08:57.387 [2024-12-16T13:13:11.961Z] Total : 67601.64 264.07 0.00 0.00 13120.51 4738.76 26819.35 00:08:58.331 00:08:58.331 real 0m2.870s 00:08:58.331 user 0m2.563s 00:08:58.331 sys 0m0.188s 00:08:58.331 13:13:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:58.331 13:13:12 -- common/autotest_common.sh@10 -- # set +x 00:08:58.331 ************************************ 00:08:58.331 END TEST bdev_write_zeroes 00:08:58.331 ************************************ 00:08:58.331 13:13:12 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:58.331 13:13:12 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:58.331 13:13:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:58.331 13:13:12 -- common/autotest_common.sh@10 -- # set +x 00:08:58.331 ************************************ 00:08:58.331 START TEST bdev_json_nonenclosed 00:08:58.331 ************************************ 00:08:58.331 13:13:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:58.331 [2024-12-16 13:13:12.852041] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:58.331 [2024-12-16 13:13:12.852182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62622 ] 00:08:58.640 [2024-12-16 13:13:13.005343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.925 [2024-12-16 13:13:13.226442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.925 [2024-12-16 13:13:13.226645] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:58.925 [2024-12-16 13:13:13.226666] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:59.186 00:08:59.186 real 0m0.751s 00:08:59.186 user 0m0.530s 00:08:59.186 sys 0m0.113s 00:08:59.186 13:13:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:59.186 ************************************ 00:08:59.186 END TEST bdev_json_nonenclosed 00:08:59.186 ************************************ 00:08:59.186 13:13:13 -- common/autotest_common.sh@10 -- # set +x 00:08:59.186 13:13:13 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:59.186 13:13:13 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:59.186 13:13:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:59.186 13:13:13 -- common/autotest_common.sh@10 -- # set +x 00:08:59.186 ************************************ 00:08:59.186 START TEST bdev_json_nonarray 00:08:59.186 ************************************ 00:08:59.186 13:13:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:59.186 [2024-12-16 13:13:13.659316] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:59.186 [2024-12-16 13:13:13.659430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62647 ] 00:08:59.447 [2024-12-16 13:13:13.806147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.708 [2024-12-16 13:13:14.019307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.708 [2024-12-16 13:13:14.019510] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:59.708 [2024-12-16 13:13:14.019531] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:59.968 00:08:59.968 real 0m0.723s 00:08:59.968 user 0m0.515s 00:08:59.968 sys 0m0.101s 00:08:59.968 13:13:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:59.968 ************************************ 00:08:59.968 END TEST bdev_json_nonarray 00:08:59.968 ************************************ 00:08:59.968 13:13:14 -- common/autotest_common.sh@10 -- # set +x 00:08:59.968 13:13:14 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:08:59.968 13:13:14 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:08:59.968 13:13:14 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:59.968 13:13:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:59.968 13:13:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:59.968 13:13:14 -- common/autotest_common.sh@10 -- # set +x 00:08:59.968 ************************************ 00:08:59.968 START TEST bdev_gpt_uuid 00:08:59.968 ************************************ 00:08:59.968 13:13:14 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid 00:08:59.968 13:13:14 -- bdev/blockdev.sh@612 -- # local bdev 00:08:59.968 13:13:14 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:08:59.968 13:13:14 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=62678 00:08:59.968 13:13:14 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:59.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.968 13:13:14 -- bdev/blockdev.sh@47 -- # waitforlisten 62678 00:08:59.968 13:13:14 -- common/autotest_common.sh@829 -- # '[' -z 62678 ']' 00:08:59.968 13:13:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.968 13:13:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:59.968 13:13:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.968 13:13:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:59.968 13:13:14 -- common/autotest_common.sh@10 -- # set +x 00:08:59.968 13:13:14 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:59.968 [2024-12-16 13:13:14.489564] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:59.968 [2024-12-16 13:13:14.489752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62678 ] 00:09:00.230 [2024-12-16 13:13:14.648371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.490 [2024-12-16 13:13:14.840285] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:00.490 [2024-12-16 13:13:14.840512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.433 13:13:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:01.433 13:13:15 -- common/autotest_common.sh@862 -- # return 0 00:09:01.433 13:13:15 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:01.433 13:13:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.433 13:13:15 -- common/autotest_common.sh@10 -- # set +x 00:09:02.005 Some configs were skipped because the RPC state that can call them passed over. 
00:09:02.005 13:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.005 13:13:16 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:09:02.005 13:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.005 13:13:16 -- common/autotest_common.sh@10 -- # set +x 00:09:02.005 13:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.005 13:13:16 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:02.005 13:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.005 13:13:16 -- common/autotest_common.sh@10 -- # set +x 00:09:02.005 13:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.005 13:13:16 -- bdev/blockdev.sh@619 -- # bdev='[ 00:09:02.005 { 00:09:02.005 "name": "Nvme0n1p1", 00:09:02.005 "aliases": [ 00:09:02.005 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:02.005 ], 00:09:02.005 "product_name": "GPT Disk", 00:09:02.005 "block_size": 4096, 00:09:02.005 "num_blocks": 774144, 00:09:02.005 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:02.005 "md_size": 64, 00:09:02.005 "md_interleave": false, 00:09:02.005 "dif_type": 0, 00:09:02.005 "assigned_rate_limits": { 00:09:02.005 "rw_ios_per_sec": 0, 00:09:02.005 "rw_mbytes_per_sec": 0, 00:09:02.005 "r_mbytes_per_sec": 0, 00:09:02.005 "w_mbytes_per_sec": 0 00:09:02.005 }, 00:09:02.005 "claimed": false, 00:09:02.005 "zoned": false, 00:09:02.005 "supported_io_types": { 00:09:02.005 "read": true, 00:09:02.005 "write": true, 00:09:02.005 "unmap": true, 00:09:02.005 "write_zeroes": true, 00:09:02.005 "flush": true, 00:09:02.005 "reset": true, 00:09:02.005 "compare": true, 00:09:02.005 "compare_and_write": false, 00:09:02.005 "abort": true, 00:09:02.005 "nvme_admin": false, 00:09:02.005 "nvme_io": false 00:09:02.005 }, 00:09:02.005 "driver_specific": { 00:09:02.005 "gpt": { 00:09:02.005 "base_bdev": "Nvme0n1", 00:09:02.005 "offset_blocks": 256, 00:09:02.005 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:02.005 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:02.005 "partition_name": "SPDK_TEST_first" 00:09:02.005 } 00:09:02.005 } 00:09:02.005 } 00:09:02.005 ]' 00:09:02.005 13:13:16 -- bdev/blockdev.sh@620 -- # jq -r length 00:09:02.005 13:13:16 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:09:02.005 13:13:16 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:09:02.005 13:13:16 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:02.005 13:13:16 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:02.005 13:13:16 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:02.005 13:13:16 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:02.005 13:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.005 13:13:16 -- common/autotest_common.sh@10 -- # set +x 00:09:02.005 13:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.005 13:13:16 -- bdev/blockdev.sh@624 -- # bdev='[ 00:09:02.005 { 00:09:02.005 "name": "Nvme0n1p2", 00:09:02.005 "aliases": [ 00:09:02.005 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:02.005 ], 00:09:02.005 "product_name": "GPT Disk", 00:09:02.005 "block_size": 4096, 00:09:02.005 "num_blocks": 774143, 00:09:02.005 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:09:02.005 "md_size": 64, 00:09:02.005 "md_interleave": false, 00:09:02.005 "dif_type": 0, 00:09:02.005 "assigned_rate_limits": { 00:09:02.005 "rw_ios_per_sec": 0, 00:09:02.005 "rw_mbytes_per_sec": 0, 00:09:02.005 "r_mbytes_per_sec": 0, 00:09:02.005 "w_mbytes_per_sec": 0 00:09:02.005 }, 00:09:02.005 "claimed": false, 00:09:02.005 "zoned": false, 00:09:02.005 "supported_io_types": { 00:09:02.005 "read": true, 00:09:02.005 "write": true, 00:09:02.005 "unmap": true, 00:09:02.005 "write_zeroes": true, 00:09:02.005 "flush": true, 00:09:02.005 "reset": true, 00:09:02.005 "compare": true, 00:09:02.005 "compare_and_write": false, 00:09:02.005 "abort": true, 00:09:02.005 "nvme_admin": false, 00:09:02.005 "nvme_io": false 00:09:02.005 }, 00:09:02.005 "driver_specific": { 00:09:02.005 "gpt": { 00:09:02.005 "base_bdev": "Nvme0n1", 00:09:02.005 "offset_blocks": 774400, 00:09:02.005 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:02.005 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:02.005 "partition_name": "SPDK_TEST_second" 00:09:02.005 } 00:09:02.005 } 00:09:02.005 } 00:09:02.005 ]' 00:09:02.005 13:13:16 -- bdev/blockdev.sh@625 -- # jq -r length 00:09:02.005 13:13:16 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:09:02.005 13:13:16 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:09:02.005 13:13:16 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:02.005 13:13:16 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:02.005 13:13:16 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:02.005 13:13:16 -- bdev/blockdev.sh@629 -- # killprocess 62678 00:09:02.005 13:13:16 -- common/autotest_common.sh@936 -- # '[' -z 62678 ']' 00:09:02.005 13:13:16 -- common/autotest_common.sh@940 -- # kill -0 62678 00:09:02.005 13:13:16 -- common/autotest_common.sh@941 -- # uname 00:09:02.005 13:13:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:02.005 13:13:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62678 00:09:02.005 13:13:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:02.005 killing process with pid 62678 00:09:02.005 13:13:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:02.005 13:13:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62678' 00:09:02.005 13:13:16 -- common/autotest_common.sh@955 -- # kill 62678 00:09:02.005 13:13:16 -- common/autotest_common.sh@960 -- # wait 62678 00:09:03.916 00:09:03.916 real 0m3.606s 00:09:03.916 user 0m3.799s 00:09:03.916 sys 0m0.496s 00:09:03.916 13:13:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:03.916 ************************************ 00:09:03.916 END TEST bdev_gpt_uuid 00:09:03.916 ************************************ 00:09:03.916 13:13:18 -- common/autotest_common.sh@10 -- # set +x 00:09:03.916 13:13:18 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:09:03.916 13:13:18 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:09:03.916 13:13:18 -- bdev/blockdev.sh@809 -- # cleanup 00:09:03.916 13:13:18 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:03.916 13:13:18 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:03.916 13:13:18 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 
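For readers decoding the jq assertions in the gpt_uuid trace above: the test fetches each partition bdev over RPC by its unique partition GUID, then checks that the same GUID comes back both as the bdev alias and inside driver_specific.gpt. Condensed, using the first GUID from this run (rpc.py here stands in for the rpc_cmd helper seen in the trace):

    # Sketch: round-trip one GPT partition GUID through the bdev layer.
    guid=6f89f330-603b-4116-ac73-2ca8eae53030
    bdev=$(scripts/rpc.py bdev_get_bdevs -b "$guid")
    [[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "$guid" ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "$guid" ]]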
00:09:03.916 13:13:18 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:09:03.916 13:13:18 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:09:03.916 13:13:18 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:03.916 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:04.178 Waiting for block devices as requested 00:09:04.178 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:09:04.178 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:09:04.437 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:09:04.437 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:09:09.724 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:09:09.724 13:13:23 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:09:09.724 13:13:23 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:09:09.724 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:09.724 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:09:09.724 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:09.724 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:09:09.724 13:13:24 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:09:09.724 00:09:09.724 real 1m1.786s 00:09:09.724 user 1m19.052s 00:09:09.724 sys 0m8.703s 00:09:09.724 13:13:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:09.724 13:13:24 -- common/autotest_common.sh@10 -- # set +x 00:09:09.724 ************************************ 00:09:09.724 END TEST blockdev_nvme_gpt 00:09:09.724 ************************************ 00:09:09.986 13:13:24 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:09.986 13:13:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:09.986 13:13:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:09.986 13:13:24 -- common/autotest_common.sh@10 -- # set +x 00:09:09.986 ************************************ 00:09:09.986 START TEST nvme 00:09:09.986 ************************************ 00:09:09.986 13:13:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:09.986 * Looking for test storage... 
00:09:09.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:09.986 13:13:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:09.986 13:13:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:09.986 13:13:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:09.986 13:13:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:09.986 13:13:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:09.986 13:13:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:09.986 13:13:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:09.986 13:13:24 -- scripts/common.sh@335 -- # IFS=.-: 00:09:09.986 13:13:24 -- scripts/common.sh@335 -- # read -ra ver1 00:09:09.986 13:13:24 -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.986 13:13:24 -- scripts/common.sh@336 -- # read -ra ver2 00:09:09.986 13:13:24 -- scripts/common.sh@337 -- # local 'op=<' 00:09:09.986 13:13:24 -- scripts/common.sh@339 -- # ver1_l=2 00:09:09.986 13:13:24 -- scripts/common.sh@340 -- # ver2_l=1 00:09:09.986 13:13:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:09.986 13:13:24 -- scripts/common.sh@343 -- # case "$op" in 00:09:09.986 13:13:24 -- scripts/common.sh@344 -- # : 1 00:09:09.986 13:13:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:09.986 13:13:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:09.986 13:13:24 -- scripts/common.sh@364 -- # decimal 1 00:09:09.986 13:13:24 -- scripts/common.sh@352 -- # local d=1 00:09:09.986 13:13:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.986 13:13:24 -- scripts/common.sh@354 -- # echo 1 00:09:09.986 13:13:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:09.986 13:13:24 -- scripts/common.sh@365 -- # decimal 2 00:09:09.986 13:13:24 -- scripts/common.sh@352 -- # local d=2 00:09:09.986 13:13:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.986 13:13:24 -- scripts/common.sh@354 -- # echo 2 00:09:09.986 13:13:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:09.986 13:13:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:09.986 13:13:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:09.986 13:13:24 -- scripts/common.sh@367 -- # return 0 00:09:09.986 13:13:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.986 13:13:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:09.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.986 --rc genhtml_branch_coverage=1 00:09:09.986 --rc genhtml_function_coverage=1 00:09:09.986 --rc genhtml_legend=1 00:09:09.986 --rc geninfo_all_blocks=1 00:09:09.986 --rc geninfo_unexecuted_blocks=1 00:09:09.986 00:09:09.986 ' 00:09:09.986 13:13:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:09.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.986 --rc genhtml_branch_coverage=1 00:09:09.986 --rc genhtml_function_coverage=1 00:09:09.986 --rc genhtml_legend=1 00:09:09.986 --rc geninfo_all_blocks=1 00:09:09.986 --rc geninfo_unexecuted_blocks=1 00:09:09.986 00:09:09.986 ' 00:09:09.986 13:13:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:09.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.986 --rc genhtml_branch_coverage=1 00:09:09.986 --rc genhtml_function_coverage=1 00:09:09.986 --rc genhtml_legend=1 00:09:09.986 --rc geninfo_all_blocks=1 00:09:09.986 --rc geninfo_unexecuted_blocks=1 00:09:09.986 00:09:09.986 ' 00:09:09.986 13:13:24 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:09.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.986 --rc genhtml_branch_coverage=1 00:09:09.986 --rc genhtml_function_coverage=1 00:09:09.986 --rc genhtml_legend=1 00:09:09.986 --rc geninfo_all_blocks=1 00:09:09.986 --rc geninfo_unexecuted_blocks=1 00:09:09.986 00:09:09.986 ' 00:09:09.986 13:13:24 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:10.931 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:10.931 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:09:10.931 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:09:11.192 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:11.192 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:09:11.192 13:13:25 -- nvme/nvme.sh@79 -- # uname 00:09:11.192 13:13:25 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:11.192 13:13:25 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:11.192 13:13:25 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:11.192 13:13:25 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:11.192 13:13:25 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:09:11.192 13:13:25 -- common/autotest_common.sh@1055 -- # echo 0 00:09:11.192 Waiting for stub to ready for secondary processes... 00:09:11.192 13:13:25 -- common/autotest_common.sh@1057 -- # stubpid=63345 00:09:11.192 13:13:25 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:09:11.192 13:13:25 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:11.192 13:13:25 -- common/autotest_common.sh@1061 -- # [[ -e /proc/63345 ]] 00:09:11.192 13:13:25 -- common/autotest_common.sh@1062 -- # sleep 1s 00:09:11.192 13:13:25 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:11.192 [2024-12-16 13:13:25.646787] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:11.192 [2024-12-16 13:13:25.646889] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.135 [2024-12-16 13:13:26.395127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:12.135 [2024-12-16 13:13:26.563372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.135 [2024-12-16 13:13:26.563643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.135 [2024-12-16 13:13:26.563678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:12.135 [2024-12-16 13:13:26.582939] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:12.135 [2024-12-16 13:13:26.596795] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:12.135 [2024-12-16 13:13:26.597102] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:12.135 [2024-12-16 13:13:26.611530] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:12.135 [2024-12-16 13:13:26.611686] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:12.135 [2024-12-16 13:13:26.611791] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:12.135 13:13:26 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:12.135 13:13:26 -- common/autotest_common.sh@1061 -- # [[ -e /proc/63345 ]] 00:09:12.135 13:13:26 -- common/autotest_common.sh@1062 -- # sleep 1s 00:09:12.135 [2024-12-16 13:13:26.618656] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:12.135 [2024-12-16 13:13:26.618781] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:12.135 [2024-12-16 13:13:26.618860] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:12.135 [2024-12-16 13:13:26.626141] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:12.135 [2024-12-16 13:13:26.626283] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:12.135 [2024-12-16 13:13:26.626511] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:12.135 [2024-12-16 13:13:26.626589] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:12.135 [2024-12-16 13:13:26.626696] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:13.077 done. 00:09:13.077 13:13:27 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:13.077 13:13:27 -- common/autotest_common.sh@1064 -- # echo done. 
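The "Waiting for stub" exchange above is a plain polling loop: autotest launches the stub app (-s 4096 -i 0 -m 0xE), then spins until the stub creates its /var/run/spdk_stub0 marker, bailing out if the process dies first. In outline, paraphrasing the autotest_common.sh trace rather than quoting it:

    # Sketch of the wait-for-stub poll traced above (flags and paths from this run).
    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    echo 'Waiting for stub to ready for secondary processes...'
    while [ ! -e /var/run/spdk_stub0 ]; do
        # Give up if the stub exited before creating its marker file.
        [[ -e /proc/$stubpid ]] || exit 1
        sleep 1s
    done
    echo done.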
00:09:13.077 13:13:27 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:13.077 13:13:27 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:09:13.077 13:13:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:13.077 13:13:27 -- common/autotest_common.sh@10 -- # set +x 00:09:13.077 ************************************ 00:09:13.077 START TEST nvme_reset 00:09:13.077 ************************************ 00:09:13.077 13:13:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:13.338 Initializing NVMe Controllers 00:09:13.338 Skipping QEMU NVMe SSD at 0000:00:09.0 00:09:13.338 Skipping QEMU NVMe SSD at 0000:00:06.0 00:09:13.338 Skipping QEMU NVMe SSD at 0000:00:07.0 00:09:13.338 Skipping QEMU NVMe SSD at 0000:00:08.0 00:09:13.338 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:13.338 00:09:13.338 real 0m0.223s 00:09:13.338 user 0m0.071s 00:09:13.338 sys 0m0.101s 00:09:13.338 13:13:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:13.338 ************************************ 00:09:13.338 END TEST nvme_reset 00:09:13.338 ************************************ 00:09:13.338 13:13:27 -- common/autotest_common.sh@10 -- # set +x 00:09:13.338 13:13:27 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:13.338 13:13:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:13.338 13:13:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:13.338 13:13:27 -- common/autotest_common.sh@10 -- # set +x 00:09:13.602 ************************************ 00:09:13.602 START TEST nvme_identify 00:09:13.602 ************************************ 00:09:13.602 13:13:27 -- common/autotest_common.sh@1114 -- # nvme_identify 00:09:13.602 13:13:27 -- nvme/nvme.sh@12 -- # bdfs=() 00:09:13.602 13:13:27 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:13.602 13:13:27 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:13.602 13:13:27 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:13.602 13:13:27 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:13.602 13:13:27 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:13.602 13:13:27 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:13.602 13:13:27 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:13.602 13:13:27 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:13.602 13:13:27 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:13.602 13:13:27 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:13.602 13:13:27 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:13.602 [2024-12-16 13:13:28.142599] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 63387 terminated unexpected 00:09:13.602 ===================================================== 00:09:13.602 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:13.602 ===================================================== 00:09:13.602 Controller Capabilities/Features 00:09:13.602 ================================ 00:09:13.602 Vendor ID: 1b36 00:09:13.602 Subsystem Vendor ID: 1af4 00:09:13.602 Serial Number: 12343 00:09:13.602 Model Number: QEMU NVMe Ctrl 00:09:13.602 Firmware Version: 8.0.0 00:09:13.602 Recommended Arb 
Burst: 6 00:09:13.602 IEEE OUI Identifier: 00 54 52 00:09:13.602 Multi-path I/O 00:09:13.602 May have multiple subsystem ports: No 00:09:13.602 May have multiple controllers: Yes 00:09:13.602 Associated with SR-IOV VF: No 00:09:13.602 Max Data Transfer Size: 524288 00:09:13.602 Max Number of Namespaces: 256 00:09:13.602 Max Number of I/O Queues: 64 00:09:13.602 NVMe Specification Version (VS): 1.4 00:09:13.602 NVMe Specification Version (Identify): 1.4 00:09:13.602 Maximum Queue Entries: 2048 00:09:13.602 Contiguous Queues Required: Yes 00:09:13.602 Arbitration Mechanisms Supported 00:09:13.602 Weighted Round Robin: Not Supported 00:09:13.602 Vendor Specific: Not Supported 00:09:13.602 Reset Timeout: 7500 ms 00:09:13.602 Doorbell Stride: 4 bytes 00:09:13.602 NVM Subsystem Reset: Not Supported 00:09:13.602 Command Sets Supported 00:09:13.602 NVM Command Set: Supported 00:09:13.602 Boot Partition: Not Supported 00:09:13.602 Memory Page Size Minimum: 4096 bytes 00:09:13.602 Memory Page Size Maximum: 65536 bytes 00:09:13.602 Persistent Memory Region: Not Supported 00:09:13.602 Optional Asynchronous Events Supported 00:09:13.602 Namespace Attribute Notices: Supported 00:09:13.602 Firmware Activation Notices: Not Supported 00:09:13.602 ANA Change Notices: Not Supported 00:09:13.602 PLE Aggregate Log Change Notices: Not Supported 00:09:13.602 LBA Status Info Alert Notices: Not Supported 00:09:13.602 EGE Aggregate Log Change Notices: Not Supported 00:09:13.602 Normal NVM Subsystem Shutdown event: Not Supported 00:09:13.602 Zone Descriptor Change Notices: Not Supported 00:09:13.602 Discovery Log Change Notices: Not Supported 00:09:13.602 Controller Attributes 00:09:13.602 128-bit Host Identifier: Not Supported 00:09:13.602 Non-Operational Permissive Mode: Not Supported 00:09:13.602 NVM Sets: Not Supported 00:09:13.602 Read Recovery Levels: Not Supported 00:09:13.602 Endurance Groups: Supported 00:09:13.602 Predictable Latency Mode: Not Supported 00:09:13.602 Traffic Based Keep ALive: Not Supported 00:09:13.602 Namespace Granularity: Not Supported 00:09:13.602 SQ Associations: Not Supported 00:09:13.602 UUID List: Not Supported 00:09:13.602 Multi-Domain Subsystem: Not Supported 00:09:13.602 Fixed Capacity Management: Not Supported 00:09:13.602 Variable Capacity Management: Not Supported 00:09:13.602 Delete Endurance Group: Not Supported 00:09:13.602 Delete NVM Set: Not Supported 00:09:13.602 Extended LBA Formats Supported: Supported 00:09:13.602 Flexible Data Placement Supported: Supported 00:09:13.602 00:09:13.602 Controller Memory Buffer Support 00:09:13.602 ================================ 00:09:13.602 Supported: No 00:09:13.602 00:09:13.602 Persistent Memory Region Support 00:09:13.602 ================================ 00:09:13.602 Supported: No 00:09:13.602 00:09:13.602 Admin Command Set Attributes 00:09:13.603 ============================ 00:09:13.603 Security Send/Receive: Not Supported 00:09:13.603 Format NVM: Supported 00:09:13.603 Firmware Activate/Download: Not Supported 00:09:13.603 Namespace Management: Supported 00:09:13.603 Device Self-Test: Not Supported 00:09:13.603 Directives: Supported 00:09:13.603 NVMe-MI: Not Supported 00:09:13.603 Virtualization Management: Not Supported 00:09:13.603 Doorbell Buffer Config: Supported 00:09:13.603 Get LBA Status Capability: Not Supported 00:09:13.603 Command & Feature Lockdown Capability: Not Supported 00:09:13.603 Abort Command Limit: 4 00:09:13.603 Async Event Request Limit: 4 00:09:13.603 Number of Firmware Slots: N/A 00:09:13.603 Firmware 
Slot 1 Read-Only: N/A 00:09:13.603 Firmware Activation Without Reset: N/A 00:09:13.603 Multiple Update Detection Support: N/A 00:09:13.603 Firmware Update Granularity: No Information Provided 00:09:13.603 Per-Namespace SMART Log: Yes 00:09:13.603 Asymmetric Namespace Access Log Page: Not Supported 00:09:13.603 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:13.603 Command Effects Log Page: Supported 00:09:13.603 Get Log Page Extended Data: Supported 00:09:13.603 Telemetry Log Pages: Not Supported 00:09:13.603 Persistent Event Log Pages: Not Supported 00:09:13.603 Supported Log Pages Log Page: May Support 00:09:13.603 Commands Supported & Effects Log Page: Not Supported 00:09:13.603 Feature Identifiers & Effects Log Page:May Support 00:09:13.603 NVMe-MI Commands & Effects Log Page: May Support 00:09:13.603 Data Area 4 for Telemetry Log: Not Supported 00:09:13.603 Error Log Page Entries Supported: 1 00:09:13.603 Keep Alive: Not Supported 00:09:13.603 00:09:13.603 NVM Command Set Attributes 00:09:13.603 ========================== 00:09:13.603 Submission Queue Entry Size 00:09:13.603 Max: 64 00:09:13.603 Min: 64 00:09:13.603 Completion Queue Entry Size 00:09:13.603 Max: 16 00:09:13.603 Min: 16 00:09:13.603 Number of Namespaces: 256 00:09:13.603 Compare Command: Supported 00:09:13.603 Write Uncorrectable Command: Not Supported 00:09:13.603 Dataset Management Command: Supported 00:09:13.603 Write Zeroes Command: Supported 00:09:13.603 Set Features Save Field: Supported 00:09:13.603 Reservations: Not Supported 00:09:13.603 Timestamp: Supported 00:09:13.603 Copy: Supported 00:09:13.603 Volatile Write Cache: Present 00:09:13.603 Atomic Write Unit (Normal): 1 00:09:13.603 Atomic Write Unit (PFail): 1 00:09:13.603 Atomic Compare & Write Unit: 1 00:09:13.603 Fused Compare & Write: Not Supported 00:09:13.603 Scatter-Gather List 00:09:13.603 SGL Command Set: Supported 00:09:13.603 SGL Keyed: Not Supported 00:09:13.603 SGL Bit Bucket Descriptor: Not Supported 00:09:13.603 SGL Metadata Pointer: Not Supported 00:09:13.603 Oversized SGL: Not Supported 00:09:13.603 SGL Metadata Address: Not Supported 00:09:13.603 SGL Offset: Not Supported 00:09:13.603 Transport SGL Data Block: Not Supported 00:09:13.603 Replay Protected Memory Block: Not Supported 00:09:13.603 00:09:13.603 Firmware Slot Information 00:09:13.603 ========================= 00:09:13.603 Active slot: 1 00:09:13.603 Slot 1 Firmware Revision: 1.0 00:09:13.603 00:09:13.603 00:09:13.603 Commands Supported and Effects 00:09:13.603 ============================== 00:09:13.603 Admin Commands 00:09:13.603 -------------- 00:09:13.603 Delete I/O Submission Queue (00h): Supported 00:09:13.603 Create I/O Submission Queue (01h): Supported 00:09:13.603 Get Log Page (02h): Supported 00:09:13.603 Delete I/O Completion Queue (04h): Supported 00:09:13.603 Create I/O Completion Queue (05h): Supported 00:09:13.603 Identify (06h): Supported 00:09:13.603 Abort (08h): Supported 00:09:13.603 Set Features (09h): Supported 00:09:13.603 Get Features (0Ah): Supported 00:09:13.603 Asynchronous Event Request (0Ch): Supported 00:09:13.603 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:13.603 Directive Send (19h): Supported 00:09:13.603 Directive Receive (1Ah): Supported 00:09:13.603 Virtualization Management (1Ch): Supported 00:09:13.603 Doorbell Buffer Config (7Ch): Supported 00:09:13.603 Format NVM (80h): Supported LBA-Change 00:09:13.603 I/O Commands 00:09:13.603 ------------ 00:09:13.603 Flush (00h): Supported LBA-Change 00:09:13.603 Write (01h): 
Supported LBA-Change 00:09:13.603 Read (02h): Supported 00:09:13.603 Compare (05h): Supported 00:09:13.603 Write Zeroes (08h): Supported LBA-Change 00:09:13.603 Dataset Management (09h): Supported LBA-Change 00:09:13.603 Unknown (0Ch): Supported 00:09:13.603 Unknown (12h): Supported 00:09:13.603 Copy (19h): Supported LBA-Change 00:09:13.603 Unknown (1Dh): Supported LBA-Change 00:09:13.603 00:09:13.603 Error Log 00:09:13.603 ========= 00:09:13.603 00:09:13.603 Arbitration 00:09:13.603 =========== 00:09:13.603 Arbitration Burst: no limit 00:09:13.603 00:09:13.603 Power Management 00:09:13.603 ================ 00:09:13.603 Number of Power States: 1 00:09:13.603 Current Power State: Power State #0 00:09:13.603 Power State #0: 00:09:13.603 Max Power: 25.00 W 00:09:13.603 Non-Operational State: Operational 00:09:13.603 Entry Latency: 16 microseconds 00:09:13.603 Exit Latency: 4 microseconds 00:09:13.603 Relative Read Throughput: 0 00:09:13.603 Relative Read Latency: 0 00:09:13.603 Relative Write Throughput: 0 00:09:13.603 Relative Write Latency: 0 00:09:13.603 Idle Power: Not Reported 00:09:13.603 Active Power: Not Reported 00:09:13.603 Non-Operational Permissive Mode: Not Supported 00:09:13.603 00:09:13.603 Health Information 00:09:13.603 ================== 00:09:13.603 Critical Warnings: 00:09:13.603 Available Spare Space: OK 00:09:13.603 Temperature: OK 00:09:13.603 Device Reliability: OK 00:09:13.603 Read Only: No 00:09:13.603 Volatile Memory Backup: OK 00:09:13.603 Current Temperature: 323 Kelvin (50 Celsius) 00:09:13.603 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:13.603 Available Spare: 0% 00:09:13.603 Available Spare Threshold: 0% 00:09:13.603 Life Percentage Used: 0% 00:09:13.603 Data Units Read: 1371 00:09:13.603 Data Units Written: 638 00:09:13.603 Host Read Commands: 62974 00:09:13.603 Host Write Commands: 30996 00:09:13.603 Controller Busy Time: 0 minutes 00:09:13.603 Power Cycles: 0 00:09:13.603 Power On Hours: 0 hours 00:09:13.603 Unsafe Shutdowns: 0 00:09:13.603 Unrecoverable Media Errors: 0 00:09:13.603 Lifetime Error Log Entries: 0 00:09:13.603 Warning Temperature Time: 0 minutes 00:09:13.603 Critical Temperature Time: 0 minutes 00:09:13.603 00:09:13.603 Number of Queues 00:09:13.603 ================ 00:09:13.603 Number of I/O Submission Queues: 64 00:09:13.603 Number of I/O Completion Queues: 64 00:09:13.603 00:09:13.603 ZNS Specific Controller Data 00:09:13.603 ============================ 00:09:13.603 Zone Append Size Limit: 0 00:09:13.603 00:09:13.603 00:09:13.603 Active Namespaces 00:09:13.603 ================= 00:09:13.603 Namespace ID:1 00:09:13.603 Error Recovery Timeout: Unlimited 00:09:13.603 Command Set Identifier: NVM (00h) 00:09:13.603 Deallocate: Supported 00:09:13.603 Deallocated/Unwritten Error: Supported 00:09:13.603 Deallocated Read Value: All 0x00 00:09:13.603 Deallocate in Write Zeroes: Not Supported 00:09:13.603 Deallocated Guard Field: 0xFFFF 00:09:13.603 Flush: Supported 00:09:13.603 Reservation: Not Supported 00:09:13.603 Namespace Sharing Capabilities: Multiple Controllers 00:09:13.603 Size (in LBAs): 262144 (1GiB) 00:09:13.603 Capacity (in LBAs): 262144 (1GiB) 00:09:13.603 Utilization (in LBAs): 262144 (1GiB) 00:09:13.603 Thin Provisioning: Not Supported 00:09:13.603 Per-NS Atomic Units: No 00:09:13.603 Maximum Single Source Range Length: 128 00:09:13.603 Maximum Copy Length: 128 00:09:13.603 Maximum Source Range Count: 128 00:09:13.603 NGUID/EUI64 Never Reused: No 00:09:13.603 Namespace Write Protected: No 00:09:13.603 Endurance group ID: 1 
00:09:13.603 Number of LBA Formats: 8 00:09:13.603 Current LBA Format: LBA Format #04 00:09:13.603 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:13.603 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:13.603 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:13.603 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:13.603 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:13.603 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:13.603 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:13.603 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:13.603 00:09:13.603 Get Feature FDP: 00:09:13.603 ================ 00:09:13.604 Enabled: Yes 00:09:13.604 FDP configuration index: 0 00:09:13.604 00:09:13.604 FDP configurations log page 00:09:13.604 =========================== 00:09:13.604 Number of FDP configurations: 1 00:09:13.604 Version: 0 00:09:13.604 Size: 112 00:09:13.604 FDP Configuration Descriptor: 0 00:09:13.604 Descriptor Size: 96 00:09:13.604 Reclaim Group Identifier format: 2 00:09:13.604 FDP Volatile Write Cache: Not Present 00:09:13.604 FDP Configuration: Valid 00:09:13.604 Vendor Specific Size: 0 00:09:13.604 Number of Reclaim Groups: 2 00:09:13.604 Number of Reclaim Unit Handles: 8 00:09:13.604 Max Placement Identifiers: 128 00:09:13.604 Number of Namespaces Supported: 256 00:09:13.604 Reclaim unit Nominal Size: 6000000 bytes 00:09:13.604 Estimated Reclaim Unit Time Limit: Not Reported 00:09:13.604 RUH Desc #000: RUH Type: Initially Isolated 00:09:13.604 RUH Desc #001: RUH Type: Initially Isolated 00:09:13.604 RUH Desc #002: RUH Type: Initially Isolated 00:09:13.604 RUH Desc #003: RUH Type: Initially Isolated 00:09:13.604 RUH Desc #004: RUH Type: Initially Isolated 00:09:13.604 RUH Desc #005: RUH Type: Initially Isolated 00:09:13.604 RUH Desc #006: RUH Type: Initially Isolated 00:09:13.604 RUH Desc #007: RUH Type: Initially Isolated 00:09:13.604 00:09:13.604 FDP reclaim unit handle usage log page 00:09:13.604 ====================================== [2024-12-16 13:13:28.145010] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 63387 terminated unexpected 00:09:13.604 Number of Reclaim Unit Handles: 8 00:09:13.604 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:13.604 RUH Usage Desc #001: RUH Attributes: Unused 00:09:13.604 RUH Usage Desc #002: RUH Attributes: Unused 00:09:13.604 RUH Usage Desc #003: RUH Attributes: Unused 00:09:13.604 RUH Usage Desc #004: RUH Attributes: Unused 00:09:13.604 RUH Usage Desc #005: RUH Attributes: Unused 00:09:13.604 RUH Usage Desc #006: RUH Attributes: Unused 00:09:13.604 RUH Usage Desc #007: RUH Attributes: Unused 00:09:13.604 00:09:13.604 FDP statistics log page 00:09:13.604 ======================= 00:09:13.604 Host bytes with metadata written: 432668672 00:09:13.604 Media bytes with metadata written: 432795648 00:09:13.604 Media bytes erased: 0 00:09:13.604 00:09:13.604 FDP events log page 00:09:13.604 =================== 00:09:13.604 Number of FDP events: 0 00:09:13.604 00:09:13.604 ===================================================== 00:09:13.604 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:13.604 ===================================================== 00:09:13.604 Controller Capabilities/Features 00:09:13.604 ================================ 00:09:13.604 Vendor ID: 1b36 00:09:13.604 Subsystem Vendor ID: 1af4 00:09:13.604 Serial Number: 12340 00:09:13.604 Model Number: QEMU NVMe Ctrl 00:09:13.604 Firmware Version: 8.0.0 00:09:13.604 Recommended
Arb Burst: 6 00:09:13.604 IEEE OUI Identifier: 00 54 52 00:09:13.604 Multi-path I/O 00:09:13.604 May have multiple subsystem ports: No 00:09:13.604 May have multiple controllers: No 00:09:13.604 Associated with SR-IOV VF: No 00:09:13.604 Max Data Transfer Size: 524288 00:09:13.604 Max Number of Namespaces: 256 00:09:13.604 Max Number of I/O Queues: 64 00:09:13.604 NVMe Specification Version (VS): 1.4 00:09:13.604 NVMe Specification Version (Identify): 1.4 00:09:13.604 Maximum Queue Entries: 2048 00:09:13.604 Contiguous Queues Required: Yes 00:09:13.604 Arbitration Mechanisms Supported 00:09:13.604 Weighted Round Robin: Not Supported 00:09:13.604 Vendor Specific: Not Supported 00:09:13.604 Reset Timeout: 7500 ms 00:09:13.604 Doorbell Stride: 4 bytes 00:09:13.604 NVM Subsystem Reset: Not Supported 00:09:13.604 Command Sets Supported 00:09:13.604 NVM Command Set: Supported 00:09:13.604 Boot Partition: Not Supported 00:09:13.604 Memory Page Size Minimum: 4096 bytes 00:09:13.604 Memory Page Size Maximum: 65536 bytes 00:09:13.604 Persistent Memory Region: Not Supported 00:09:13.604 Optional Asynchronous Events Supported 00:09:13.604 Namespace Attribute Notices: Supported 00:09:13.604 Firmware Activation Notices: Not Supported 00:09:13.604 ANA Change Notices: Not Supported 00:09:13.604 PLE Aggregate Log Change Notices: Not Supported 00:09:13.604 LBA Status Info Alert Notices: Not Supported 00:09:13.604 EGE Aggregate Log Change Notices: Not Supported 00:09:13.604 Normal NVM Subsystem Shutdown event: Not Supported 00:09:13.604 Zone Descriptor Change Notices: Not Supported 00:09:13.604 Discovery Log Change Notices: Not Supported 00:09:13.604 Controller Attributes 00:09:13.604 128-bit Host Identifier: Not Supported 00:09:13.604 Non-Operational Permissive Mode: Not Supported 00:09:13.604 NVM Sets: Not Supported 00:09:13.604 Read Recovery Levels: Not Supported 00:09:13.604 Endurance Groups: Not Supported 00:09:13.604 Predictable Latency Mode: Not Supported 00:09:13.604 Traffic Based Keep ALive: Not Supported 00:09:13.604 Namespace Granularity: Not Supported 00:09:13.604 SQ Associations: Not Supported 00:09:13.604 UUID List: Not Supported 00:09:13.604 Multi-Domain Subsystem: Not Supported 00:09:13.604 Fixed Capacity Management: Not Supported 00:09:13.604 Variable Capacity Management: Not Supported 00:09:13.604 Delete Endurance Group: Not Supported 00:09:13.604 Delete NVM Set: Not Supported 00:09:13.604 Extended LBA Formats Supported: Supported 00:09:13.604 Flexible Data Placement Supported: Not Supported 00:09:13.604 00:09:13.604 Controller Memory Buffer Support 00:09:13.604 ================================ 00:09:13.604 Supported: No 00:09:13.604 00:09:13.604 Persistent Memory Region Support 00:09:13.604 ================================ 00:09:13.604 Supported: No 00:09:13.604 00:09:13.604 Admin Command Set Attributes 00:09:13.604 ============================ 00:09:13.604 Security Send/Receive: Not Supported 00:09:13.604 Format NVM: Supported 00:09:13.604 Firmware Activate/Download: Not Supported 00:09:13.604 Namespace Management: Supported 00:09:13.604 Device Self-Test: Not Supported 00:09:13.604 Directives: Supported 00:09:13.604 NVMe-MI: Not Supported 00:09:13.604 Virtualization Management: Not Supported 00:09:13.604 Doorbell Buffer Config: Supported 00:09:13.604 Get LBA Status Capability: Not Supported 00:09:13.604 Command & Feature Lockdown Capability: Not Supported 00:09:13.604 Abort Command Limit: 4 00:09:13.604 Async Event Request Limit: 4 00:09:13.604 Number of Firmware Slots: N/A 00:09:13.604 
Firmware Slot 1 Read-Only: N/A 00:09:13.604 Firmware Activation Without Reset: N/A 00:09:13.604 Multiple Update Detection Support: N/A 00:09:13.604 Firmware Update Granularity: No Information Provided 00:09:13.604 Per-Namespace SMART Log: Yes 00:09:13.604 Asymmetric Namespace Access Log Page: Not Supported 00:09:13.604 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:13.604 Command Effects Log Page: Supported 00:09:13.604 Get Log Page Extended Data: Supported 00:09:13.604 Telemetry Log Pages: Not Supported 00:09:13.604 Persistent Event Log Pages: Not Supported 00:09:13.604 Supported Log Pages Log Page: May Support 00:09:13.604 Commands Supported & Effects Log Page: Not Supported 00:09:13.604 Feature Identifiers & Effects Log Page:May Support 00:09:13.604 NVMe-MI Commands & Effects Log Page: May Support 00:09:13.604 Data Area 4 for Telemetry Log: Not Supported 00:09:13.604 Error Log Page Entries Supported: 1 00:09:13.604 Keep Alive: Not Supported 00:09:13.604 00:09:13.604 NVM Command Set Attributes 00:09:13.604 ========================== 00:09:13.604 Submission Queue Entry Size 00:09:13.604 Max: 64 00:09:13.604 Min: 64 00:09:13.604 Completion Queue Entry Size 00:09:13.604 Max: 16 00:09:13.604 Min: 16 00:09:13.604 Number of Namespaces: 256 00:09:13.604 Compare Command: Supported 00:09:13.604 Write Uncorrectable Command: Not Supported 00:09:13.604 Dataset Management Command: Supported 00:09:13.604 Write Zeroes Command: Supported 00:09:13.604 Set Features Save Field: Supported 00:09:13.604 Reservations: Not Supported 00:09:13.604 Timestamp: Supported 00:09:13.604 Copy: Supported 00:09:13.604 Volatile Write Cache: Present 00:09:13.604 Atomic Write Unit (Normal): 1 00:09:13.604 Atomic Write Unit (PFail): 1 00:09:13.604 Atomic Compare & Write Unit: 1 00:09:13.604 Fused Compare & Write: Not Supported 00:09:13.604 Scatter-Gather List 00:09:13.604 SGL Command Set: Supported 00:09:13.604 SGL Keyed: Not Supported 00:09:13.604 SGL Bit Bucket Descriptor: Not Supported 00:09:13.604 SGL Metadata Pointer: Not Supported 00:09:13.604 Oversized SGL: Not Supported 00:09:13.604 SGL Metadata Address: Not Supported 00:09:13.604 SGL Offset: Not Supported 00:09:13.604 Transport SGL Data Block: Not Supported 00:09:13.604 Replay Protected Memory Block: Not Supported 00:09:13.604 00:09:13.604 Firmware Slot Information 00:09:13.604 ========================= 00:09:13.604 Active slot: 1 00:09:13.605 Slot 1 Firmware Revision: 1.0 00:09:13.605 00:09:13.605 00:09:13.605 Commands Supported and Effects 00:09:13.605 ============================== 00:09:13.605 Admin Commands 00:09:13.605 -------------- 00:09:13.605 Delete I/O Submission Queue (00h): Supported 00:09:13.605 Create I/O Submission Queue (01h): Supported 00:09:13.605 Get Log Page (02h): Supported 00:09:13.605 Delete I/O Completion Queue (04h): Supported 00:09:13.605 Create I/O Completion Queue (05h): Supported 00:09:13.605 Identify (06h): Supported 00:09:13.605 Abort (08h): Supported 00:09:13.605 Set Features (09h): Supported 00:09:13.605 Get Features (0Ah): Supported 00:09:13.605 Asynchronous Event Request (0Ch): Supported 00:09:13.605 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:13.605 Directive Send (19h): Supported 00:09:13.605 Directive Receive (1Ah): Supported 00:09:13.605 Virtualization Management (1Ch): Supported 00:09:13.605 Doorbell Buffer Config (7Ch): Supported 00:09:13.605 Format NVM (80h): Supported LBA-Change 00:09:13.605 I/O Commands 00:09:13.605 ------------ 00:09:13.605 Flush (00h): Supported LBA-Change 00:09:13.605 Write (01h): 
Supported LBA-Change 00:09:13.605 Read (02h): Supported 00:09:13.605 Compare (05h): Supported 00:09:13.605 Write Zeroes (08h): Supported LBA-Change 00:09:13.605 Dataset Management (09h): Supported LBA-Change 00:09:13.605 Unknown (0Ch): Supported 00:09:13.605 [2024-12-16 13:13:28.146090] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 63387 terminated unexpected 00:09:13.605 Unknown (12h): Supported 00:09:13.605 Copy (19h): Supported LBA-Change 00:09:13.605 Unknown (1Dh): Supported LBA-Change 00:09:13.605 00:09:13.605 Error Log 00:09:13.605 ========= 00:09:13.605 00:09:13.605 Arbitration 00:09:13.605 =========== 00:09:13.605 Arbitration Burst: no limit 00:09:13.605 00:09:13.605 Power Management 00:09:13.605 ================ 00:09:13.605 Number of Power States: 1 00:09:13.605 Current Power State: Power State #0 00:09:13.605 Power State #0: 00:09:13.605 Max Power: 25.00 W 00:09:13.605 Non-Operational State: Operational 00:09:13.605 Entry Latency: 16 microseconds 00:09:13.605 Exit Latency: 4 microseconds 00:09:13.605 Relative Read Throughput: 0 00:09:13.605 Relative Read Latency: 0 00:09:13.605 Relative Write Throughput: 0 00:09:13.605 Relative Write Latency: 0 00:09:13.605 Idle Power: Not Reported 00:09:13.605 Active Power: Not Reported 00:09:13.605 Non-Operational Permissive Mode: Not Supported 00:09:13.605 00:09:13.605 Health Information 00:09:13.605 ================== 00:09:13.605 Critical Warnings: 00:09:13.605 Available Spare Space: OK 00:09:13.605 Temperature: OK 00:09:13.605 Device Reliability: OK 00:09:13.605 Read Only: No 00:09:13.605 Volatile Memory Backup: OK 00:09:13.605 Current Temperature: 323 Kelvin (50 Celsius) 00:09:13.605 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:13.605 Available Spare: 0% 00:09:13.605 Available Spare Threshold: 0% 00:09:13.605 Life Percentage Used: 0% 00:09:13.605 Data Units Read: 1773 00:09:13.605 Data Units Written: 816 00:09:13.605 Host Read Commands: 84302 00:09:13.605 Host Write Commands: 41825 00:09:13.605 Controller Busy Time: 0 minutes 00:09:13.605 Power Cycles: 0 00:09:13.605 Power On Hours: 0 hours 00:09:13.605 Unsafe Shutdowns: 0 00:09:13.605 Unrecoverable Media Errors: 0 00:09:13.605 Lifetime Error Log Entries: 0 00:09:13.605 Warning Temperature Time: 0 minutes 00:09:13.605 Critical Temperature Time: 0 minutes 00:09:13.605 00:09:13.605 Number of Queues 00:09:13.605 ================ 00:09:13.605 Number of I/O Submission Queues: 64 00:09:13.605 Number of I/O Completion Queues: 64 00:09:13.605 00:09:13.605 ZNS Specific Controller Data 00:09:13.605 ============================ 00:09:13.605 Zone Append Size Limit: 0 00:09:13.605 00:09:13.605 00:09:13.605 Active Namespaces 00:09:13.605 ================= 00:09:13.605 Namespace ID:1 00:09:13.605 Error Recovery Timeout: Unlimited 00:09:13.605 Command Set Identifier: NVM (00h) 00:09:13.605 Deallocate: Supported 00:09:13.605 Deallocated/Unwritten Error: Supported 00:09:13.605 Deallocated Read Value: All 0x00 00:09:13.605 Deallocate in Write Zeroes: Not Supported 00:09:13.605 Deallocated Guard Field: 0xFFFF 00:09:13.605 Flush: Supported 00:09:13.605 Reservation: Not Supported 00:09:13.605 Metadata Transferred as: Separate Metadata Buffer 00:09:13.605 Namespace Sharing Capabilities: Private 00:09:13.605 Size (in LBAs): 1548666 (5GiB) 00:09:13.605 Capacity (in LBAs): 1548666 (5GiB) 00:09:13.605 Utilization (in LBAs): 1548666 (5GiB) 00:09:13.605 Thin Provisioning: Not Supported 00:09:13.605 Per-NS Atomic Units: No 00:09:13.605 Maximum Single Source Range Length: 
128 00:09:13.605 Maximum Copy Length: 128 00:09:13.605 Maximum Source Range Count: 128 00:09:13.605 NGUID/EUI64 Never Reused: No 00:09:13.605 Namespace Write Protected: No 00:09:13.605 Number of LBA Formats: 8 00:09:13.605 Current LBA Format: LBA Format #07 00:09:13.605 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:13.605 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:13.605 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:13.605 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:13.605 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:13.605 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:13.605 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:13.605 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:13.605 00:09:13.605 ===================================================== 00:09:13.605 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:13.605 ===================================================== 00:09:13.605 Controller Capabilities/Features 00:09:13.605 ================================ 00:09:13.605 Vendor ID: 1b36 00:09:13.605 Subsystem Vendor ID: 1af4 00:09:13.605 Serial Number: 12341 00:09:13.605 Model Number: QEMU NVMe Ctrl 00:09:13.605 Firmware Version: 8.0.0 00:09:13.605 Recommended Arb Burst: 6 00:09:13.605 IEEE OUI Identifier: 00 54 52 00:09:13.605 Multi-path I/O 00:09:13.605 May have multiple subsystem ports: No 00:09:13.605 May have multiple controllers: No 00:09:13.605 Associated with SR-IOV VF: No 00:09:13.605 Max Data Transfer Size: 524288 00:09:13.605 Max Number of Namespaces: 256 00:09:13.605 Max Number of I/O Queues: 64 00:09:13.605 NVMe Specification Version (VS): 1.4 00:09:13.605 NVMe Specification Version (Identify): 1.4 00:09:13.605 Maximum Queue Entries: 2048 00:09:13.605 Contiguous Queues Required: Yes 00:09:13.605 Arbitration Mechanisms Supported 00:09:13.605 Weighted Round Robin: Not Supported 00:09:13.605 Vendor Specific: Not Supported 00:09:13.605 Reset Timeout: 7500 ms 00:09:13.605 Doorbell Stride: 4 bytes 00:09:13.605 NVM Subsystem Reset: Not Supported 00:09:13.605 Command Sets Supported 00:09:13.605 NVM Command Set: Supported 00:09:13.605 Boot Partition: Not Supported 00:09:13.605 Memory Page Size Minimum: 4096 bytes 00:09:13.605 Memory Page Size Maximum: 65536 bytes 00:09:13.605 Persistent Memory Region: Not Supported 00:09:13.605 Optional Asynchronous Events Supported 00:09:13.605 Namespace Attribute Notices: Supported 00:09:13.605 Firmware Activation Notices: Not Supported 00:09:13.605 ANA Change Notices: Not Supported 00:09:13.605 PLE Aggregate Log Change Notices: Not Supported 00:09:13.605 LBA Status Info Alert Notices: Not Supported 00:09:13.605 EGE Aggregate Log Change Notices: Not Supported 00:09:13.605 Normal NVM Subsystem Shutdown event: Not Supported 00:09:13.605 Zone Descriptor Change Notices: Not Supported 00:09:13.605 Discovery Log Change Notices: Not Supported 00:09:13.605 Controller Attributes 00:09:13.605 128-bit Host Identifier: Not Supported 00:09:13.605 Non-Operational Permissive Mode: Not Supported 00:09:13.605 NVM Sets: Not Supported 00:09:13.605 Read Recovery Levels: Not Supported 00:09:13.605 Endurance Groups: Not Supported 00:09:13.605 Predictable Latency Mode: Not Supported 00:09:13.605 Traffic Based Keep ALive: Not Supported 00:09:13.605 Namespace Granularity: Not Supported 00:09:13.605 SQ Associations: Not Supported 00:09:13.605 UUID List: Not Supported 00:09:13.605 Multi-Domain Subsystem: Not Supported 00:09:13.605 Fixed Capacity Management: Not Supported 00:09:13.605 Variable Capacity 
Management: Not Supported 00:09:13.605 Delete Endurance Group: Not Supported 00:09:13.605 Delete NVM Set: Not Supported 00:09:13.605 Extended LBA Formats Supported: Supported 00:09:13.605 Flexible Data Placement Supported: Not Supported 00:09:13.605 00:09:13.605 Controller Memory Buffer Support 00:09:13.605 ================================ 00:09:13.605 Supported: No 00:09:13.605 00:09:13.605 Persistent Memory Region Support 00:09:13.605 ================================ 00:09:13.605 Supported: No 00:09:13.606 00:09:13.606 Admin Command Set Attributes 00:09:13.606 ============================ 00:09:13.606 Security Send/Receive: Not Supported 00:09:13.606 Format NVM: Supported 00:09:13.606 Firmware Activate/Download: Not Supported 00:09:13.606 Namespace Management: Supported 00:09:13.606 Device Self-Test: Not Supported 00:09:13.606 Directives: Supported 00:09:13.606 NVMe-MI: Not Supported 00:09:13.606 Virtualization Management: Not Supported 00:09:13.606 Doorbell Buffer Config: Supported 00:09:13.606 Get LBA Status Capability: Not Supported 00:09:13.606 Command & Feature Lockdown Capability: Not Supported 00:09:13.606 Abort Command Limit: 4 00:09:13.606 Async Event Request Limit: 4 00:09:13.606 Number of Firmware Slots: N/A 00:09:13.606 Firmware Slot 1 Read-Only: N/A 00:09:13.606 Firmware Activation Without Reset: N/A 00:09:13.606 Multiple Update Detection Support: N/A 00:09:13.606 Firmware Update Granularity: No Information Provided 00:09:13.606 Per-Namespace SMART Log: Yes 00:09:13.606 Asymmetric Namespace Access Log Page: Not Supported 00:09:13.606 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:13.606 Command Effects Log Page: Supported 00:09:13.606 Get Log Page Extended Data: Supported 00:09:13.606 Telemetry Log Pages: Not Supported 00:09:13.606 Persistent Event Log Pages: Not Supported 00:09:13.606 Supported Log Pages Log Page: May Support 00:09:13.606 Commands Supported & Effects Log Page: Not Supported 00:09:13.606 Feature Identifiers & Effects Log Page:May Support 00:09:13.606 NVMe-MI Commands & Effects Log Page: May Support 00:09:13.606 Data Area 4 for Telemetry Log: Not Supported 00:09:13.606 Error Log Page Entries Supported: 1 00:09:13.606 Keep Alive: Not Supported 00:09:13.606 00:09:13.606 NVM Command Set Attributes 00:09:13.606 ========================== 00:09:13.606 Submission Queue Entry Size 00:09:13.606 Max: 64 00:09:13.606 Min: 64 00:09:13.606 Completion Queue Entry Size 00:09:13.606 Max: 16 00:09:13.606 Min: 16 00:09:13.606 Number of Namespaces: 256 00:09:13.606 Compare Command: Supported 00:09:13.606 Write Uncorrectable Command: Not Supported 00:09:13.606 Dataset Management Command: Supported 00:09:13.606 Write Zeroes Command: Supported 00:09:13.606 Set Features Save Field: Supported 00:09:13.606 Reservations: Not Supported 00:09:13.606 Timestamp: Supported 00:09:13.606 Copy: Supported 00:09:13.606 Volatile Write Cache: Present 00:09:13.606 Atomic Write Unit (Normal): 1 00:09:13.606 Atomic Write Unit (PFail): 1 00:09:13.606 Atomic Compare & Write Unit: 1 00:09:13.606 Fused Compare & Write: Not Supported 00:09:13.606 Scatter-Gather List 00:09:13.606 SGL Command Set: Supported 00:09:13.606 SGL Keyed: Not Supported 00:09:13.606 SGL Bit Bucket Descriptor: Not Supported 00:09:13.606 SGL Metadata Pointer: Not Supported 00:09:13.606 Oversized SGL: Not Supported 00:09:13.606 SGL Metadata Address: Not Supported 00:09:13.606 SGL Offset: Not Supported 00:09:13.606 Transport SGL Data Block: Not Supported 00:09:13.606 Replay Protected Memory Block: Not Supported 00:09:13.606 
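The per-controller dumps above and below are produced by the identify loop that the xtrace later in this log records (nvme/nvme.sh lines 15-16). A minimal reconstruction of that loop, with the contents of the bdfs array assumed from the four controllers this log actually probes:

bdfs=(0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0)   # assumed from the traddr values seen in this log
for bdf in "${bdfs[@]}"; do
    # same binary and transport ID string the xtrace shows
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:PCIe traddr:$bdf" -i 0
done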
00:09:13.606 Firmware Slot Information 00:09:13.606 ========================= 00:09:13.606 Active slot: 1 00:09:13.606 Slot 1 Firmware Revision: 1.0 00:09:13.606 00:09:13.606 00:09:13.606 Commands Supported and Effects 00:09:13.606 ============================== 00:09:13.606 Admin Commands 00:09:13.606 -------------- 00:09:13.606 Delete I/O Submission Queue (00h): Supported 00:09:13.606 Create I/O Submission Queue (01h): Supported 00:09:13.606 Get Log Page (02h): Supported 00:09:13.606 Delete I/O Completion Queue (04h): Supported 00:09:13.606 Create I/O Completion Queue (05h): Supported 00:09:13.606 Identify (06h): Supported 00:09:13.606 Abort (08h): Supported 00:09:13.606 Set Features (09h): Supported 00:09:13.606 Get Features (0Ah): Supported 00:09:13.606 Asynchronous Event Request (0Ch): Supported 00:09:13.606 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:13.606 Directive Send (19h): Supported 00:09:13.606 Directive Receive (1Ah): Supported 00:09:13.606 Virtualization Management (1Ch): Supported 00:09:13.606 Doorbell Buffer Config (7Ch): Supported 00:09:13.606 Format NVM (80h): Supported LBA-Change 00:09:13.606 I/O Commands 00:09:13.606 ------------ 00:09:13.606 Flush (00h): Supported LBA-Change 00:09:13.606 Write (01h): Supported LBA-Change 00:09:13.606 Read (02h): Supported 00:09:13.606 Compare (05h): Supported 00:09:13.606 Write Zeroes (08h): Supported LBA-Change 00:09:13.606 Dataset Management (09h): Supported LBA-Change 00:09:13.606 Unknown (0Ch): Supported 00:09:13.606 Unknown (12h): Supported 00:09:13.606 Copy (19h): Supported LBA-Change 00:09:13.606 Unknown (1Dh): Supported LBA-Change 00:09:13.606 00:09:13.606 Error Log 00:09:13.606 ========= 00:09:13.606 00:09:13.606 Arbitration 00:09:13.606 =========== 00:09:13.606 Arbitration Burst: no limit 00:09:13.606 00:09:13.606 Power Management 00:09:13.606 ================ 00:09:13.606 Number of Power States: 1 00:09:13.606 Current Power State: Power State #0 00:09:13.606 Power State #0: 00:09:13.606 Max Power: 25.00 W 00:09:13.606 Non-Operational State: Operational 00:09:13.606 Entry Latency: 16 microseconds 00:09:13.606 Exit Latency: 4 microseconds 00:09:13.606 Relative Read Throughput: 0 00:09:13.606 Relative Read Latency: 0 00:09:13.606 Relative Write Throughput: 0 00:09:13.606 Relative Write Latency: 0 00:09:13.606 Idle Power: Not Reported 00:09:13.606 Active Power: Not Reported 00:09:13.606 Non-Operational Permissive Mode: Not Supported 00:09:13.606 00:09:13.606 Health Information 00:09:13.606 ================== 00:09:13.606 Critical Warnings: 00:09:13.606 Available Spare Space: OK 00:09:13.606 Temperature: OK 00:09:13.606 Device Reliability: OK 00:09:13.606 Read Only: No 00:09:13.606 Volatile Memory Backup: OK 00:09:13.606 Current Temperature: 323 Kelvin (50 Celsius) 00:09:13.606 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:13.606 Available Spare: 0% 00:09:13.606 Available Spare Threshold: 0% 00:09:13.606 Life Percentage Used: 0% 00:09:13.606 Data Units Read: 1259 00:09:13.606 [2024-12-16 13:13:28.147128] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 63387 terminated unexpected 00:09:13.606 Data Units Written: 584 00:09:13.606 Host Read Commands: 61848 00:09:13.606 Host Write Commands: 30457 00:09:13.606 Controller Busy Time: 0 minutes 00:09:13.606 Power Cycles: 0 00:09:13.606 Power On Hours: 0 hours 00:09:13.606 Unsafe Shutdowns: 0 00:09:13.606 Unrecoverable Media Errors: 0 00:09:13.606 Lifetime Error Log Entries: 0 00:09:13.606 Warning Temperature Time: 0 minutes 
00:09:13.606 Critical Temperature Time: 0 minutes 00:09:13.606 00:09:13.606 Number of Queues 00:09:13.606 ================ 00:09:13.606 Number of I/O Submission Queues: 64 00:09:13.606 Number of I/O Completion Queues: 64 00:09:13.606 00:09:13.606 ZNS Specific Controller Data 00:09:13.606 ============================ 00:09:13.606 Zone Append Size Limit: 0 00:09:13.606 00:09:13.606 00:09:13.606 Active Namespaces 00:09:13.606 ================= 00:09:13.606 Namespace ID:1 00:09:13.606 Error Recovery Timeout: Unlimited 00:09:13.606 Command Set Identifier: NVM (00h) 00:09:13.606 Deallocate: Supported 00:09:13.606 Deallocated/Unwritten Error: Supported 00:09:13.606 Deallocated Read Value: All 0x00 00:09:13.606 Deallocate in Write Zeroes: Not Supported 00:09:13.606 Deallocated Guard Field: 0xFFFF 00:09:13.606 Flush: Supported 00:09:13.606 Reservation: Not Supported 00:09:13.606 Namespace Sharing Capabilities: Private 00:09:13.606 Size (in LBAs): 1310720 (5GiB) 00:09:13.606 Capacity (in LBAs): 1310720 (5GiB) 00:09:13.606 Utilization (in LBAs): 1310720 (5GiB) 00:09:13.606 Thin Provisioning: Not Supported 00:09:13.606 Per-NS Atomic Units: No 00:09:13.606 Maximum Single Source Range Length: 128 00:09:13.606 Maximum Copy Length: 128 00:09:13.606 Maximum Source Range Count: 128 00:09:13.606 NGUID/EUI64 Never Reused: No 00:09:13.606 Namespace Write Protected: No 00:09:13.606 Number of LBA Formats: 8 00:09:13.606 Current LBA Format: LBA Format #04 00:09:13.606 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:13.606 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:13.606 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:13.606 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:13.606 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:13.606 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:13.606 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:13.606 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:13.606 00:09:13.606 ===================================================== 00:09:13.606 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:13.606 ===================================================== 00:09:13.606 Controller Capabilities/Features 00:09:13.606 ================================ 00:09:13.606 Vendor ID: 1b36 00:09:13.606 Subsystem Vendor ID: 1af4 00:09:13.606 Serial Number: 12342 00:09:13.606 Model Number: QEMU NVMe Ctrl 00:09:13.606 Firmware Version: 8.0.0 00:09:13.607 Recommended Arb Burst: 6 00:09:13.607 IEEE OUI Identifier: 00 54 52 00:09:13.607 Multi-path I/O 00:09:13.607 May have multiple subsystem ports: No 00:09:13.607 May have multiple controllers: No 00:09:13.607 Associated with SR-IOV VF: No 00:09:13.607 Max Data Transfer Size: 524288 00:09:13.607 Max Number of Namespaces: 256 00:09:13.607 Max Number of I/O Queues: 64 00:09:13.607 NVMe Specification Version (VS): 1.4 00:09:13.607 NVMe Specification Version (Identify): 1.4 00:09:13.607 Maximum Queue Entries: 2048 00:09:13.607 Contiguous Queues Required: Yes 00:09:13.607 Arbitration Mechanisms Supported 00:09:13.607 Weighted Round Robin: Not Supported 00:09:13.607 Vendor Specific: Not Supported 00:09:13.607 Reset Timeout: 7500 ms 00:09:13.607 Doorbell Stride: 4 bytes 00:09:13.607 NVM Subsystem Reset: Not Supported 00:09:13.607 Command Sets Supported 00:09:13.607 NVM Command Set: Supported 00:09:13.607 Boot Partition: Not Supported 00:09:13.607 Memory Page Size Minimum: 4096 bytes 00:09:13.607 Memory Page Size Maximum: 65536 bytes 00:09:13.607 Persistent Memory Region: Not Supported 00:09:13.607 
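The Size/Capacity/Utilization figures in these dumps are reported both in LBAs and in GiB; the GiB value follows from the LBA count times the data size of the current LBA format (4096 bytes for format #04). A quick shell sanity check, using the LBA counts reported here:

echo $(( 1310720 * 4096 / 1024 / 1024 / 1024 ))GiB   # 12341 namespace  -> 5GiB
echo $(( 1048576 * 4096 / 1024 / 1024 / 1024 ))GiB   # 12342 namespaces -> 4GiB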
Optional Asynchronous Events Supported 00:09:13.607 Namespace Attribute Notices: Supported 00:09:13.607 Firmware Activation Notices: Not Supported 00:09:13.607 ANA Change Notices: Not Supported 00:09:13.607 PLE Aggregate Log Change Notices: Not Supported 00:09:13.607 LBA Status Info Alert Notices: Not Supported 00:09:13.607 EGE Aggregate Log Change Notices: Not Supported 00:09:13.607 Normal NVM Subsystem Shutdown event: Not Supported 00:09:13.607 Zone Descriptor Change Notices: Not Supported 00:09:13.607 Discovery Log Change Notices: Not Supported 00:09:13.607 Controller Attributes 00:09:13.607 128-bit Host Identifier: Not Supported 00:09:13.607 Non-Operational Permissive Mode: Not Supported 00:09:13.607 NVM Sets: Not Supported 00:09:13.607 Read Recovery Levels: Not Supported 00:09:13.607 Endurance Groups: Not Supported 00:09:13.607 Predictable Latency Mode: Not Supported 00:09:13.607 Traffic Based Keep ALive: Not Supported 00:09:13.607 Namespace Granularity: Not Supported 00:09:13.607 SQ Associations: Not Supported 00:09:13.607 UUID List: Not Supported 00:09:13.607 Multi-Domain Subsystem: Not Supported 00:09:13.607 Fixed Capacity Management: Not Supported 00:09:13.607 Variable Capacity Management: Not Supported 00:09:13.607 Delete Endurance Group: Not Supported 00:09:13.607 Delete NVM Set: Not Supported 00:09:13.607 Extended LBA Formats Supported: Supported 00:09:13.607 Flexible Data Placement Supported: Not Supported 00:09:13.607 00:09:13.607 Controller Memory Buffer Support 00:09:13.607 ================================ 00:09:13.607 Supported: No 00:09:13.607 00:09:13.607 Persistent Memory Region Support 00:09:13.607 ================================ 00:09:13.607 Supported: No 00:09:13.607 00:09:13.607 Admin Command Set Attributes 00:09:13.607 ============================ 00:09:13.607 Security Send/Receive: Not Supported 00:09:13.607 Format NVM: Supported 00:09:13.607 Firmware Activate/Download: Not Supported 00:09:13.607 Namespace Management: Supported 00:09:13.607 Device Self-Test: Not Supported 00:09:13.607 Directives: Supported 00:09:13.607 NVMe-MI: Not Supported 00:09:13.607 Virtualization Management: Not Supported 00:09:13.607 Doorbell Buffer Config: Supported 00:09:13.607 Get LBA Status Capability: Not Supported 00:09:13.607 Command & Feature Lockdown Capability: Not Supported 00:09:13.607 Abort Command Limit: 4 00:09:13.607 Async Event Request Limit: 4 00:09:13.607 Number of Firmware Slots: N/A 00:09:13.607 Firmware Slot 1 Read-Only: N/A 00:09:13.607 Firmware Activation Without Reset: N/A 00:09:13.607 Multiple Update Detection Support: N/A 00:09:13.607 Firmware Update Granularity: No Information Provided 00:09:13.607 Per-Namespace SMART Log: Yes 00:09:13.607 Asymmetric Namespace Access Log Page: Not Supported 00:09:13.607 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:13.607 Command Effects Log Page: Supported 00:09:13.607 Get Log Page Extended Data: Supported 00:09:13.607 Telemetry Log Pages: Not Supported 00:09:13.607 Persistent Event Log Pages: Not Supported 00:09:13.607 Supported Log Pages Log Page: May Support 00:09:13.607 Commands Supported & Effects Log Page: Not Supported 00:09:13.607 Feature Identifiers & Effects Log Page:May Support 00:09:13.607 NVMe-MI Commands & Effects Log Page: May Support 00:09:13.607 Data Area 4 for Telemetry Log: Not Supported 00:09:13.607 Error Log Page Entries Supported: 1 00:09:13.607 Keep Alive: Not Supported 00:09:13.607 00:09:13.607 NVM Command Set Attributes 00:09:13.607 ========================== 00:09:13.607 Submission Queue Entry Size 
00:09:13.607 Max: 64 00:09:13.607 Min: 64 00:09:13.607 Completion Queue Entry Size 00:09:13.607 Max: 16 00:09:13.607 Min: 16 00:09:13.607 Number of Namespaces: 256 00:09:13.607 Compare Command: Supported 00:09:13.607 Write Uncorrectable Command: Not Supported 00:09:13.607 Dataset Management Command: Supported 00:09:13.607 Write Zeroes Command: Supported 00:09:13.607 Set Features Save Field: Supported 00:09:13.607 Reservations: Not Supported 00:09:13.607 Timestamp: Supported 00:09:13.607 Copy: Supported 00:09:13.607 Volatile Write Cache: Present 00:09:13.607 Atomic Write Unit (Normal): 1 00:09:13.607 Atomic Write Unit (PFail): 1 00:09:13.607 Atomic Compare & Write Unit: 1 00:09:13.607 Fused Compare & Write: Not Supported 00:09:13.607 Scatter-Gather List 00:09:13.607 SGL Command Set: Supported 00:09:13.607 SGL Keyed: Not Supported 00:09:13.607 SGL Bit Bucket Descriptor: Not Supported 00:09:13.607 SGL Metadata Pointer: Not Supported 00:09:13.607 Oversized SGL: Not Supported 00:09:13.607 SGL Metadata Address: Not Supported 00:09:13.607 SGL Offset: Not Supported 00:09:13.607 Transport SGL Data Block: Not Supported 00:09:13.607 Replay Protected Memory Block: Not Supported 00:09:13.607 00:09:13.607 Firmware Slot Information 00:09:13.607 ========================= 00:09:13.607 Active slot: 1 00:09:13.607 Slot 1 Firmware Revision: 1.0 00:09:13.607 00:09:13.607 00:09:13.607 Commands Supported and Effects 00:09:13.607 ============================== 00:09:13.607 Admin Commands 00:09:13.607 -------------- 00:09:13.607 Delete I/O Submission Queue (00h): Supported 00:09:13.607 Create I/O Submission Queue (01h): Supported 00:09:13.607 Get Log Page (02h): Supported 00:09:13.607 Delete I/O Completion Queue (04h): Supported 00:09:13.607 Create I/O Completion Queue (05h): Supported 00:09:13.607 Identify (06h): Supported 00:09:13.607 Abort (08h): Supported 00:09:13.607 Set Features (09h): Supported 00:09:13.607 Get Features (0Ah): Supported 00:09:13.607 Asynchronous Event Request (0Ch): Supported 00:09:13.607 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:13.607 Directive Send (19h): Supported 00:09:13.607 Directive Receive (1Ah): Supported 00:09:13.607 Virtualization Management (1Ch): Supported 00:09:13.607 Doorbell Buffer Config (7Ch): Supported 00:09:13.607 Format NVM (80h): Supported LBA-Change 00:09:13.607 I/O Commands 00:09:13.607 ------------ 00:09:13.607 Flush (00h): Supported LBA-Change 00:09:13.607 Write (01h): Supported LBA-Change 00:09:13.607 Read (02h): Supported 00:09:13.607 Compare (05h): Supported 00:09:13.607 Write Zeroes (08h): Supported LBA-Change 00:09:13.607 Dataset Management (09h): Supported LBA-Change 00:09:13.607 Unknown (0Ch): Supported 00:09:13.607 Unknown (12h): Supported 00:09:13.607 Copy (19h): Supported LBA-Change 00:09:13.607 Unknown (1Dh): Supported LBA-Change 00:09:13.607 00:09:13.607 Error Log 00:09:13.608 ========= 00:09:13.608 00:09:13.608 Arbitration 00:09:13.608 =========== 00:09:13.608 Arbitration Burst: no limit 00:09:13.608 00:09:13.608 Power Management 00:09:13.608 ================ 00:09:13.608 Number of Power States: 1 00:09:13.608 Current Power State: Power State #0 00:09:13.608 Power State #0: 00:09:13.608 Max Power: 25.00 W 00:09:13.608 Non-Operational State: Operational 00:09:13.608 Entry Latency: 16 microseconds 00:09:13.608 Exit Latency: 4 microseconds 00:09:13.608 Relative Read Throughput: 0 00:09:13.608 Relative Read Latency: 0 00:09:13.608 Relative Write Throughput: 0 00:09:13.608 Relative Write Latency: 0 00:09:13.608 Idle Power: Not 
Reported 00:09:13.608 Active Power: Not Reported 00:09:13.608 Non-Operational Permissive Mode: Not Supported 00:09:13.608 00:09:13.608 Health Information 00:09:13.608 ================== 00:09:13.608 Critical Warnings: 00:09:13.608 Available Spare Space: OK 00:09:13.608 Temperature: OK 00:09:13.608 Device Reliability: OK 00:09:13.608 Read Only: No 00:09:13.608 Volatile Memory Backup: OK 00:09:13.608 Current Temperature: 323 Kelvin (50 Celsius) 00:09:13.608 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:13.608 Available Spare: 0% 00:09:13.608 Available Spare Threshold: 0% 00:09:13.608 Life Percentage Used: 0% 00:09:13.608 Data Units Read: 3880 00:09:13.608 Data Units Written: 1795 00:09:13.608 Host Read Commands: 187253 00:09:13.608 Host Write Commands: 92003 00:09:13.608 Controller Busy Time: 0 minutes 00:09:13.608 Power Cycles: 0 00:09:13.608 Power On Hours: 0 hours 00:09:13.608 Unsafe Shutdowns: 0 00:09:13.608 Unrecoverable Media Errors: 0 00:09:13.608 Lifetime Error Log Entries: 0 00:09:13.608 Warning Temperature Time: 0 minutes 00:09:13.608 Critical Temperature Time: 0 minutes 00:09:13.608 00:09:13.608 Number of Queues 00:09:13.608 ================ 00:09:13.608 Number of I/O Submission Queues: 64 00:09:13.608 Number of I/O Completion Queues: 64 00:09:13.608 00:09:13.608 ZNS Specific Controller Data 00:09:13.608 ============================ 00:09:13.608 Zone Append Size Limit: 0 00:09:13.608 00:09:13.608 00:09:13.608 Active Namespaces 00:09:13.608 ================= 00:09:13.608 Namespace ID:1 00:09:13.608 Error Recovery Timeout: Unlimited 00:09:13.608 Command Set Identifier: NVM (00h) 00:09:13.608 Deallocate: Supported 00:09:13.608 Deallocated/Unwritten Error: Supported 00:09:13.608 Deallocated Read Value: All 0x00 00:09:13.608 Deallocate in Write Zeroes: Not Supported 00:09:13.608 Deallocated Guard Field: 0xFFFF 00:09:13.608 Flush: Supported 00:09:13.608 Reservation: Not Supported 00:09:13.608 Namespace Sharing Capabilities: Private 00:09:13.608 Size (in LBAs): 1048576 (4GiB) 00:09:13.608 Capacity (in LBAs): 1048576 (4GiB) 00:09:13.608 Utilization (in LBAs): 1048576 (4GiB) 00:09:13.608 Thin Provisioning: Not Supported 00:09:13.608 Per-NS Atomic Units: No 00:09:13.608 Maximum Single Source Range Length: 128 00:09:13.608 Maximum Copy Length: 128 00:09:13.608 Maximum Source Range Count: 128 00:09:13.608 NGUID/EUI64 Never Reused: No 00:09:13.608 Namespace Write Protected: No 00:09:13.608 Number of LBA Formats: 8 00:09:13.608 Current LBA Format: LBA Format #04 00:09:13.608 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:13.608 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:13.608 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:13.608 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:13.608 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:13.608 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:13.608 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:13.608 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:13.608 00:09:13.608 Namespace ID:2 00:09:13.608 Error Recovery Timeout: Unlimited 00:09:13.608 Command Set Identifier: NVM (00h) 00:09:13.608 Deallocate: Supported 00:09:13.608 Deallocated/Unwritten Error: Supported 00:09:13.608 Deallocated Read Value: All 0x00 00:09:13.608 Deallocate in Write Zeroes: Not Supported 00:09:13.608 Deallocated Guard Field: 0xFFFF 00:09:13.608 Flush: Supported 00:09:13.608 Reservation: Not Supported 00:09:13.608 Namespace Sharing Capabilities: Private 00:09:13.608 Size (in LBAs): 1048576 (4GiB) 00:09:13.608 
Capacity (in LBAs): 1048576 (4GiB) 00:09:13.608 Utilization (in LBAs): 1048576 (4GiB) 00:09:13.608 Thin Provisioning: Not Supported 00:09:13.608 Per-NS Atomic Units: No 00:09:13.608 Maximum Single Source Range Length: 128 00:09:13.608 Maximum Copy Length: 128 00:09:13.608 Maximum Source Range Count: 128 00:09:13.608 NGUID/EUI64 Never Reused: No 00:09:13.608 Namespace Write Protected: No 00:09:13.608 Number of LBA Formats: 8 00:09:13.608 Current LBA Format: LBA Format #04 00:09:13.608 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:13.608 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:13.608 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:13.608 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:13.608 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:13.608 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:13.608 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:13.608 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:13.608 00:09:13.608 Namespace ID:3 00:09:13.608 Error Recovery Timeout: Unlimited 00:09:13.608 Command Set Identifier: NVM (00h) 00:09:13.608 Deallocate: Supported 00:09:13.608 Deallocated/Unwritten Error: Supported 00:09:13.608 Deallocated Read Value: All 0x00 00:09:13.608 Deallocate in Write Zeroes: Not Supported 00:09:13.608 Deallocated Guard Field: 0xFFFF 00:09:13.608 Flush: Supported 00:09:13.608 Reservation: Not Supported 00:09:13.608 Namespace Sharing Capabilities: Private 00:09:13.608 Size (in LBAs): 1048576 (4GiB) 00:09:13.608 Capacity (in LBAs): 1048576 (4GiB) 00:09:13.608 Utilization (in LBAs): 1048576 (4GiB) 00:09:13.608 Thin Provisioning: Not Supported 00:09:13.608 Per-NS Atomic Units: No 00:09:13.608 Maximum Single Source Range Length: 128 00:09:13.608 Maximum Copy Length: 128 00:09:13.608 Maximum Source Range Count: 128 00:09:13.608 NGUID/EUI64 Never Reused: No 00:09:13.608 Namespace Write Protected: No 00:09:13.608 Number of LBA Formats: 8 00:09:13.608 Current LBA Format: LBA Format #04 00:09:13.608 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:13.608 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:13.608 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:13.608 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:13.608 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:13.608 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:13.608 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:13.608 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:13.608 00:09:13.870 13:13:28 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:13.871 13:13:28 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:09:13.871 ===================================================== 00:09:13.871 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:13.871 ===================================================== 00:09:13.871 Controller Capabilities/Features 00:09:13.871 ================================ 00:09:13.871 Vendor ID: 1b36 00:09:13.871 Subsystem Vendor ID: 1af4 00:09:13.871 Serial Number: 12340 00:09:13.871 Model Number: QEMU NVMe Ctrl 00:09:13.871 Firmware Version: 8.0.0 00:09:13.871 Recommended Arb Burst: 6 00:09:13.871 IEEE OUI Identifier: 00 54 52 00:09:13.871 Multi-path I/O 00:09:13.871 May have multiple subsystem ports: No 00:09:13.871 May have multiple controllers: No 00:09:13.871 Associated with SR-IOV VF: No 00:09:13.871 Max Data Transfer Size: 524288 00:09:13.871 Max Number of Namespaces: 256 00:09:13.871 Max Number of I/O 
Queues: 64 00:09:13.871 NVMe Specification Version (VS): 1.4 00:09:13.871 NVMe Specification Version (Identify): 1.4 00:09:13.871 Maximum Queue Entries: 2048 00:09:13.871 Contiguous Queues Required: Yes 00:09:13.871 Arbitration Mechanisms Supported 00:09:13.871 Weighted Round Robin: Not Supported 00:09:13.871 Vendor Specific: Not Supported 00:09:13.871 Reset Timeout: 7500 ms 00:09:13.871 Doorbell Stride: 4 bytes 00:09:13.871 NVM Subsystem Reset: Not Supported 00:09:13.871 Command Sets Supported 00:09:13.871 NVM Command Set: Supported 00:09:13.871 Boot Partition: Not Supported 00:09:13.871 Memory Page Size Minimum: 4096 bytes 00:09:13.871 Memory Page Size Maximum: 65536 bytes 00:09:13.871 Persistent Memory Region: Not Supported 00:09:13.871 Optional Asynchronous Events Supported 00:09:13.871 Namespace Attribute Notices: Supported 00:09:13.871 Firmware Activation Notices: Not Supported 00:09:13.871 ANA Change Notices: Not Supported 00:09:13.871 PLE Aggregate Log Change Notices: Not Supported 00:09:13.871 LBA Status Info Alert Notices: Not Supported 00:09:13.871 EGE Aggregate Log Change Notices: Not Supported 00:09:13.871 Normal NVM Subsystem Shutdown event: Not Supported 00:09:13.871 Zone Descriptor Change Notices: Not Supported 00:09:13.871 Discovery Log Change Notices: Not Supported 00:09:13.871 Controller Attributes 00:09:13.871 128-bit Host Identifier: Not Supported 00:09:13.871 Non-Operational Permissive Mode: Not Supported 00:09:13.871 NVM Sets: Not Supported 00:09:13.871 Read Recovery Levels: Not Supported 00:09:13.871 Endurance Groups: Not Supported 00:09:13.871 Predictable Latency Mode: Not Supported 00:09:13.871 Traffic Based Keep ALive: Not Supported 00:09:13.871 Namespace Granularity: Not Supported 00:09:13.871 SQ Associations: Not Supported 00:09:13.871 UUID List: Not Supported 00:09:13.871 Multi-Domain Subsystem: Not Supported 00:09:13.871 Fixed Capacity Management: Not Supported 00:09:13.871 Variable Capacity Management: Not Supported 00:09:13.871 Delete Endurance Group: Not Supported 00:09:13.871 Delete NVM Set: Not Supported 00:09:13.871 Extended LBA Formats Supported: Supported 00:09:13.871 Flexible Data Placement Supported: Not Supported 00:09:13.871 00:09:13.871 Controller Memory Buffer Support 00:09:13.871 ================================ 00:09:13.871 Supported: No 00:09:13.871 00:09:13.871 Persistent Memory Region Support 00:09:13.871 ================================ 00:09:13.871 Supported: No 00:09:13.871 00:09:13.871 Admin Command Set Attributes 00:09:13.871 ============================ 00:09:13.871 Security Send/Receive: Not Supported 00:09:13.871 Format NVM: Supported 00:09:13.871 Firmware Activate/Download: Not Supported 00:09:13.871 Namespace Management: Supported 00:09:13.871 Device Self-Test: Not Supported 00:09:13.871 Directives: Supported 00:09:13.871 NVMe-MI: Not Supported 00:09:13.871 Virtualization Management: Not Supported 00:09:13.871 Doorbell Buffer Config: Supported 00:09:13.871 Get LBA Status Capability: Not Supported 00:09:13.871 Command & Feature Lockdown Capability: Not Supported 00:09:13.871 Abort Command Limit: 4 00:09:13.871 Async Event Request Limit: 4 00:09:13.871 Number of Firmware Slots: N/A 00:09:13.871 Firmware Slot 1 Read-Only: N/A 00:09:13.871 Firmware Activation Without Reset: N/A 00:09:13.871 Multiple Update Detection Support: N/A 00:09:13.871 Firmware Update Granularity: No Information Provided 00:09:13.871 Per-Namespace SMART Log: Yes 00:09:13.871 Asymmetric Namespace Access Log Page: Not Supported 00:09:13.871 Subsystem NQN: 
nqn.2019-08.org.qemu:12340 00:09:13.871 Command Effects Log Page: Supported 00:09:13.871 Get Log Page Extended Data: Supported 00:09:13.871 Telemetry Log Pages: Not Supported 00:09:13.871 Persistent Event Log Pages: Not Supported 00:09:13.871 Supported Log Pages Log Page: May Support 00:09:13.871 Commands Supported & Effects Log Page: Not Supported 00:09:13.871 Feature Identifiers & Effects Log Page:May Support 00:09:13.871 NVMe-MI Commands & Effects Log Page: May Support 00:09:13.871 Data Area 4 for Telemetry Log: Not Supported 00:09:13.871 Error Log Page Entries Supported: 1 00:09:13.871 Keep Alive: Not Supported 00:09:13.871 00:09:13.871 NVM Command Set Attributes 00:09:13.871 ========================== 00:09:13.871 Submission Queue Entry Size 00:09:13.871 Max: 64 00:09:13.871 Min: 64 00:09:13.871 Completion Queue Entry Size 00:09:13.871 Max: 16 00:09:13.871 Min: 16 00:09:13.871 Number of Namespaces: 256 00:09:13.871 Compare Command: Supported 00:09:13.871 Write Uncorrectable Command: Not Supported 00:09:13.871 Dataset Management Command: Supported 00:09:13.871 Write Zeroes Command: Supported 00:09:13.871 Set Features Save Field: Supported 00:09:13.871 Reservations: Not Supported 00:09:13.871 Timestamp: Supported 00:09:13.871 Copy: Supported 00:09:13.871 Volatile Write Cache: Present 00:09:13.871 Atomic Write Unit (Normal): 1 00:09:13.871 Atomic Write Unit (PFail): 1 00:09:13.871 Atomic Compare & Write Unit: 1 00:09:13.871 Fused Compare & Write: Not Supported 00:09:13.871 Scatter-Gather List 00:09:13.871 SGL Command Set: Supported 00:09:13.871 SGL Keyed: Not Supported 00:09:13.871 SGL Bit Bucket Descriptor: Not Supported 00:09:13.871 SGL Metadata Pointer: Not Supported 00:09:13.871 Oversized SGL: Not Supported 00:09:13.871 SGL Metadata Address: Not Supported 00:09:13.871 SGL Offset: Not Supported 00:09:13.871 Transport SGL Data Block: Not Supported 00:09:13.871 Replay Protected Memory Block: Not Supported 00:09:13.871 00:09:13.871 Firmware Slot Information 00:09:13.871 ========================= 00:09:13.871 Active slot: 1 00:09:13.871 Slot 1 Firmware Revision: 1.0 00:09:13.871 00:09:13.871 00:09:13.871 Commands Supported and Effects 00:09:13.871 ============================== 00:09:13.871 Admin Commands 00:09:13.871 -------------- 00:09:13.871 Delete I/O Submission Queue (00h): Supported 00:09:13.871 Create I/O Submission Queue (01h): Supported 00:09:13.871 Get Log Page (02h): Supported 00:09:13.871 Delete I/O Completion Queue (04h): Supported 00:09:13.871 Create I/O Completion Queue (05h): Supported 00:09:13.871 Identify (06h): Supported 00:09:13.871 Abort (08h): Supported 00:09:13.871 Set Features (09h): Supported 00:09:13.871 Get Features (0Ah): Supported 00:09:13.871 Asynchronous Event Request (0Ch): Supported 00:09:13.871 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:13.871 Directive Send (19h): Supported 00:09:13.871 Directive Receive (1Ah): Supported 00:09:13.871 Virtualization Management (1Ch): Supported 00:09:13.871 Doorbell Buffer Config (7Ch): Supported 00:09:13.871 Format NVM (80h): Supported LBA-Change 00:09:13.871 I/O Commands 00:09:13.871 ------------ 00:09:13.871 Flush (00h): Supported LBA-Change 00:09:13.871 Write (01h): Supported LBA-Change 00:09:13.871 Read (02h): Supported 00:09:13.871 Compare (05h): Supported 00:09:13.871 Write Zeroes (08h): Supported LBA-Change 00:09:13.871 Dataset Management (09h): Supported LBA-Change 00:09:13.871 Unknown (0Ch): Supported 00:09:13.871 Unknown (12h): Supported 00:09:13.871 Copy (19h): Supported LBA-Change 
00:09:13.871 Unknown (1Dh): Supported LBA-Change 00:09:13.871 00:09:13.871 Error Log 00:09:13.871 ========= 00:09:13.871 00:09:13.871 Arbitration 00:09:13.871 =========== 00:09:13.871 Arbitration Burst: no limit 00:09:13.871 00:09:13.871 Power Management 00:09:13.871 ================ 00:09:13.871 Number of Power States: 1 00:09:13.871 Current Power State: Power State #0 00:09:13.871 Power State #0: 00:09:13.871 Max Power: 25.00 W 00:09:13.871 Non-Operational State: Operational 00:09:13.871 Entry Latency: 16 microseconds 00:09:13.871 Exit Latency: 4 microseconds 00:09:13.871 Relative Read Throughput: 0 00:09:13.871 Relative Read Latency: 0 00:09:13.871 Relative Write Throughput: 0 00:09:13.871 Relative Write Latency: 0 00:09:13.871 Idle Power: Not Reported 00:09:13.871 Active Power: Not Reported 00:09:13.871 Non-Operational Permissive Mode: Not Supported 00:09:13.871 00:09:13.871 Health Information 00:09:13.871 ================== 00:09:13.871 Critical Warnings: 00:09:13.872 Available Spare Space: OK 00:09:13.872 Temperature: OK 00:09:13.872 Device Reliability: OK 00:09:13.872 Read Only: No 00:09:13.872 Volatile Memory Backup: OK 00:09:13.872 Current Temperature: 323 Kelvin (50 Celsius) 00:09:13.872 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:13.872 Available Spare: 0% 00:09:13.872 Available Spare Threshold: 0% 00:09:13.872 Life Percentage Used: 0% 00:09:13.872 Data Units Read: 1773 00:09:13.872 Data Units Written: 816 00:09:13.872 Host Read Commands: 84302 00:09:13.872 Host Write Commands: 41825 00:09:13.872 Controller Busy Time: 0 minutes 00:09:13.872 Power Cycles: 0 00:09:13.872 Power On Hours: 0 hours 00:09:13.872 Unsafe Shutdowns: 0 00:09:13.872 Unrecoverable Media Errors: 0 00:09:13.872 Lifetime Error Log Entries: 0 00:09:13.872 Warning Temperature Time: 0 minutes 00:09:13.872 Critical Temperature Time: 0 minutes 00:09:13.872 00:09:13.872 Number of Queues 00:09:13.872 ================ 00:09:13.872 Number of I/O Submission Queues: 64 00:09:13.872 Number of I/O Completion Queues: 64 00:09:13.872 00:09:13.872 ZNS Specific Controller Data 00:09:13.872 ============================ 00:09:13.872 Zone Append Size Limit: 0 00:09:13.872 00:09:13.872 00:09:13.872 Active Namespaces 00:09:13.872 ================= 00:09:13.872 Namespace ID:1 00:09:13.872 Error Recovery Timeout: Unlimited 00:09:13.872 Command Set Identifier: NVM (00h) 00:09:13.872 Deallocate: Supported 00:09:13.872 Deallocated/Unwritten Error: Supported 00:09:13.872 Deallocated Read Value: All 0x00 00:09:13.872 Deallocate in Write Zeroes: Not Supported 00:09:13.872 Deallocated Guard Field: 0xFFFF 00:09:13.872 Flush: Supported 00:09:13.872 Reservation: Not Supported 00:09:13.872 Metadata Transferred as: Separate Metadata Buffer 00:09:13.872 Namespace Sharing Capabilities: Private 00:09:13.872 Size (in LBAs): 1548666 (5GiB) 00:09:13.872 Capacity (in LBAs): 1548666 (5GiB) 00:09:13.872 Utilization (in LBAs): 1548666 (5GiB) 00:09:13.872 Thin Provisioning: Not Supported 00:09:13.872 Per-NS Atomic Units: No 00:09:13.872 Maximum Single Source Range Length: 128 00:09:13.872 Maximum Copy Length: 128 00:09:13.872 Maximum Source Range Count: 128 00:09:13.872 NGUID/EUI64 Never Reused: No 00:09:13.872 Namespace Write Protected: No 00:09:13.872 Number of LBA Formats: 8 00:09:13.872 Current LBA Format: LBA Format #07 00:09:13.872 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:13.872 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:13.872 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:13.872 LBA Format #03: Data Size: 512 
Metadata Size: 64 00:09:13.872 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:13.872 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:13.872 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:13.872 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:13.872 00:09:13.872 13:13:28 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:13.872 13:13:28 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:09:14.134 ===================================================== 00:09:14.134 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:14.134 ===================================================== 00:09:14.134 Controller Capabilities/Features 00:09:14.134 ================================ 00:09:14.134 Vendor ID: 1b36 00:09:14.134 Subsystem Vendor ID: 1af4 00:09:14.134 Serial Number: 12341 00:09:14.134 Model Number: QEMU NVMe Ctrl 00:09:14.134 Firmware Version: 8.0.0 00:09:14.134 Recommended Arb Burst: 6 00:09:14.134 IEEE OUI Identifier: 00 54 52 00:09:14.134 Multi-path I/O 00:09:14.134 May have multiple subsystem ports: No 00:09:14.134 May have multiple controllers: No 00:09:14.134 Associated with SR-IOV VF: No 00:09:14.134 Max Data Transfer Size: 524288 00:09:14.134 Max Number of Namespaces: 256 00:09:14.134 Max Number of I/O Queues: 64 00:09:14.134 NVMe Specification Version (VS): 1.4 00:09:14.134 NVMe Specification Version (Identify): 1.4 00:09:14.134 Maximum Queue Entries: 2048 00:09:14.134 Contiguous Queues Required: Yes 00:09:14.134 Arbitration Mechanisms Supported 00:09:14.134 Weighted Round Robin: Not Supported 00:09:14.134 Vendor Specific: Not Supported 00:09:14.134 Reset Timeout: 7500 ms 00:09:14.134 Doorbell Stride: 4 bytes 00:09:14.134 NVM Subsystem Reset: Not Supported 00:09:14.134 Command Sets Supported 00:09:14.134 NVM Command Set: Supported 00:09:14.134 Boot Partition: Not Supported 00:09:14.134 Memory Page Size Minimum: 4096 bytes 00:09:14.134 Memory Page Size Maximum: 65536 bytes 00:09:14.134 Persistent Memory Region: Not Supported 00:09:14.134 Optional Asynchronous Events Supported 00:09:14.134 Namespace Attribute Notices: Supported 00:09:14.134 Firmware Activation Notices: Not Supported 00:09:14.134 ANA Change Notices: Not Supported 00:09:14.134 PLE Aggregate Log Change Notices: Not Supported 00:09:14.134 LBA Status Info Alert Notices: Not Supported 00:09:14.134 EGE Aggregate Log Change Notices: Not Supported 00:09:14.134 Normal NVM Subsystem Shutdown event: Not Supported 00:09:14.134 Zone Descriptor Change Notices: Not Supported 00:09:14.134 Discovery Log Change Notices: Not Supported 00:09:14.134 Controller Attributes 00:09:14.134 128-bit Host Identifier: Not Supported 00:09:14.134 Non-Operational Permissive Mode: Not Supported 00:09:14.134 NVM Sets: Not Supported 00:09:14.134 Read Recovery Levels: Not Supported 00:09:14.134 Endurance Groups: Not Supported 00:09:14.134 Predictable Latency Mode: Not Supported 00:09:14.134 Traffic Based Keep ALive: Not Supported 00:09:14.134 Namespace Granularity: Not Supported 00:09:14.134 SQ Associations: Not Supported 00:09:14.134 UUID List: Not Supported 00:09:14.134 Multi-Domain Subsystem: Not Supported 00:09:14.134 Fixed Capacity Management: Not Supported 00:09:14.134 Variable Capacity Management: Not Supported 00:09:14.134 Delete Endurance Group: Not Supported 00:09:14.134 Delete NVM Set: Not Supported 00:09:14.134 Extended LBA Formats Supported: Supported 00:09:14.134 Flexible Data Placement Supported: Not Supported 00:09:14.134 00:09:14.134 
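Every controller in this log reports Current Temperature: 323 Kelvin (50 Celsius) and Temperature Threshold: 343 Kelvin (70 Celsius); the parenthesized Celsius values are consistent with a plain integer K - 273 conversion of the Kelvin reading:

echo $(( 323 - 273 ))   # -> 50 (Current Temperature)
echo $(( 343 - 273 ))   # -> 70 (Temperature Threshold)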
Controller Memory Buffer Support 00:09:14.134 ================================ 00:09:14.134 Supported: No 00:09:14.134 00:09:14.134 Persistent Memory Region Support 00:09:14.134 ================================ 00:09:14.134 Supported: No 00:09:14.134 00:09:14.134 Admin Command Set Attributes 00:09:14.134 ============================ 00:09:14.134 Security Send/Receive: Not Supported 00:09:14.134 Format NVM: Supported 00:09:14.134 Firmware Activate/Download: Not Supported 00:09:14.134 Namespace Management: Supported 00:09:14.134 Device Self-Test: Not Supported 00:09:14.134 Directives: Supported 00:09:14.134 NVMe-MI: Not Supported 00:09:14.134 Virtualization Management: Not Supported 00:09:14.134 Doorbell Buffer Config: Supported 00:09:14.134 Get LBA Status Capability: Not Supported 00:09:14.134 Command & Feature Lockdown Capability: Not Supported 00:09:14.134 Abort Command Limit: 4 00:09:14.134 Async Event Request Limit: 4 00:09:14.134 Number of Firmware Slots: N/A 00:09:14.134 Firmware Slot 1 Read-Only: N/A 00:09:14.134 Firmware Activation Without Reset: N/A 00:09:14.134 Multiple Update Detection Support: N/A 00:09:14.134 Firmware Update Granularity: No Information Provided 00:09:14.134 Per-Namespace SMART Log: Yes 00:09:14.134 Asymmetric Namespace Access Log Page: Not Supported 00:09:14.134 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:14.134 Command Effects Log Page: Supported 00:09:14.134 Get Log Page Extended Data: Supported 00:09:14.134 Telemetry Log Pages: Not Supported 00:09:14.134 Persistent Event Log Pages: Not Supported 00:09:14.134 Supported Log Pages Log Page: May Support 00:09:14.134 Commands Supported & Effects Log Page: Not Supported 00:09:14.134 Feature Identifiers & Effects Log Page:May Support 00:09:14.134 NVMe-MI Commands & Effects Log Page: May Support 00:09:14.134 Data Area 4 for Telemetry Log: Not Supported 00:09:14.134 Error Log Page Entries Supported: 1 00:09:14.134 Keep Alive: Not Supported 00:09:14.134 00:09:14.134 NVM Command Set Attributes 00:09:14.134 ========================== 00:09:14.135 Submission Queue Entry Size 00:09:14.135 Max: 64 00:09:14.135 Min: 64 00:09:14.135 Completion Queue Entry Size 00:09:14.135 Max: 16 00:09:14.135 Min: 16 00:09:14.135 Number of Namespaces: 256 00:09:14.135 Compare Command: Supported 00:09:14.135 Write Uncorrectable Command: Not Supported 00:09:14.135 Dataset Management Command: Supported 00:09:14.135 Write Zeroes Command: Supported 00:09:14.135 Set Features Save Field: Supported 00:09:14.135 Reservations: Not Supported 00:09:14.135 Timestamp: Supported 00:09:14.135 Copy: Supported 00:09:14.135 Volatile Write Cache: Present 00:09:14.135 Atomic Write Unit (Normal): 1 00:09:14.135 Atomic Write Unit (PFail): 1 00:09:14.135 Atomic Compare & Write Unit: 1 00:09:14.135 Fused Compare & Write: Not Supported 00:09:14.135 Scatter-Gather List 00:09:14.135 SGL Command Set: Supported 00:09:14.135 SGL Keyed: Not Supported 00:09:14.135 SGL Bit Bucket Descriptor: Not Supported 00:09:14.135 SGL Metadata Pointer: Not Supported 00:09:14.135 Oversized SGL: Not Supported 00:09:14.135 SGL Metadata Address: Not Supported 00:09:14.135 SGL Offset: Not Supported 00:09:14.135 Transport SGL Data Block: Not Supported 00:09:14.135 Replay Protected Memory Block: Not Supported 00:09:14.135 00:09:14.135 Firmware Slot Information 00:09:14.135 ========================= 00:09:14.135 Active slot: 1 00:09:14.135 Slot 1 Firmware Revision: 1.0 00:09:14.135 00:09:14.135 00:09:14.135 Commands Supported and Effects 00:09:14.135 ============================== 
00:09:14.135 Admin Commands 00:09:14.135 -------------- 00:09:14.135 Delete I/O Submission Queue (00h): Supported 00:09:14.135 Create I/O Submission Queue (01h): Supported 00:09:14.135 Get Log Page (02h): Supported 00:09:14.135 Delete I/O Completion Queue (04h): Supported 00:09:14.135 Create I/O Completion Queue (05h): Supported 00:09:14.135 Identify (06h): Supported 00:09:14.135 Abort (08h): Supported 00:09:14.135 Set Features (09h): Supported 00:09:14.135 Get Features (0Ah): Supported 00:09:14.135 Asynchronous Event Request (0Ch): Supported 00:09:14.135 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:14.135 Directive Send (19h): Supported 00:09:14.135 Directive Receive (1Ah): Supported 00:09:14.135 Virtualization Management (1Ch): Supported 00:09:14.135 Doorbell Buffer Config (7Ch): Supported 00:09:14.135 Format NVM (80h): Supported LBA-Change 00:09:14.135 I/O Commands 00:09:14.135 ------------ 00:09:14.135 Flush (00h): Supported LBA-Change 00:09:14.135 Write (01h): Supported LBA-Change 00:09:14.135 Read (02h): Supported 00:09:14.135 Compare (05h): Supported 00:09:14.135 Write Zeroes (08h): Supported LBA-Change 00:09:14.135 Dataset Management (09h): Supported LBA-Change 00:09:14.135 Unknown (0Ch): Supported 00:09:14.135 Unknown (12h): Supported 00:09:14.135 Copy (19h): Supported LBA-Change 00:09:14.135 Unknown (1Dh): Supported LBA-Change 00:09:14.135 00:09:14.135 Error Log 00:09:14.135 ========= 00:09:14.135 00:09:14.135 Arbitration 00:09:14.135 =========== 00:09:14.135 Arbitration Burst: no limit 00:09:14.135 00:09:14.135 Power Management 00:09:14.135 ================ 00:09:14.135 Number of Power States: 1 00:09:14.135 Current Power State: Power State #0 00:09:14.135 Power State #0: 00:09:14.135 Max Power: 25.00 W 00:09:14.135 Non-Operational State: Operational 00:09:14.135 Entry Latency: 16 microseconds 00:09:14.135 Exit Latency: 4 microseconds 00:09:14.135 Relative Read Throughput: 0 00:09:14.135 Relative Read Latency: 0 00:09:14.135 Relative Write Throughput: 0 00:09:14.135 Relative Write Latency: 0 00:09:14.135 Idle Power: Not Reported 00:09:14.135 Active Power: Not Reported 00:09:14.135 Non-Operational Permissive Mode: Not Supported 00:09:14.135 00:09:14.135 Health Information 00:09:14.135 ================== 00:09:14.135 Critical Warnings: 00:09:14.135 Available Spare Space: OK 00:09:14.135 Temperature: OK 00:09:14.135 Device Reliability: OK 00:09:14.135 Read Only: No 00:09:14.135 Volatile Memory Backup: OK 00:09:14.135 Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.135 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:14.135 Available Spare: 0% 00:09:14.135 Available Spare Threshold: 0% 00:09:14.135 Life Percentage Used: 0% 00:09:14.135 Data Units Read: 1259 00:09:14.135 Data Units Written: 584 00:09:14.135 Host Read Commands: 61848 00:09:14.135 Host Write Commands: 30457 00:09:14.135 Controller Busy Time: 0 minutes 00:09:14.135 Power Cycles: 0 00:09:14.135 Power On Hours: 0 hours 00:09:14.135 Unsafe Shutdowns: 0 00:09:14.135 Unrecoverable Media Errors: 0 00:09:14.135 Lifetime Error Log Entries: 0 00:09:14.135 Warning Temperature Time: 0 minutes 00:09:14.135 Critical Temperature Time: 0 minutes 00:09:14.135 00:09:14.135 Number of Queues 00:09:14.135 ================ 00:09:14.135 Number of I/O Submission Queues: 64 00:09:14.135 Number of I/O Completion Queues: 64 00:09:14.135 00:09:14.135 ZNS Specific Controller Data 00:09:14.135 ============================ 00:09:14.135 Zone Append Size Limit: 0 00:09:14.135 00:09:14.135 00:09:14.135 Active Namespaces 
00:09:14.135 ================= 00:09:14.135 Namespace ID:1 00:09:14.135 Error Recovery Timeout: Unlimited 00:09:14.135 Command Set Identifier: NVM (00h) 00:09:14.135 Deallocate: Supported 00:09:14.135 Deallocated/Unwritten Error: Supported 00:09:14.135 Deallocated Read Value: All 0x00 00:09:14.135 Deallocate in Write Zeroes: Not Supported 00:09:14.135 Deallocated Guard Field: 0xFFFF 00:09:14.135 Flush: Supported 00:09:14.135 Reservation: Not Supported 00:09:14.135 Namespace Sharing Capabilities: Private 00:09:14.135 Size (in LBAs): 1310720 (5GiB) 00:09:14.135 Capacity (in LBAs): 1310720 (5GiB) 00:09:14.135 Utilization (in LBAs): 1310720 (5GiB) 00:09:14.135 Thin Provisioning: Not Supported 00:09:14.135 Per-NS Atomic Units: No 00:09:14.135 Maximum Single Source Range Length: 128 00:09:14.135 Maximum Copy Length: 128 00:09:14.135 Maximum Source Range Count: 128 00:09:14.135 NGUID/EUI64 Never Reused: No 00:09:14.135 Namespace Write Protected: No 00:09:14.135 Number of LBA Formats: 8 00:09:14.135 Current LBA Format: LBA Format #04 00:09:14.135 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:14.135 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:14.135 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:14.135 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:14.135 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:14.135 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:14.135 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:14.135 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:14.135 00:09:14.135 13:13:28 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:14.135 13:13:28 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:09:14.395 ===================================================== 00:09:14.395 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:14.395 ===================================================== 00:09:14.395 Controller Capabilities/Features 00:09:14.395 ================================ 00:09:14.395 Vendor ID: 1b36 00:09:14.395 Subsystem Vendor ID: 1af4 00:09:14.395 Serial Number: 12342 00:09:14.395 Model Number: QEMU NVMe Ctrl 00:09:14.395 Firmware Version: 8.0.0 00:09:14.395 Recommended Arb Burst: 6 00:09:14.395 IEEE OUI Identifier: 00 54 52 00:09:14.395 Multi-path I/O 00:09:14.395 May have multiple subsystem ports: No 00:09:14.395 May have multiple controllers: No 00:09:14.395 Associated with SR-IOV VF: No 00:09:14.395 Max Data Transfer Size: 524288 00:09:14.395 Max Number of Namespaces: 256 00:09:14.395 Max Number of I/O Queues: 64 00:09:14.395 NVMe Specification Version (VS): 1.4 00:09:14.396 NVMe Specification Version (Identify): 1.4 00:09:14.396 Maximum Queue Entries: 2048 00:09:14.396 Contiguous Queues Required: Yes 00:09:14.396 Arbitration Mechanisms Supported 00:09:14.396 Weighted Round Robin: Not Supported 00:09:14.396 Vendor Specific: Not Supported 00:09:14.396 Reset Timeout: 7500 ms 00:09:14.396 Doorbell Stride: 4 bytes 00:09:14.396 NVM Subsystem Reset: Not Supported 00:09:14.396 Command Sets Supported 00:09:14.396 NVM Command Set: Supported 00:09:14.396 Boot Partition: Not Supported 00:09:14.396 Memory Page Size Minimum: 4096 bytes 00:09:14.396 Memory Page Size Maximum: 65536 bytes 00:09:14.396 Persistent Memory Region: Not Supported 00:09:14.396 Optional Asynchronous Events Supported 00:09:14.396 Namespace Attribute Notices: Supported 00:09:14.396 Firmware Activation Notices: Not Supported 00:09:14.396 ANA Change Notices: Not Supported 
00:09:14.396 PLE Aggregate Log Change Notices: Not Supported 00:09:14.396 LBA Status Info Alert Notices: Not Supported 00:09:14.396 EGE Aggregate Log Change Notices: Not Supported 00:09:14.396 Normal NVM Subsystem Shutdown event: Not Supported 00:09:14.396 Zone Descriptor Change Notices: Not Supported 00:09:14.396 Discovery Log Change Notices: Not Supported 00:09:14.396 Controller Attributes 00:09:14.396 128-bit Host Identifier: Not Supported 00:09:14.396 Non-Operational Permissive Mode: Not Supported 00:09:14.396 NVM Sets: Not Supported 00:09:14.396 Read Recovery Levels: Not Supported 00:09:14.396 Endurance Groups: Not Supported 00:09:14.396 Predictable Latency Mode: Not Supported 00:09:14.396 Traffic Based Keep ALive: Not Supported 00:09:14.396 Namespace Granularity: Not Supported 00:09:14.396 SQ Associations: Not Supported 00:09:14.396 UUID List: Not Supported 00:09:14.396 Multi-Domain Subsystem: Not Supported 00:09:14.396 Fixed Capacity Management: Not Supported 00:09:14.396 Variable Capacity Management: Not Supported 00:09:14.396 Delete Endurance Group: Not Supported 00:09:14.396 Delete NVM Set: Not Supported 00:09:14.396 Extended LBA Formats Supported: Supported 00:09:14.396 Flexible Data Placement Supported: Not Supported 00:09:14.396 00:09:14.396 Controller Memory Buffer Support 00:09:14.396 ================================ 00:09:14.396 Supported: No 00:09:14.396 00:09:14.396 Persistent Memory Region Support 00:09:14.396 ================================ 00:09:14.396 Supported: No 00:09:14.396 00:09:14.396 Admin Command Set Attributes 00:09:14.396 ============================ 00:09:14.396 Security Send/Receive: Not Supported 00:09:14.396 Format NVM: Supported 00:09:14.396 Firmware Activate/Download: Not Supported 00:09:14.396 Namespace Management: Supported 00:09:14.396 Device Self-Test: Not Supported 00:09:14.396 Directives: Supported 00:09:14.396 NVMe-MI: Not Supported 00:09:14.396 Virtualization Management: Not Supported 00:09:14.396 Doorbell Buffer Config: Supported 00:09:14.396 Get LBA Status Capability: Not Supported 00:09:14.396 Command & Feature Lockdown Capability: Not Supported 00:09:14.396 Abort Command Limit: 4 00:09:14.396 Async Event Request Limit: 4 00:09:14.396 Number of Firmware Slots: N/A 00:09:14.396 Firmware Slot 1 Read-Only: N/A 00:09:14.396 Firmware Activation Without Reset: N/A 00:09:14.396 Multiple Update Detection Support: N/A 00:09:14.396 Firmware Update Granularity: No Information Provided 00:09:14.396 Per-Namespace SMART Log: Yes 00:09:14.396 Asymmetric Namespace Access Log Page: Not Supported 00:09:14.396 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:14.396 Command Effects Log Page: Supported 00:09:14.396 Get Log Page Extended Data: Supported 00:09:14.396 Telemetry Log Pages: Not Supported 00:09:14.396 Persistent Event Log Pages: Not Supported 00:09:14.396 Supported Log Pages Log Page: May Support 00:09:14.396 Commands Supported & Effects Log Page: Not Supported 00:09:14.396 Feature Identifiers & Effects Log Page:May Support 00:09:14.396 NVMe-MI Commands & Effects Log Page: May Support 00:09:14.396 Data Area 4 for Telemetry Log: Not Supported 00:09:14.396 Error Log Page Entries Supported: 1 00:09:14.396 Keep Alive: Not Supported 00:09:14.396 00:09:14.396 NVM Command Set Attributes 00:09:14.396 ========================== 00:09:14.396 Submission Queue Entry Size 00:09:14.396 Max: 64 00:09:14.396 Min: 64 00:09:14.396 Completion Queue Entry Size 00:09:14.396 Max: 16 00:09:14.396 Min: 16 00:09:14.396 Number of Namespaces: 256 00:09:14.396 Compare Command: 
Supported 00:09:14.396 Write Uncorrectable Command: Not Supported 00:09:14.396 Dataset Management Command: Supported 00:09:14.396 Write Zeroes Command: Supported 00:09:14.396 Set Features Save Field: Supported 00:09:14.396 Reservations: Not Supported 00:09:14.396 Timestamp: Supported 00:09:14.396 Copy: Supported 00:09:14.396 Volatile Write Cache: Present 00:09:14.396 Atomic Write Unit (Normal): 1 00:09:14.396 Atomic Write Unit (PFail): 1 00:09:14.396 Atomic Compare & Write Unit: 1 00:09:14.396 Fused Compare & Write: Not Supported 00:09:14.396 Scatter-Gather List 00:09:14.396 SGL Command Set: Supported 00:09:14.396 SGL Keyed: Not Supported 00:09:14.396 SGL Bit Bucket Descriptor: Not Supported 00:09:14.396 SGL Metadata Pointer: Not Supported 00:09:14.396 Oversized SGL: Not Supported 00:09:14.396 SGL Metadata Address: Not Supported 00:09:14.396 SGL Offset: Not Supported 00:09:14.396 Transport SGL Data Block: Not Supported 00:09:14.396 Replay Protected Memory Block: Not Supported 00:09:14.396 00:09:14.396 Firmware Slot Information 00:09:14.396 ========================= 00:09:14.396 Active slot: 1 00:09:14.396 Slot 1 Firmware Revision: 1.0 00:09:14.396 00:09:14.396 00:09:14.396 Commands Supported and Effects 00:09:14.396 ============================== 00:09:14.396 Admin Commands 00:09:14.396 -------------- 00:09:14.396 Delete I/O Submission Queue (00h): Supported 00:09:14.396 Create I/O Submission Queue (01h): Supported 00:09:14.396 Get Log Page (02h): Supported 00:09:14.396 Delete I/O Completion Queue (04h): Supported 00:09:14.396 Create I/O Completion Queue (05h): Supported 00:09:14.396 Identify (06h): Supported 00:09:14.396 Abort (08h): Supported 00:09:14.396 Set Features (09h): Supported 00:09:14.396 Get Features (0Ah): Supported 00:09:14.396 Asynchronous Event Request (0Ch): Supported 00:09:14.396 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:14.396 Directive Send (19h): Supported 00:09:14.396 Directive Receive (1Ah): Supported 00:09:14.396 Virtualization Management (1Ch): Supported 00:09:14.396 Doorbell Buffer Config (7Ch): Supported 00:09:14.396 Format NVM (80h): Supported LBA-Change 00:09:14.396 I/O Commands 00:09:14.396 ------------ 00:09:14.396 Flush (00h): Supported LBA-Change 00:09:14.396 Write (01h): Supported LBA-Change 00:09:14.396 Read (02h): Supported 00:09:14.396 Compare (05h): Supported 00:09:14.396 Write Zeroes (08h): Supported LBA-Change 00:09:14.396 Dataset Management (09h): Supported LBA-Change 00:09:14.396 Unknown (0Ch): Supported 00:09:14.396 Unknown (12h): Supported 00:09:14.396 Copy (19h): Supported LBA-Change 00:09:14.396 Unknown (1Dh): Supported LBA-Change 00:09:14.396 00:09:14.396 Error Log 00:09:14.396 ========= 00:09:14.396 00:09:14.396 Arbitration 00:09:14.396 =========== 00:09:14.396 Arbitration Burst: no limit 00:09:14.396 00:09:14.396 Power Management 00:09:14.396 ================ 00:09:14.396 Number of Power States: 1 00:09:14.396 Current Power State: Power State #0 00:09:14.396 Power State #0: 00:09:14.396 Max Power: 25.00 W 00:09:14.396 Non-Operational State: Operational 00:09:14.396 Entry Latency: 16 microseconds 00:09:14.396 Exit Latency: 4 microseconds 00:09:14.396 Relative Read Throughput: 0 00:09:14.396 Relative Read Latency: 0 00:09:14.396 Relative Write Throughput: 0 00:09:14.396 Relative Write Latency: 0 00:09:14.396 Idle Power: Not Reported 00:09:14.396 Active Power: Not Reported 00:09:14.396 Non-Operational Permissive Mode: Not Supported 00:09:14.396 00:09:14.396 Health Information 00:09:14.396 ================== 00:09:14.396 
Critical Warnings: 00:09:14.396 Available Spare Space: OK 00:09:14.396 Temperature: OK 00:09:14.396 Device Reliability: OK 00:09:14.396 Read Only: No 00:09:14.396 Volatile Memory Backup: OK 00:09:14.396 Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.396 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:14.396 Available Spare: 0% 00:09:14.396 Available Spare Threshold: 0% 00:09:14.396 Life Percentage Used: 0% 00:09:14.396 Data Units Read: 3880 00:09:14.396 Data Units Written: 1795 00:09:14.396 Host Read Commands: 187253 00:09:14.396 Host Write Commands: 92003 00:09:14.396 Controller Busy Time: 0 minutes 00:09:14.396 Power Cycles: 0 00:09:14.396 Power On Hours: 0 hours 00:09:14.396 Unsafe Shutdowns: 0 00:09:14.396 Unrecoverable Media Errors: 0 00:09:14.396 Lifetime Error Log Entries: 0 00:09:14.396 Warning Temperature Time: 0 minutes 00:09:14.396 Critical Temperature Time: 0 minutes 00:09:14.396 00:09:14.396 Number of Queues 00:09:14.396 ================ 00:09:14.396 Number of I/O Submission Queues: 64 00:09:14.397 Number of I/O Completion Queues: 64 00:09:14.397 00:09:14.397 ZNS Specific Controller Data 00:09:14.397 ============================ 00:09:14.397 Zone Append Size Limit: 0 00:09:14.397 00:09:14.397 00:09:14.397 Active Namespaces 00:09:14.397 ================= 00:09:14.397 Namespace ID:1 00:09:14.397 Error Recovery Timeout: Unlimited 00:09:14.397 Command Set Identifier: NVM (00h) 00:09:14.397 Deallocate: Supported 00:09:14.397 Deallocated/Unwritten Error: Supported 00:09:14.397 Deallocated Read Value: All 0x00 00:09:14.397 Deallocate in Write Zeroes: Not Supported 00:09:14.397 Deallocated Guard Field: 0xFFFF 00:09:14.397 Flush: Supported 00:09:14.397 Reservation: Not Supported 00:09:14.397 Namespace Sharing Capabilities: Private 00:09:14.397 Size (in LBAs): 1048576 (4GiB) 00:09:14.397 Capacity (in LBAs): 1048576 (4GiB) 00:09:14.397 Utilization (in LBAs): 1048576 (4GiB) 00:09:14.397 Thin Provisioning: Not Supported 00:09:14.397 Per-NS Atomic Units: No 00:09:14.397 Maximum Single Source Range Length: 128 00:09:14.397 Maximum Copy Length: 128 00:09:14.397 Maximum Source Range Count: 128 00:09:14.397 NGUID/EUI64 Never Reused: No 00:09:14.397 Namespace Write Protected: No 00:09:14.397 Number of LBA Formats: 8 00:09:14.397 Current LBA Format: LBA Format #04 00:09:14.397 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:14.397 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:14.397 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:14.397 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:14.397 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:14.397 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:14.397 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:14.397 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:14.397 00:09:14.397 Namespace ID:2 00:09:14.397 Error Recovery Timeout: Unlimited 00:09:14.397 Command Set Identifier: NVM (00h) 00:09:14.397 Deallocate: Supported 00:09:14.397 Deallocated/Unwritten Error: Supported 00:09:14.397 Deallocated Read Value: All 0x00 00:09:14.397 Deallocate in Write Zeroes: Not Supported 00:09:14.397 Deallocated Guard Field: 0xFFFF 00:09:14.397 Flush: Supported 00:09:14.397 Reservation: Not Supported 00:09:14.397 Namespace Sharing Capabilities: Private 00:09:14.397 Size (in LBAs): 1048576 (4GiB) 00:09:14.397 Capacity (in LBAs): 1048576 (4GiB) 00:09:14.397 Utilization (in LBAs): 1048576 (4GiB) 00:09:14.397 Thin Provisioning: Not Supported 00:09:14.397 Per-NS Atomic Units: No 00:09:14.397 Maximum Single 
Source Range Length: 128 00:09:14.397 Maximum Copy Length: 128 00:09:14.397 Maximum Source Range Count: 128 00:09:14.397 NGUID/EUI64 Never Reused: No 00:09:14.397 Namespace Write Protected: No 00:09:14.397 Number of LBA Formats: 8 00:09:14.397 Current LBA Format: LBA Format #04 00:09:14.397 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:14.397 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:14.397 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:14.397 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:14.397 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:14.397 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:14.397 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:14.397 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:14.397 00:09:14.397 Namespace ID:3 00:09:14.397 Error Recovery Timeout: Unlimited 00:09:14.397 Command Set Identifier: NVM (00h) 00:09:14.397 Deallocate: Supported 00:09:14.397 Deallocated/Unwritten Error: Supported 00:09:14.397 Deallocated Read Value: All 0x00 00:09:14.397 Deallocate in Write Zeroes: Not Supported 00:09:14.397 Deallocated Guard Field: 0xFFFF 00:09:14.397 Flush: Supported 00:09:14.397 Reservation: Not Supported 00:09:14.397 Namespace Sharing Capabilities: Private 00:09:14.397 Size (in LBAs): 1048576 (4GiB) 00:09:14.397 Capacity (in LBAs): 1048576 (4GiB) 00:09:14.397 Utilization (in LBAs): 1048576 (4GiB) 00:09:14.397 Thin Provisioning: Not Supported 00:09:14.397 Per-NS Atomic Units: No 00:09:14.397 Maximum Single Source Range Length: 128 00:09:14.397 Maximum Copy Length: 128 00:09:14.397 Maximum Source Range Count: 128 00:09:14.397 NGUID/EUI64 Never Reused: No 00:09:14.397 Namespace Write Protected: No 00:09:14.397 Number of LBA Formats: 8 00:09:14.397 Current LBA Format: LBA Format #04 00:09:14.397 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:14.397 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:14.397 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:14.397 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:14.397 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:14.397 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:14.397 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:14.397 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:14.397 00:09:14.397 13:13:28 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:14.397 13:13:28 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0
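Here nvme.sh walks "${bdfs[@]}" and points spdk_nvme_identify at one controller per pass via an SPDK transport ID string: trtype names the transport (PCIe) and traddr the controller's bus:device.function address. A minimal stand-alone sketch of the same loop, assuming the build path shown above and the four QEMU controllers this run attaches; the -i 0 flag is carried over from the invocation as-is:

    # Sketch: dump identify data for each emulated controller, mirroring the
    # nvme.sh loop above. The BDF list is an assumption taken from this log.
    bdfs=(0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0)
    for bdf in "${bdfs[@]}"; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
            -r "trtype:PCIe traddr:$bdf" -i 0
    done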
00:09:14.656 ===================================================== 00:09:14.656 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:14.656 ===================================================== 00:09:14.656 Controller Capabilities/Features 00:09:14.656 ================================ 00:09:14.656 Vendor ID: 1b36 00:09:14.656 Subsystem Vendor ID: 1af4 00:09:14.656 Serial Number: 12343 00:09:14.656 Model Number: QEMU NVMe Ctrl 00:09:14.656 Firmware Version: 8.0.0 00:09:14.656 Recommended Arb Burst: 6 00:09:14.656 IEEE OUI Identifier: 00 54 52 00:09:14.656 Multi-path I/O 00:09:14.656 May have multiple subsystem ports: No 00:09:14.657 May have multiple controllers: Yes 00:09:14.657 Associated with SR-IOV VF: No 00:09:14.657 Max Data Transfer Size: 524288 00:09:14.657 Max Number of Namespaces: 256 00:09:14.657 Max Number of I/O Queues: 64 00:09:14.657 NVMe Specification Version (VS): 1.4 00:09:14.657 NVMe Specification Version (Identify): 1.4 00:09:14.657 Maximum Queue Entries: 2048 00:09:14.657 Contiguous Queues Required: Yes 00:09:14.657 Arbitration Mechanisms Supported 00:09:14.657 Weighted Round Robin: Not Supported 00:09:14.657 Vendor Specific: Not Supported 00:09:14.657 Reset Timeout: 7500 ms 00:09:14.657 Doorbell Stride: 4 bytes 00:09:14.657 NVM Subsystem Reset: Not Supported 00:09:14.657 Command Sets Supported 00:09:14.657 NVM Command Set: Supported 00:09:14.657 Boot Partition: Not Supported 00:09:14.657 Memory Page Size Minimum: 4096 bytes 00:09:14.657 Memory Page Size Maximum: 65536 bytes 00:09:14.657 Persistent Memory Region: Not Supported 00:09:14.657 Optional Asynchronous Events Supported 00:09:14.657 Namespace Attribute Notices: Supported 00:09:14.657 Firmware Activation Notices: Not Supported 00:09:14.657 ANA Change Notices: Not Supported 00:09:14.657 PLE Aggregate Log Change Notices: Not Supported 00:09:14.657 LBA Status Info Alert Notices: Not Supported 00:09:14.657 EGE Aggregate Log Change Notices: Not Supported 00:09:14.657 Normal NVM Subsystem Shutdown event: Not Supported 00:09:14.657 Zone Descriptor Change Notices: Not Supported 00:09:14.657 Discovery Log Change Notices: Not Supported 00:09:14.657 Controller Attributes 00:09:14.657 128-bit Host Identifier: Not Supported 00:09:14.657 Non-Operational Permissive Mode: Not Supported 00:09:14.657 NVM Sets: Not Supported 00:09:14.657 Read Recovery Levels: Not Supported 00:09:14.657 Endurance Groups: Supported 00:09:14.657 Predictable Latency Mode: Not Supported 00:09:14.657 Traffic Based Keep Alive: Not Supported 00:09:14.657 Namespace Granularity: Not Supported 00:09:14.657 SQ Associations: Not Supported 00:09:14.657 UUID List: Not Supported 00:09:14.657 Multi-Domain Subsystem: Not Supported 00:09:14.657 Fixed Capacity Management: Not Supported 00:09:14.657 Variable Capacity Management: Not Supported 00:09:14.657 Delete Endurance Group: Not Supported 00:09:14.657 Delete NVM Set: Not Supported 00:09:14.657 Extended LBA Formats Supported: Supported 00:09:14.657 Flexible Data Placement Supported: Supported 00:09:14.657 00:09:14.657 Controller Memory Buffer Support 00:09:14.657 ================================ 00:09:14.657 Supported: No 00:09:14.657 00:09:14.657 Persistent Memory Region Support 00:09:14.657 ================================ 00:09:14.657 Supported: No 00:09:14.657 00:09:14.657 Admin Command Set Attributes 00:09:14.657 ============================ 00:09:14.657 Security Send/Receive: Not Supported 00:09:14.657 Format NVM: Supported 00:09:14.657 Firmware Activate/Download: Not Supported 00:09:14.657 Namespace Management: Supported 00:09:14.657 Device Self-Test: Not Supported 00:09:14.657 Directives: Supported 00:09:14.657 NVMe-MI: Not Supported 00:09:14.657 Virtualization Management: Not Supported 00:09:14.657 Doorbell Buffer Config: Supported 00:09:14.657 Get LBA Status Capability: Not Supported 00:09:14.657 Command & Feature Lockdown Capability: Not Supported 00:09:14.657 Abort Command Limit: 4 00:09:14.657 Async Event Request Limit: 4 00:09:14.657 Number of Firmware Slots: N/A 00:09:14.657 Firmware Slot 1 Read-Only: N/A 00:09:14.657 Firmware Activation Without Reset: N/A 00:09:14.657 Multiple Update Detection Support: N/A 00:09:14.657 Firmware Update Granularity: No Information Provided 00:09:14.657 Per-Namespace SMART Log: Yes 00:09:14.657 Asymmetric Namespace Access Log Page: Not Supported 00:09:14.657 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:14.657 Command Effects Log Page: Supported 00:09:14.657 Get Log Page Extended Data: Supported 00:09:14.657 Telemetry Log Pages: Not Supported 00:09:14.657 Persistent
Event Log Pages: Not Supported 00:09:14.657 Supported Log Pages Log Page: May Support 00:09:14.657 Commands Supported & Effects Log Page: Not Supported 00:09:14.657 Feature Identifiers & Effects Log Page: May Support 00:09:14.657 NVMe-MI Commands & Effects Log Page: May Support 00:09:14.657 Data Area 4 for Telemetry Log: Not Supported 00:09:14.657 Error Log Page Entries Supported: 1 00:09:14.657 Keep Alive: Not Supported 00:09:14.657 00:09:14.657 NVM Command Set Attributes 00:09:14.657 ========================== 00:09:14.657 Submission Queue Entry Size 00:09:14.657 Max: 64 00:09:14.657 Min: 64 00:09:14.657 Completion Queue Entry Size 00:09:14.657 Max: 16 00:09:14.657 Min: 16 00:09:14.657 Number of Namespaces: 256 00:09:14.657 Compare Command: Supported 00:09:14.657 Write Uncorrectable Command: Not Supported 00:09:14.657 Dataset Management Command: Supported 00:09:14.657 Write Zeroes Command: Supported 00:09:14.657 Set Features Save Field: Supported 00:09:14.657 Reservations: Not Supported 00:09:14.657 Timestamp: Supported 00:09:14.657 Copy: Supported 00:09:14.657 Volatile Write Cache: Present 00:09:14.657 Atomic Write Unit (Normal): 1 00:09:14.657 Atomic Write Unit (PFail): 1 00:09:14.657 Atomic Compare & Write Unit: 1 00:09:14.657 Fused Compare & Write: Not Supported 00:09:14.657 Scatter-Gather List 00:09:14.657 SGL Command Set: Supported 00:09:14.657 SGL Keyed: Not Supported 00:09:14.657 SGL Bit Bucket Descriptor: Not Supported 00:09:14.657 SGL Metadata Pointer: Not Supported 00:09:14.657 Oversized SGL: Not Supported 00:09:14.657 SGL Metadata Address: Not Supported 00:09:14.657 SGL Offset: Not Supported 00:09:14.657 Transport SGL Data Block: Not Supported 00:09:14.657 Replay Protected Memory Block: Not Supported 00:09:14.657 00:09:14.657 Firmware Slot Information 00:09:14.657 ========================= 00:09:14.657 Active slot: 1 00:09:14.657 Slot 1 Firmware Revision: 1.0 00:09:14.657 00:09:14.657 00:09:14.657 Commands Supported and Effects 00:09:14.657 ============================== 00:09:14.657 Admin Commands 00:09:14.657 -------------- 00:09:14.657 Delete I/O Submission Queue (00h): Supported 00:09:14.657 Create I/O Submission Queue (01h): Supported 00:09:14.657 Get Log Page (02h): Supported 00:09:14.657 Delete I/O Completion Queue (04h): Supported 00:09:14.657 Create I/O Completion Queue (05h): Supported 00:09:14.657 Identify (06h): Supported 00:09:14.657 Abort (08h): Supported 00:09:14.657 Set Features (09h): Supported 00:09:14.657 Get Features (0Ah): Supported 00:09:14.657 Asynchronous Event Request (0Ch): Supported 00:09:14.657 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:14.657 Directive Send (19h): Supported 00:09:14.657 Directive Receive (1Ah): Supported 00:09:14.657 Virtualization Management (1Ch): Supported 00:09:14.657 Doorbell Buffer Config (7Ch): Supported 00:09:14.657 Format NVM (80h): Supported LBA-Change 00:09:14.657 I/O Commands 00:09:14.657 ------------ 00:09:14.657 Flush (00h): Supported LBA-Change 00:09:14.657 Write (01h): Supported LBA-Change 00:09:14.657 Read (02h): Supported 00:09:14.657 Compare (05h): Supported 00:09:14.657 Write Zeroes (08h): Supported LBA-Change 00:09:14.657 Dataset Management (09h): Supported LBA-Change 00:09:14.657 Unknown (0Ch): Supported 00:09:14.657 Unknown (12h): Supported 00:09:14.657 Copy (19h): Supported LBA-Change 00:09:14.657 Unknown (1Dh): Supported LBA-Change 00:09:14.657 00:09:14.657 Error Log 00:09:14.657 ========= 00:09:14.657 00:09:14.657 Arbitration 00:09:14.657 =========== 00:09:14.657 Arbitration Burst: no
limit 00:09:14.657 00:09:14.657 Power Management 00:09:14.657 ================ 00:09:14.657 Number of Power States: 1 00:09:14.657 Current Power State: Power State #0 00:09:14.657 Power State #0: 00:09:14.657 Max Power: 25.00 W 00:09:14.657 Non-Operational State: Operational 00:09:14.657 Entry Latency: 16 microseconds 00:09:14.657 Exit Latency: 4 microseconds 00:09:14.657 Relative Read Throughput: 0 00:09:14.657 Relative Read Latency: 0 00:09:14.657 Relative Write Throughput: 0 00:09:14.657 Relative Write Latency: 0 00:09:14.657 Idle Power: Not Reported 00:09:14.657 Active Power: Not Reported 00:09:14.657 Non-Operational Permissive Mode: Not Supported 00:09:14.657 00:09:14.657 Health Information 00:09:14.657 ================== 00:09:14.657 Critical Warnings: 00:09:14.658 Available Spare Space: OK 00:09:14.658 Temperature: OK 00:09:14.658 Device Reliability: OK 00:09:14.658 Read Only: No 00:09:14.658 Volatile Memory Backup: OK 00:09:14.658 Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.658 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:14.658 Available Spare: 0% 00:09:14.658 Available Spare Threshold: 0% 00:09:14.658 Life Percentage Used: 0% 00:09:14.658 Data Units Read: 1371 00:09:14.658 Data Units Written: 638 00:09:14.658 Host Read Commands: 62974 00:09:14.658 Host Write Commands: 30996 00:09:14.658 Controller Busy Time: 0 minutes 00:09:14.658 Power Cycles: 0 00:09:14.658 Power On Hours: 0 hours 00:09:14.658 Unsafe Shutdowns: 0 00:09:14.658 Unrecoverable Media Errors: 0 00:09:14.658 Lifetime Error Log Entries: 0 00:09:14.658 Warning Temperature Time: 0 minutes 00:09:14.658 Critical Temperature Time: 0 minutes 00:09:14.658 00:09:14.658 Number of Queues 00:09:14.658 ================ 00:09:14.658 Number of I/O Submission Queues: 64 00:09:14.658 Number of I/O Completion Queues: 64 00:09:14.658 00:09:14.658 ZNS Specific Controller Data 00:09:14.658 ============================ 00:09:14.658 Zone Append Size Limit: 0 00:09:14.658 00:09:14.658 00:09:14.658 Active Namespaces 00:09:14.658 ================= 00:09:14.658 Namespace ID:1 00:09:14.658 Error Recovery Timeout: Unlimited 00:09:14.658 Command Set Identifier: NVM (00h) 00:09:14.658 Deallocate: Supported 00:09:14.658 Deallocated/Unwritten Error: Supported 00:09:14.658 Deallocated Read Value: All 0x00 00:09:14.658 Deallocate in Write Zeroes: Not Supported 00:09:14.658 Deallocated Guard Field: 0xFFFF 00:09:14.658 Flush: Supported 00:09:14.658 Reservation: Not Supported 00:09:14.658 Namespace Sharing Capabilities: Multiple Controllers 00:09:14.658 Size (in LBAs): 262144 (1GiB) 00:09:14.658 Capacity (in LBAs): 262144 (1GiB) 00:09:14.658 Utilization (in LBAs): 262144 (1GiB) 00:09:14.658 Thin Provisioning: Not Supported 00:09:14.658 Per-NS Atomic Units: No 00:09:14.658 Maximum Single Source Range Length: 128 00:09:14.658 Maximum Copy Length: 128 00:09:14.658 Maximum Source Range Count: 128 00:09:14.658 NGUID/EUI64 Never Reused: No 00:09:14.658 Namespace Write Protected: No 00:09:14.658 Endurance group ID: 1 00:09:14.658 Number of LBA Formats: 8 00:09:14.658 Current LBA Format: LBA Format #04 00:09:14.658 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:14.658 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:14.658 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:14.658 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:14.658 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:14.658 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:14.658 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:14.658 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:14.658 00:09:14.658
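A note on the sizes in the namespace listing above: counts are in LBAs of the current format (#04, 4096-byte data, no metadata), so the 262144-LBA FDP namespace works out to 262144 * 4096 = 1 GiB, matching the (1GiB) annotation, and the 1048576-LBA private namespaces on the other controllers to 4 GiB. A quick sketch of the conversion:

    # Sketch: LBA count -> capacity under LBA Format #04 (4096-byte data size).
    awk 'BEGIN {
        lba = 4096
        printf "%d LBAs  = %d GiB\n", 262144,  262144  * lba / 2^30
        printf "%d LBAs = %d GiB\n", 1048576, 1048576 * lba / 2^30
    }'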
Get Feature FDP: 00:09:14.658 ================ 00:09:14.658 Enabled: Yes 00:09:14.658 FDP configuration index: 0 00:09:14.658 00:09:14.658 FDP configurations log page 00:09:14.658 =========================== 00:09:14.658 Number of FDP configurations: 1 00:09:14.658 Version: 0 00:09:14.658 Size: 112 00:09:14.658 FDP Configuration Descriptor: 0 00:09:14.658 Descriptor Size: 96 00:09:14.658 Reclaim Group Identifier format: 2 00:09:14.658 FDP Volatile Write Cache: Not Present 00:09:14.658 FDP Configuration: Valid 00:09:14.658 Vendor Specific Size: 0 00:09:14.658 Number of Reclaim Groups: 2 00:09:14.658 Number of Reclaim Unit Handles: 8 00:09:14.658 Max Placement Identifiers: 128 00:09:14.658 Number of Namespaces Supported: 256 00:09:14.658 Reclaim Unit Nominal Size: 6000000 bytes 00:09:14.658 Estimated Reclaim Unit Time Limit: Not Reported 00:09:14.658 RUH Desc #000: RUH Type: Initially Isolated 00:09:14.658 RUH Desc #001: RUH Type: Initially Isolated 00:09:14.658 RUH Desc #002: RUH Type: Initially Isolated 00:09:14.658 RUH Desc #003: RUH Type: Initially Isolated 00:09:14.658 RUH Desc #004: RUH Type: Initially Isolated 00:09:14.658 RUH Desc #005: RUH Type: Initially Isolated 00:09:14.658 RUH Desc #006: RUH Type: Initially Isolated 00:09:14.658 RUH Desc #007: RUH Type: Initially Isolated 00:09:14.658 00:09:14.658 FDP reclaim unit handle usage log page 00:09:14.658 ====================================== 00:09:14.658 Number of Reclaim Unit Handles: 8 00:09:14.658 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:14.658 RUH Usage Desc #001: RUH Attributes: Unused 00:09:14.658 RUH Usage Desc #002: RUH Attributes: Unused 00:09:14.658 RUH Usage Desc #003: RUH Attributes: Unused 00:09:14.658 RUH Usage Desc #004: RUH Attributes: Unused 00:09:14.658 RUH Usage Desc #005: RUH Attributes: Unused 00:09:14.658 RUH Usage Desc #006: RUH Attributes: Unused 00:09:14.658 RUH Usage Desc #007: RUH Attributes: Unused 00:09:14.658 00:09:14.658 FDP statistics log page 00:09:14.658 ======================= 00:09:14.658 Host bytes with metadata written: 432668672 00:09:14.658 Media bytes with metadata written: 432795648 00:09:14.658 Media bytes erased: 0 00:09:14.658 00:09:14.658 FDP events log page 00:09:14.658 =================== 00:09:14.658 Number of FDP events: 0 00:09:14.658 00:09:14.658 00:09:14.658 real 0m1.098s 00:09:14.658 user 0m0.362s 00:09:14.658 sys 0m0.505s 00:09:14.658 13:13:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:14.658 ************************************ 00:09:14.658 END TEST nvme_identify 00:09:14.658 ************************************ 00:09:14.658 13:13:29 -- common/autotest_common.sh@10 -- # set +x 00:09:14.658 13:13:29 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:14.658 13:13:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:14.658 13:13:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:14.658 13:13:29 -- common/autotest_common.sh@10 -- # set +x 00:09:14.658 ************************************ 00:09:14.658 START TEST nvme_perf 00:09:14.658 ************************************ 00:09:14.658 13:13:29 -- common/autotest_common.sh@1114 -- # nvme_perf 00:09:14.658 13:13:29 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
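Decoding the spdk_nvme_perf invocation above: -q 128 keeps 128 I/Os outstanding per namespace, -w read selects a 100% read workload, -o 12288 issues 12 KiB I/Os (three 4096-byte LBAs), and -t 1 runs for one second. Giving -L twice (-LL) requests the detailed latency histograms that follow the summary percentiles below; the readings of -LL, -i (shared-memory id), and -N (skip shutdown notification) come from the tool's usage text rather than from this log, so treat them as assumptions. An annotated sketch of the same run:

    # Sketch: the invocation above with each flag spelled out. Meanings of
    # -LL / -i / -N are the assumptions noted in the lead-in; adjust the path.
    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    args=(
        -q 128      # outstanding I/Os kept queued per namespace
        -w read     # workload: 100% sequential reads
        -o 12288    # I/O size in bytes (12 KiB = three 4 KiB LBAs)
        -t 1        # run time in seconds
        -LL         # software latency tracking, doubled for full histograms
        -i 0        # shared-memory id, as used throughout this job
        -N          # skip shutdown notification on exit
    )
    "$perf" "${args[@]}"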
00:09:16.036 Initializing NVMe Controllers 00:09:16.036 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:16.036 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:16.036 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:16.036 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:16.036 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:16.036 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:16.036 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:16.036 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:16.036 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:16.036 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:16.036 Initialization complete. Launching workers. 00:09:16.036 ======================================================== 00:09:16.036 Latency(us) 00:09:16.036 Device Information : IOPS MiB/s Average min max 00:09:16.036 PCIE (0000:00:09.0) NSID 1 from core 0: 19355.60 226.82 6610.55 4967.79 30678.89 00:09:16.036 PCIE (0000:00:06.0) NSID 1 from core 0: 19355.60 226.82 6604.36 4825.22 29938.49 00:09:16.036 PCIE (0000:00:07.0) NSID 1 from core 0: 19355.60 226.82 6599.63 4609.80 28391.65 00:09:16.036 PCIE (0000:00:08.0) NSID 1 from core 0: 19355.60 226.82 6594.18 4973.36 27738.51 00:09:16.036 PCIE (0000:00:08.0) NSID 2 from core 0: 19355.60 226.82 6588.93 4969.71 26417.09 00:09:16.036 PCIE (0000:00:08.0) NSID 3 from core 0: 19482.94 228.32 6540.49 4974.81 18582.63 00:09:16.036 ======================================================== 00:09:16.036 Total : 116260.95 1362.43 6589.63 4609.80 30678.89 00:09:16.036 00:09:16.036
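Two consistency checks on the table just above (a side calculation, not part of the test run): at the 12288-byte I/O size, the Total row's 116260.95 IOPS corresponds to 116260.95 * 12288 / 2^20 = 1362.43 MiB/s, exactly the MiB/s column; and with a fixed queue depth of 128 per namespace, Little's law (mean latency = queue depth / IOPS) predicts 128 / 19355.60 = about 6613 us, in line with the reported per-device averages around 6610 us:

    # Sketch: recompute the table's derived columns from -q and -o.
    awk 'BEGIN {
        printf "Total MiB/s  : %.2f\n", 116260.95 * 12288 / 2^20
        printf "mean latency : %.2f us\n", 128 / 19355.60 * 1e6
    }'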
Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:16.036 ================================================================================= 00:09:16.036 1.00000% : 5142.055us 00:09:16.036 10.00000% : 5394.117us 00:09:16.036 25.00000% : 5721.797us 00:09:16.036 50.00000% : 6225.920us 00:09:16.036 75.00000% : 6704.837us 00:09:16.036 90.00000% : 7662.671us 00:09:16.036 95.00000% : 9981.637us 00:09:16.036 98.00000% : 11443.594us 00:09:16.036 99.00000% : 12754.314us 00:09:16.036 99.50000% : 28634.191us 00:09:16.036 99.90000% : 30247.385us 00:09:16.036 99.99000% : 30650.683us 00:09:16.036 99.99900% : 30852.332us 00:09:16.036 99.99990% : 30852.332us 00:09:16.036 99.99999% : 30852.332us 00:09:16.036 00:09:16.036 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:16.036 ================================================================================= 00:09:16.036 1.00000% : 4990.818us 00:09:16.036 10.00000% : 5293.292us 00:09:16.036 25.00000% : 5646.178us 00:09:16.036 50.00000% : 6225.920us 00:09:16.036 75.00000% : 6805.662us 00:09:16.036 90.00000% : 7813.908us 00:09:16.036 95.00000% : 9679.163us 00:09:16.036 98.00000% : 11494.006us 00:09:16.036 99.00000% : 13006.375us 00:09:16.036 99.50000% : 27424.295us 00:09:16.036 99.90000% : 29642.437us 00:09:16.036 99.99000% : 30045.735us 00:09:16.036 99.99900% : 30045.735us 00:09:16.036 99.99990% : 30045.735us 00:09:16.036 99.99999% : 30045.735us 00:09:16.036 00:09:16.036 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:16.036 ================================================================================= 00:09:16.036 1.00000% : 5116.849us 00:09:16.036 10.00000% : 5394.117us 00:09:16.036 25.00000% : 5721.797us 00:09:16.036 50.00000% : 6225.920us 00:09:16.036 75.00000% : 6704.837us 00:09:16.036 90.00000% : 7965.145us 00:09:16.036 95.00000% : 9427.102us 00:09:16.036 98.00000% : 11796.480us 00:09:16.036 99.00000% : 13409.674us 00:09:16.036 99.50000% : 26214.400us 00:09:16.036 99.90000% : 28029.243us 00:09:16.036 99.99000% : 28432.542us 00:09:16.036 99.99900% : 28432.542us 00:09:16.036 99.99990% : 28432.542us 00:09:16.036 99.99999% : 28432.542us 00:09:16.036 00:09:16.036 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:16.036 ================================================================================= 00:09:16.036 1.00000% : 5167.262us 00:09:16.036 10.00000% : 5394.117us 00:09:16.036 25.00000% : 5721.797us 00:09:16.036 50.00000% : 6225.920us 00:09:16.036 75.00000% : 6704.837us 00:09:16.036 90.00000% : 7813.908us 00:09:16.036 95.00000% : 9578.338us 00:09:16.036 98.00000% : 11897.305us 00:09:16.036 99.00000% : 14216.271us 00:09:16.036 99.50000% : 25407.803us 00:09:16.036 99.90000% : 27424.295us 00:09:16.036 99.99000% : 27827.594us 00:09:16.036 99.99900% : 27827.594us 00:09:16.036 99.99990% : 27827.594us 00:09:16.036 99.99999% : 27827.594us 00:09:16.036 00:09:16.036 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:16.036 ================================================================================= 00:09:16.036 1.00000% : 5167.262us 00:09:16.036 10.00000% : 5394.117us 00:09:16.036 25.00000% : 5721.797us 00:09:16.036 50.00000% : 6225.920us 00:09:16.036 75.00000% : 6704.837us 00:09:16.036 90.00000% : 7561.846us 00:09:16.036 95.00000% : 9779.988us 00:09:16.036 98.00000% : 12048.542us 00:09:16.036 99.00000% : 14216.271us 00:09:16.036 99.50000% : 24097.083us 00:09:16.036 99.90000% : 26012.751us 00:09:16.036 99.99000% : 26416.049us 00:09:16.036 99.99900% : 26617.698us 00:09:16.036 99.99990% : 26617.698us 00:09:16.036 99.99999% : 26617.698us 00:09:16.036 00:09:16.036 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:16.036 ================================================================================= 00:09:16.036 1.00000% : 5167.262us 00:09:16.036 10.00000% : 5419.323us 00:09:16.036 25.00000% : 5721.797us 00:09:16.036 50.00000% : 6225.920us 00:09:16.036 75.00000% : 6704.837us 00:09:16.036 90.00000% : 7612.258us 00:09:16.036 95.00000% : 9830.400us 00:09:16.036 98.00000% : 11746.068us 00:09:16.036 99.00000% : 13712.148us 00:09:16.036 99.50000% : 16333.588us 00:09:16.036 99.90000% : 18148.431us 00:09:16.036 99.99000% : 18652.554us 00:09:16.036 99.99900% : 18652.554us 00:09:16.036 99.99990% : 18652.554us 00:09:16.036 99.99999% : 18652.554us 00:09:16.036 00:09:16.036 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:16.036 ============================================================================== 00:09:16.036 Range in us Cumulative IO count 00:09:16.036 4965.612 - 4990.818: 0.0154% ( 3) 00:09:16.036 4990.818 - 5016.025: 0.0360% ( 4) 00:09:16.036 5016.025 - 5041.231: 0.1079% ( 14) 00:09:16.036 5041.231 - 5066.437: 0.2673% ( 31) 00:09:16.036 5066.437 - 5091.643: 0.4420% ( 34) 00:09:16.036 5091.643 - 5116.849: 0.7093% ( 52) 00:09:16.036 5116.849 - 5142.055: 1.1051% ( 77) 00:09:16.036 5142.055 - 5167.262: 1.5985% ( 96) 00:09:16.036 5167.262 - 5192.468: 2.2358% ( 124) 00:09:16.036 5192.468 - 5217.674: 3.0376% ( 156) 00:09:16.036 5217.674 - 5242.880: 3.9782% ( 183) 00:09:16.036 5242.880 - 5268.086: 4.9805% ( 195) 00:09:16.036 5268.086 - 5293.292: 6.0238% ( 203) 00:09:16.036 5293.292 - 5318.498: 7.0312% ( 196) 00:09:16.036 5318.498 - 5343.705: 8.0644% ( 201) 00:09:16.036 5343.705 - 5368.911: 9.1334% ( 208) 00:09:16.036 5368.911 - 5394.117: 10.2025% ( 208) 00:09:16.036 5394.117 - 5419.323: 11.2664% ( 207) 00:09:16.037 5419.323 - 5444.529: 12.4178% ( 224) 00:09:16.037 5444.529 - 5469.735: 13.5228% ( 215) 00:09:16.037 5469.735 - 5494.942: 14.6844% (
226) 00:09:16.037 5494.942 - 5520.148: 15.9025% ( 237) 00:09:16.037 5520.148 - 5545.354: 17.1053% ( 234) 00:09:16.037 5545.354 - 5570.560: 18.3131% ( 235) 00:09:16.037 5570.560 - 5595.766: 19.4953% ( 230) 00:09:16.037 5595.766 - 5620.972: 20.6877% ( 232) 00:09:16.037 5620.972 - 5646.178: 21.8699% ( 230) 00:09:16.037 5646.178 - 5671.385: 23.1086% ( 241) 00:09:16.037 5671.385 - 5696.591: 24.3267% ( 237) 00:09:16.037 5696.591 - 5721.797: 25.5345% ( 235) 00:09:16.037 5721.797 - 5747.003: 26.7887% ( 244) 00:09:16.037 5747.003 - 5772.209: 28.0582% ( 247) 00:09:16.037 5772.209 - 5797.415: 29.3329% ( 248) 00:09:16.037 5797.415 - 5822.622: 30.5561% ( 238) 00:09:16.037 5822.622 - 5847.828: 31.8462% ( 251) 00:09:16.037 5847.828 - 5873.034: 33.1157% ( 247) 00:09:16.037 5873.034 - 5898.240: 34.3801% ( 246) 00:09:16.037 5898.240 - 5923.446: 35.6497% ( 247) 00:09:16.037 5923.446 - 5948.652: 36.9141% ( 246) 00:09:16.037 5948.652 - 5973.858: 38.2093% ( 252) 00:09:16.037 5973.858 - 5999.065: 39.5045% ( 252) 00:09:16.037 5999.065 - 6024.271: 40.8357% ( 259) 00:09:16.037 6024.271 - 6049.477: 42.1515% ( 256) 00:09:16.037 6049.477 - 6074.683: 43.4879% ( 260) 00:09:16.037 6074.683 - 6099.889: 44.7780% ( 251) 00:09:16.037 6099.889 - 6125.095: 46.0835% ( 254) 00:09:16.037 6125.095 - 6150.302: 47.3838% ( 253) 00:09:16.037 6150.302 - 6175.508: 48.6791% ( 252) 00:09:16.037 6175.508 - 6200.714: 49.9949% ( 256) 00:09:16.037 6200.714 - 6225.920: 51.2901% ( 252) 00:09:16.037 6225.920 - 6251.126: 52.6007% ( 255) 00:09:16.037 6251.126 - 6276.332: 53.9217% ( 257) 00:09:16.037 6276.332 - 6301.538: 55.2477% ( 258) 00:09:16.037 6301.538 - 6326.745: 56.5738% ( 258) 00:09:16.037 6326.745 - 6351.951: 57.8999% ( 258) 00:09:16.037 6351.951 - 6377.157: 59.1951% ( 252) 00:09:16.037 6377.157 - 6402.363: 60.5315% ( 260) 00:09:16.037 6402.363 - 6427.569: 61.8472% ( 256) 00:09:16.037 6427.569 - 6452.775: 63.1630% ( 256) 00:09:16.037 6452.775 - 6503.188: 65.8049% ( 514) 00:09:16.037 6503.188 - 6553.600: 68.4622% ( 517) 00:09:16.037 6553.600 - 6604.012: 71.0938% ( 512) 00:09:16.037 6604.012 - 6654.425: 73.7048% ( 508) 00:09:16.037 6654.425 - 6704.837: 76.2850% ( 502) 00:09:16.037 6704.837 - 6755.249: 78.6852% ( 467) 00:09:16.037 6755.249 - 6805.662: 80.8234% ( 416) 00:09:16.037 6805.662 - 6856.074: 82.5863% ( 343) 00:09:16.037 6856.074 - 6906.486: 83.8250% ( 241) 00:09:16.037 6906.486 - 6956.898: 84.6680% ( 164) 00:09:16.037 6956.898 - 7007.311: 85.3773% ( 138) 00:09:16.037 7007.311 - 7057.723: 86.0506% ( 131) 00:09:16.037 7057.723 - 7108.135: 86.6674% ( 120) 00:09:16.037 7108.135 - 7158.548: 87.2533% ( 114) 00:09:16.037 7158.548 - 7208.960: 87.7621% ( 99) 00:09:16.037 7208.960 - 7259.372: 88.1579% ( 77) 00:09:16.037 7259.372 - 7309.785: 88.5485% ( 76) 00:09:16.037 7309.785 - 7360.197: 88.9083% ( 70) 00:09:16.037 7360.197 - 7410.609: 89.1807% ( 53) 00:09:16.037 7410.609 - 7461.022: 89.4274% ( 48) 00:09:16.037 7461.022 - 7511.434: 89.6022% ( 34) 00:09:16.037 7511.434 - 7561.846: 89.8026% ( 39) 00:09:16.037 7561.846 - 7612.258: 89.9620% ( 31) 00:09:16.037 7612.258 - 7662.671: 90.1316% ( 33) 00:09:16.037 7662.671 - 7713.083: 90.2601% ( 25) 00:09:16.037 7713.083 - 7763.495: 90.3834% ( 24) 00:09:16.037 7763.495 - 7813.908: 90.5016% ( 23) 00:09:16.037 7813.908 - 7864.320: 90.6301% ( 25) 00:09:16.037 7864.320 - 7914.732: 90.7689% ( 27) 00:09:16.037 7914.732 - 7965.145: 90.8974% ( 25) 00:09:16.037 7965.145 - 8015.557: 91.0362% ( 27) 00:09:16.037 8015.557 - 8065.969: 91.1595% ( 24) 00:09:16.037 8065.969 - 8116.382: 91.2932% ( 26) 00:09:16.037 
8116.382 - 8166.794: 91.4217% ( 25) 00:09:16.037 8166.794 - 8217.206: 91.5502% ( 25) 00:09:16.037 8217.206 - 8267.618: 91.6941% ( 28) 00:09:16.037 8267.618 - 8318.031: 91.8329% ( 27) 00:09:16.037 8318.031 - 8368.443: 91.9665% ( 26) 00:09:16.037 8368.443 - 8418.855: 92.0950% ( 25) 00:09:16.037 8418.855 - 8469.268: 92.2389% ( 28) 00:09:16.037 8469.268 - 8519.680: 92.3828% ( 28) 00:09:16.037 8519.680 - 8570.092: 92.5113% ( 25) 00:09:16.037 8570.092 - 8620.505: 92.6398% ( 25) 00:09:16.037 8620.505 - 8670.917: 92.7632% ( 24) 00:09:16.037 8670.917 - 8721.329: 92.8762% ( 22) 00:09:16.037 8721.329 - 8771.742: 92.9842% ( 21) 00:09:16.037 8771.742 - 8822.154: 93.0870% ( 20) 00:09:16.037 8822.154 - 8872.566: 93.1692% ( 16) 00:09:16.037 8872.566 - 8922.978: 93.2566% ( 17) 00:09:16.037 8922.978 - 8973.391: 93.3440% ( 17) 00:09:16.037 8973.391 - 9023.803: 93.4262% ( 16) 00:09:16.037 9023.803 - 9074.215: 93.5136% ( 17) 00:09:16.037 9074.215 - 9124.628: 93.6009% ( 17) 00:09:16.037 9124.628 - 9175.040: 93.6832% ( 16) 00:09:16.037 9175.040 - 9225.452: 93.7654% ( 16) 00:09:16.037 9225.452 - 9275.865: 93.8528% ( 17) 00:09:16.037 9275.865 - 9326.277: 93.9350% ( 16) 00:09:16.037 9326.277 - 9376.689: 94.0070% ( 14) 00:09:16.037 9376.689 - 9427.102: 94.0841% ( 15) 00:09:16.037 9427.102 - 9477.514: 94.1509% ( 13) 00:09:16.037 9477.514 - 9527.926: 94.2331% ( 16) 00:09:16.037 9527.926 - 9578.338: 94.3051% ( 14) 00:09:16.037 9578.338 - 9628.751: 94.3976% ( 18) 00:09:16.037 9628.751 - 9679.163: 94.5056% ( 21) 00:09:16.037 9679.163 - 9729.575: 94.6032% ( 19) 00:09:16.037 9729.575 - 9779.988: 94.7009% ( 19) 00:09:16.037 9779.988 - 9830.400: 94.7882% ( 17) 00:09:16.037 9830.400 - 9880.812: 94.8859% ( 19) 00:09:16.037 9880.812 - 9931.225: 94.9784% ( 18) 00:09:16.037 9931.225 - 9981.637: 95.0555% ( 15) 00:09:16.037 9981.637 - 10032.049: 95.1480% ( 18) 00:09:16.037 10032.049 - 10082.462: 95.2354% ( 17) 00:09:16.037 10082.462 - 10132.874: 95.3433% ( 21) 00:09:16.037 10132.874 - 10183.286: 95.4461% ( 20) 00:09:16.037 10183.286 - 10233.698: 95.5541% ( 21) 00:09:16.037 10233.698 - 10284.111: 95.6620% ( 21) 00:09:16.037 10284.111 - 10334.523: 95.7751% ( 22) 00:09:16.037 10334.523 - 10384.935: 95.8882% ( 22) 00:09:16.037 10384.935 - 10435.348: 96.0012% ( 22) 00:09:16.037 10435.348 - 10485.760: 96.1092% ( 21) 00:09:16.037 10485.760 - 10536.172: 96.2171% ( 21) 00:09:16.037 10536.172 - 10586.585: 96.3199% ( 20) 00:09:16.037 10586.585 - 10636.997: 96.4330% ( 22) 00:09:16.037 10636.997 - 10687.409: 96.5409% ( 21) 00:09:16.037 10687.409 - 10737.822: 96.6488% ( 21) 00:09:16.037 10737.822 - 10788.234: 96.7568% ( 21) 00:09:16.037 10788.234 - 10838.646: 96.8493% ( 18) 00:09:16.037 10838.646 - 10889.058: 96.9521% ( 20) 00:09:16.037 10889.058 - 10939.471: 97.0600% ( 21) 00:09:16.037 10939.471 - 10989.883: 97.1628% ( 20) 00:09:16.037 10989.883 - 11040.295: 97.2708% ( 21) 00:09:16.037 11040.295 - 11090.708: 97.3838% ( 22) 00:09:16.037 11090.708 - 11141.120: 97.4866% ( 20) 00:09:16.037 11141.120 - 11191.532: 97.5946% ( 21) 00:09:16.037 11191.532 - 11241.945: 97.7025% ( 21) 00:09:16.037 11241.945 - 11292.357: 97.8207% ( 23) 00:09:16.037 11292.357 - 11342.769: 97.9081% ( 17) 00:09:16.037 11342.769 - 11393.182: 97.9955% ( 17) 00:09:16.037 11393.182 - 11443.594: 98.0572% ( 12) 00:09:16.037 11443.594 - 11494.006: 98.1137% ( 11) 00:09:16.037 11494.006 - 11544.418: 98.1600% ( 9) 00:09:16.037 11544.418 - 11594.831: 98.2062% ( 9) 00:09:16.037 11594.831 - 11645.243: 98.2576% ( 10) 00:09:16.037 11645.243 - 11695.655: 98.3193% ( 12) 00:09:16.037 11695.655 
- 11746.068: 98.3655% ( 9) 00:09:16.037 11746.068 - 11796.480: 98.4221% ( 11) 00:09:16.037 11796.480 - 11846.892: 98.4735% ( 10) 00:09:16.037 11846.892 - 11897.305: 98.5197% ( 9) 00:09:16.037 11897.305 - 11947.717: 98.5506% ( 6) 00:09:16.037 11947.717 - 11998.129: 98.5814% ( 6) 00:09:16.037 11998.129 - 12048.542: 98.6123% ( 6) 00:09:16.037 12048.542 - 12098.954: 98.6431% ( 6) 00:09:16.037 12098.954 - 12149.366: 98.6791% ( 7) 00:09:16.037 12149.366 - 12199.778: 98.7048% ( 5) 00:09:16.037 12199.778 - 12250.191: 98.7407% ( 7) 00:09:16.037 12250.191 - 12300.603: 98.7716% ( 6) 00:09:16.037 12300.603 - 12351.015: 98.8024% ( 6) 00:09:16.037 12351.015 - 12401.428: 98.8333% ( 6) 00:09:16.037 12401.428 - 12451.840: 98.8641% ( 6) 00:09:16.037 12451.840 - 12502.252: 98.8949% ( 6) 00:09:16.037 12502.252 - 12552.665: 98.9258% ( 6) 00:09:16.037 12552.665 - 12603.077: 98.9463% ( 4) 00:09:16.037 12603.077 - 12653.489: 98.9669% ( 4) 00:09:16.037 12653.489 - 12703.902: 98.9875% ( 4) 00:09:16.037 12703.902 - 12754.314: 99.0029% ( 3) 00:09:16.037 12754.314 - 12804.726: 99.0080% ( 1) 00:09:16.037 12804.726 - 12855.138: 99.0234% ( 3) 00:09:16.037 12855.138 - 12905.551: 99.0337% ( 2) 00:09:16.037 12905.551 - 13006.375: 99.0543% ( 4) 00:09:16.037 13006.375 - 13107.200: 99.0748% ( 4) 00:09:16.037 13107.200 - 13208.025: 99.0954% ( 4) 00:09:16.037 13208.025 - 13308.849: 99.1211% ( 5) 00:09:16.037 13308.849 - 13409.674: 99.1417% ( 4) 00:09:16.038 13409.674 - 13510.498: 99.1622% ( 4) 00:09:16.038 13510.498 - 13611.323: 99.1828% ( 4) 00:09:16.038 13611.323 - 13712.148: 99.2033% ( 4) 00:09:16.038 13712.148 - 13812.972: 99.2239% ( 4) 00:09:16.038 13812.972 - 13913.797: 99.2496% ( 5) 00:09:16.038 13913.797 - 14014.622: 99.2701% ( 4) 00:09:16.038 14014.622 - 14115.446: 99.2907% ( 4) 00:09:16.038 14115.446 - 14216.271: 99.3061% ( 3) 00:09:16.038 14216.271 - 14317.095: 99.3318% ( 5) 00:09:16.038 14317.095 - 14417.920: 99.3421% ( 2) 00:09:16.038 27625.945 - 27827.594: 99.3524% ( 2) 00:09:16.038 27827.594 - 28029.243: 99.3986% ( 9) 00:09:16.038 28029.243 - 28230.892: 99.4398% ( 8) 00:09:16.038 28230.892 - 28432.542: 99.4860% ( 9) 00:09:16.038 28432.542 - 28634.191: 99.5323% ( 9) 00:09:16.038 28634.191 - 28835.840: 99.5785% ( 9) 00:09:16.038 28835.840 - 29037.489: 99.6248% ( 9) 00:09:16.038 29037.489 - 29239.138: 99.6659% ( 8) 00:09:16.038 29239.138 - 29440.788: 99.7122% ( 9) 00:09:16.038 29440.788 - 29642.437: 99.7584% ( 9) 00:09:16.038 29642.437 - 29844.086: 99.8047% ( 9) 00:09:16.038 29844.086 - 30045.735: 99.8509% ( 9) 00:09:16.038 30045.735 - 30247.385: 99.9023% ( 10) 00:09:16.038 30247.385 - 30449.034: 99.9486% ( 9) 00:09:16.038 30449.034 - 30650.683: 99.9949% ( 9) 00:09:16.038 30650.683 - 30852.332: 100.0000% ( 1) 00:09:16.038 00:09:16.038 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:16.038 ============================================================================== 00:09:16.038 Range in us Cumulative IO count 00:09:16.038 4814.375 - 4839.582: 0.0051% ( 1) 00:09:16.038 4839.582 - 4864.788: 0.0308% ( 5) 00:09:16.038 4864.788 - 4889.994: 0.0925% ( 12) 00:09:16.038 4889.994 - 4915.200: 0.1850% ( 18) 00:09:16.038 4915.200 - 4940.406: 0.3238% ( 27) 00:09:16.038 4940.406 - 4965.612: 0.5962% ( 53) 00:09:16.038 4965.612 - 4990.818: 1.0125% ( 81) 00:09:16.038 4990.818 - 5016.025: 1.4751% ( 90) 00:09:16.038 5016.025 - 5041.231: 2.0045% ( 103) 00:09:16.038 5041.231 - 5066.437: 2.6573% ( 127) 00:09:16.038 5066.437 - 5091.643: 3.5105% ( 166) 00:09:16.038 5091.643 - 5116.849: 4.3637% ( 166) 00:09:16.038 
5116.849 - 5142.055: 5.2220% ( 167) 00:09:16.038 5142.055 - 5167.262: 6.1112% ( 173) 00:09:16.038 5167.262 - 5192.468: 7.0775% ( 188) 00:09:16.038 5192.468 - 5217.674: 7.9410% ( 168) 00:09:16.038 5217.674 - 5242.880: 8.8507% ( 177) 00:09:16.038 5242.880 - 5268.086: 9.7913% ( 183) 00:09:16.038 5268.086 - 5293.292: 10.7062% ( 178) 00:09:16.038 5293.292 - 5318.498: 11.7290% ( 199) 00:09:16.038 5318.498 - 5343.705: 12.6696% ( 183) 00:09:16.038 5343.705 - 5368.911: 13.6205% ( 185) 00:09:16.038 5368.911 - 5394.117: 14.5868% ( 188) 00:09:16.038 5394.117 - 5419.323: 15.6147% ( 200) 00:09:16.038 5419.323 - 5444.529: 16.6684% ( 205) 00:09:16.038 5444.529 - 5469.735: 17.7169% ( 204) 00:09:16.038 5469.735 - 5494.942: 18.7706% ( 205) 00:09:16.038 5494.942 - 5520.148: 19.7934% ( 199) 00:09:16.038 5520.148 - 5545.354: 20.8676% ( 209) 00:09:16.038 5545.354 - 5570.560: 21.9418% ( 209) 00:09:16.038 5570.560 - 5595.766: 22.9749% ( 201) 00:09:16.038 5595.766 - 5620.972: 24.0029% ( 200) 00:09:16.038 5620.972 - 5646.178: 25.1336% ( 220) 00:09:16.038 5646.178 - 5671.385: 26.1924% ( 206) 00:09:16.038 5671.385 - 5696.591: 27.2975% ( 215) 00:09:16.038 5696.591 - 5721.797: 28.3563% ( 206) 00:09:16.038 5721.797 - 5747.003: 29.4151% ( 206) 00:09:16.038 5747.003 - 5772.209: 30.4636% ( 204) 00:09:16.038 5772.209 - 5797.415: 31.6046% ( 222) 00:09:16.038 5797.415 - 5822.622: 32.6789% ( 209) 00:09:16.038 5822.622 - 5847.828: 33.7942% ( 217) 00:09:16.038 5847.828 - 5873.034: 34.8633% ( 208) 00:09:16.038 5873.034 - 5898.240: 35.9735% ( 216) 00:09:16.038 5898.240 - 5923.446: 37.0837% ( 216) 00:09:16.038 5923.446 - 5948.652: 38.1425% ( 206) 00:09:16.038 5948.652 - 5973.858: 39.2681% ( 219) 00:09:16.038 5973.858 - 5999.065: 40.3577% ( 212) 00:09:16.038 5999.065 - 6024.271: 41.4576% ( 214) 00:09:16.038 6024.271 - 6049.477: 42.5421% ( 211) 00:09:16.038 6049.477 - 6074.683: 43.5907% ( 204) 00:09:16.038 6074.683 - 6099.889: 44.7060% ( 217) 00:09:16.038 6099.889 - 6125.095: 45.7802% ( 209) 00:09:16.038 6125.095 - 6150.302: 46.8596% ( 210) 00:09:16.038 6150.302 - 6175.508: 47.8978% ( 202) 00:09:16.038 6175.508 - 6200.714: 49.1262% ( 239) 00:09:16.038 6200.714 - 6225.920: 50.0874% ( 187) 00:09:16.038 6225.920 - 6251.126: 51.2438% ( 225) 00:09:16.038 6251.126 - 6276.332: 52.2564% ( 197) 00:09:16.038 6276.332 - 6301.538: 53.3820% ( 219) 00:09:16.038 6301.538 - 6326.745: 54.4973% ( 217) 00:09:16.038 6326.745 - 6351.951: 55.5715% ( 209) 00:09:16.038 6351.951 - 6377.157: 56.6920% ( 218) 00:09:16.038 6377.157 - 6402.363: 57.8125% ( 218) 00:09:16.038 6402.363 - 6427.569: 58.9278% ( 217) 00:09:16.038 6427.569 - 6452.775: 60.0278% ( 214) 00:09:16.038 6452.775 - 6503.188: 62.2481% ( 432) 00:09:16.038 6503.188 - 6553.600: 64.4377% ( 426) 00:09:16.038 6553.600 - 6604.012: 66.5964% ( 420) 00:09:16.038 6604.012 - 6654.425: 68.7808% ( 425) 00:09:16.038 6654.425 - 6704.837: 71.0526% ( 442) 00:09:16.038 6704.837 - 6755.249: 73.2730% ( 432) 00:09:16.038 6755.249 - 6805.662: 75.5243% ( 438) 00:09:16.038 6805.662 - 6856.074: 77.8115% ( 445) 00:09:16.038 6856.074 - 6906.486: 79.8417% ( 395) 00:09:16.038 6906.486 - 6956.898: 81.6149% ( 345) 00:09:16.038 6956.898 - 7007.311: 83.1466% ( 298) 00:09:16.038 7007.311 - 7057.723: 84.2671% ( 218) 00:09:16.038 7057.723 - 7108.135: 85.0997% ( 162) 00:09:16.038 7108.135 - 7158.548: 85.8039% ( 137) 00:09:16.038 7158.548 - 7208.960: 86.3744% ( 111) 00:09:16.038 7208.960 - 7259.372: 86.9295% ( 108) 00:09:16.038 7259.372 - 7309.785: 87.4280% ( 97) 00:09:16.038 7309.785 - 7360.197: 87.8289% ( 78) 00:09:16.038 7360.197 - 
7410.609: 88.1116% ( 55) 00:09:16.038 7410.609 - 7461.022: 88.4252% ( 61) 00:09:16.038 7461.022 - 7511.434: 88.6976% ( 53) 00:09:16.038 7511.434 - 7561.846: 88.9391% ( 47) 00:09:16.038 7561.846 - 7612.258: 89.2013% ( 51) 00:09:16.038 7612.258 - 7662.671: 89.4428% ( 47) 00:09:16.038 7662.671 - 7713.083: 89.6587% ( 42) 00:09:16.038 7713.083 - 7763.495: 89.8694% ( 41) 00:09:16.038 7763.495 - 7813.908: 90.0545% ( 36) 00:09:16.038 7813.908 - 7864.320: 90.2087% ( 30) 00:09:16.038 7864.320 - 7914.732: 90.3834% ( 34) 00:09:16.038 7914.732 - 7965.145: 90.5222% ( 27) 00:09:16.038 7965.145 - 8015.557: 90.6970% ( 34) 00:09:16.038 8015.557 - 8065.969: 90.8563% ( 31) 00:09:16.038 8065.969 - 8116.382: 91.0208% ( 32) 00:09:16.038 8116.382 - 8166.794: 91.1493% ( 25) 00:09:16.038 8166.794 - 8217.206: 91.2880% ( 27) 00:09:16.038 8217.206 - 8267.618: 91.4165% ( 25) 00:09:16.038 8267.618 - 8318.031: 91.5604% ( 28) 00:09:16.038 8318.031 - 8368.443: 91.6992% ( 27) 00:09:16.038 8368.443 - 8418.855: 91.8483% ( 29) 00:09:16.038 8418.855 - 8469.268: 91.9922% ( 28) 00:09:16.038 8469.268 - 8519.680: 92.1310% ( 27) 00:09:16.038 8519.680 - 8570.092: 92.2749% ( 28) 00:09:16.038 8570.092 - 8620.505: 92.4342% ( 31) 00:09:16.038 8620.505 - 8670.917: 92.5987% ( 32) 00:09:16.038 8670.917 - 8721.329: 92.7580% ( 31) 00:09:16.038 8721.329 - 8771.742: 92.9276% ( 33) 00:09:16.038 8771.742 - 8822.154: 93.0818% ( 30) 00:09:16.038 8822.154 - 8872.566: 93.2309% ( 29) 00:09:16.038 8872.566 - 8922.978: 93.3697% ( 27) 00:09:16.038 8922.978 - 8973.391: 93.5238% ( 30) 00:09:16.038 8973.391 - 9023.803: 93.6678% ( 28) 00:09:16.038 9023.803 - 9074.215: 93.7860% ( 23) 00:09:16.038 9074.215 - 9124.628: 93.9093% ( 24) 00:09:16.038 9124.628 - 9175.040: 94.0481% ( 27) 00:09:16.038 9175.040 - 9225.452: 94.1612% ( 22) 00:09:16.038 9225.452 - 9275.865: 94.2897% ( 25) 00:09:16.038 9275.865 - 9326.277: 94.3925% ( 20) 00:09:16.038 9326.277 - 9376.689: 94.4953% ( 20) 00:09:16.038 9376.689 - 9427.102: 94.6186% ( 24) 00:09:16.038 9427.102 - 9477.514: 94.7060% ( 17) 00:09:16.038 9477.514 - 9527.926: 94.7882% ( 16) 00:09:16.038 9527.926 - 9578.338: 94.8602% ( 14) 00:09:16.038 9578.338 - 9628.751: 94.9373% ( 15) 00:09:16.038 9628.751 - 9679.163: 95.0144% ( 15) 00:09:16.038 9679.163 - 9729.575: 95.0709% ( 11) 00:09:16.038 9729.575 - 9779.988: 95.1532% ( 16) 00:09:16.038 9779.988 - 9830.400: 95.2251% ( 14) 00:09:16.038 9830.400 - 9880.812: 95.3125% ( 17) 00:09:16.038 9880.812 - 9931.225: 95.4102% ( 19) 00:09:16.038 9931.225 - 9981.637: 95.5130% ( 20) 00:09:16.038 9981.637 - 10032.049: 95.6106% ( 19) 00:09:16.038 10032.049 - 10082.462: 95.7083% ( 19) 00:09:16.038 10082.462 - 10132.874: 95.8008% ( 18) 00:09:16.038 10132.874 - 10183.286: 95.8984% ( 19) 00:09:16.038 10183.286 - 10233.698: 96.0064% ( 21) 00:09:16.038 10233.698 - 10284.111: 96.1040% ( 19) 00:09:16.038 10284.111 - 10334.523: 96.2171% ( 22) 00:09:16.038 10334.523 - 10384.935: 96.3353% ( 23) 00:09:16.038 10384.935 - 10435.348: 96.4227% ( 17) 00:09:16.038 10435.348 - 10485.760: 96.5152% ( 18) 00:09:16.038 10485.760 - 10536.172: 96.6077% ( 18) 00:09:16.038 10536.172 - 10586.585: 96.6848% ( 15) 00:09:16.038 10586.585 - 10636.997: 96.7568% ( 14) 00:09:16.038 10636.997 - 10687.409: 96.8287% ( 14) 00:09:16.038 10687.409 - 10737.822: 96.9058% ( 15) 00:09:16.038 10737.822 - 10788.234: 96.9675% ( 12) 00:09:16.038 10788.234 - 10838.646: 97.0498% ( 16) 00:09:16.038 10838.646 - 10889.058: 97.1166% ( 13) 00:09:16.038 10889.058 - 10939.471: 97.1937% ( 15) 00:09:16.038 10939.471 - 10989.883: 97.2656% ( 14) 
00:09:16.038 10989.883 - 11040.295: 97.3427% ( 15) 00:09:16.038 11040.295 - 11090.708: 97.4095% ( 13) 00:09:16.038 11090.708 - 11141.120: 97.5021% ( 18) 00:09:16.038 11141.120 - 11191.532: 97.5637% ( 12) 00:09:16.038 11191.532 - 11241.945: 97.6357% ( 14) 00:09:16.038 11241.945 - 11292.357: 97.7128% ( 15) 00:09:16.038 11292.357 - 11342.769: 97.7899% ( 15) 00:09:16.039 11342.769 - 11393.182: 97.8618% ( 14) 00:09:16.039 11393.182 - 11443.594: 97.9389% ( 15) 00:09:16.039 11443.594 - 11494.006: 98.0006% ( 12) 00:09:16.039 11494.006 - 11544.418: 98.0572% ( 11) 00:09:16.039 11544.418 - 11594.831: 98.1188% ( 12) 00:09:16.039 11594.831 - 11645.243: 98.1754% ( 11) 00:09:16.039 11645.243 - 11695.655: 98.2165% ( 8) 00:09:16.039 11695.655 - 11746.068: 98.2627% ( 9) 00:09:16.039 11746.068 - 11796.480: 98.3039% ( 8) 00:09:16.039 11796.480 - 11846.892: 98.3501% ( 9) 00:09:16.039 11846.892 - 11897.305: 98.3861% ( 7) 00:09:16.039 11897.305 - 11947.717: 98.4324% ( 9) 00:09:16.039 11947.717 - 11998.129: 98.4735% ( 8) 00:09:16.039 11998.129 - 12048.542: 98.5043% ( 6) 00:09:16.039 12048.542 - 12098.954: 98.5352% ( 6) 00:09:16.039 12098.954 - 12149.366: 98.5557% ( 4) 00:09:16.039 12149.366 - 12199.778: 98.5866% ( 6) 00:09:16.039 12199.778 - 12250.191: 98.6071% ( 4) 00:09:16.039 12250.191 - 12300.603: 98.6380% ( 6) 00:09:16.039 12300.603 - 12351.015: 98.6637% ( 5) 00:09:16.039 12351.015 - 12401.428: 98.6842% ( 4) 00:09:16.039 12401.428 - 12451.840: 98.7202% ( 7) 00:09:16.039 12451.840 - 12502.252: 98.7407% ( 4) 00:09:16.039 12502.252 - 12552.665: 98.7716% ( 6) 00:09:16.039 12552.665 - 12603.077: 98.8024% ( 6) 00:09:16.039 12603.077 - 12653.489: 98.8230% ( 4) 00:09:16.039 12653.489 - 12703.902: 98.8538% ( 6) 00:09:16.039 12703.902 - 12754.314: 98.8795% ( 5) 00:09:16.039 12754.314 - 12804.726: 98.9052% ( 5) 00:09:16.039 12804.726 - 12855.138: 98.9206% ( 3) 00:09:16.039 12855.138 - 12905.551: 98.9566% ( 7) 00:09:16.039 12905.551 - 13006.375: 99.0080% ( 10) 00:09:16.039 13006.375 - 13107.200: 99.0646% ( 11) 00:09:16.039 13107.200 - 13208.025: 99.1160% ( 10) 00:09:16.039 13208.025 - 13308.849: 99.1674% ( 10) 00:09:16.039 13308.849 - 13409.674: 99.2188% ( 10) 00:09:16.039 13409.674 - 13510.498: 99.2547% ( 7) 00:09:16.039 13510.498 - 13611.323: 99.2856% ( 6) 00:09:16.039 13611.323 - 13712.148: 99.3061% ( 4) 00:09:16.039 13712.148 - 13812.972: 99.3267% ( 4) 00:09:16.039 13812.972 - 13913.797: 99.3421% ( 3) 00:09:16.039 26416.049 - 26617.698: 99.3627% ( 4) 00:09:16.039 26617.698 - 26819.348: 99.4038% ( 8) 00:09:16.039 26819.348 - 27020.997: 99.4398% ( 7) 00:09:16.039 27020.997 - 27222.646: 99.4757% ( 7) 00:09:16.039 27222.646 - 27424.295: 99.5169% ( 8) 00:09:16.039 27424.295 - 27625.945: 99.5580% ( 8) 00:09:16.039 27625.945 - 27827.594: 99.5991% ( 8) 00:09:16.039 27827.594 - 28029.243: 99.6402% ( 8) 00:09:16.039 28029.243 - 28230.892: 99.6813% ( 8) 00:09:16.039 28230.892 - 28432.542: 99.7173% ( 7) 00:09:16.039 28432.542 - 28634.191: 99.7584% ( 8) 00:09:16.039 28634.191 - 28835.840: 99.7944% ( 7) 00:09:16.039 28835.840 - 29037.489: 99.8355% ( 8) 00:09:16.039 29037.489 - 29239.138: 99.8561% ( 4) 00:09:16.039 29239.138 - 29440.788: 99.8972% ( 8) 00:09:16.039 29440.788 - 29642.437: 99.9383% ( 8) 00:09:16.039 29642.437 - 29844.086: 99.9794% ( 8) 00:09:16.039 29844.086 - 30045.735: 100.0000% ( 4) 00:09:16.039 00:09:16.039 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:16.039 ============================================================================== 00:09:16.039 Range in us Cumulative IO count 
00:09:16.039 4587.520 - 4612.726: 0.0051% ( 1) 00:09:16.039 4612.726 - 4637.932: 0.0360% ( 6) 00:09:16.039 4637.932 - 4663.138: 0.0514% ( 3) 00:09:16.039 4663.138 - 4688.345: 0.0565% ( 1) 00:09:16.039 4688.345 - 4713.551: 0.0617% ( 1) 00:09:16.039 4713.551 - 4738.757: 0.0668% ( 1) 00:09:16.039 4738.757 - 4763.963: 0.0771% ( 2) 00:09:16.039 4763.963 - 4789.169: 0.0874% ( 2) 00:09:16.039 4789.169 - 4814.375: 0.0977% ( 2) 00:09:16.039 4814.375 - 4839.582: 0.1028% ( 1) 00:09:16.039 4839.582 - 4864.788: 0.1131% ( 2) 00:09:16.039 4864.788 - 4889.994: 0.1388% ( 5) 00:09:16.039 4889.994 - 4915.200: 0.1542% ( 3) 00:09:16.039 4915.200 - 4940.406: 0.1902% ( 7) 00:09:16.039 4940.406 - 4965.612: 0.2210% ( 6) 00:09:16.039 4965.612 - 4990.818: 0.2519% ( 6) 00:09:16.039 4990.818 - 5016.025: 0.2981% ( 9) 00:09:16.039 5016.025 - 5041.231: 0.3803% ( 16) 00:09:16.039 5041.231 - 5066.437: 0.4986% ( 23) 00:09:16.039 5066.437 - 5091.643: 0.7556% ( 50) 00:09:16.039 5091.643 - 5116.849: 1.0948% ( 66) 00:09:16.039 5116.849 - 5142.055: 1.5419% ( 87) 00:09:16.039 5142.055 - 5167.262: 2.1073% ( 110) 00:09:16.039 5167.262 - 5192.468: 2.8372% ( 142) 00:09:16.039 5192.468 - 5217.674: 3.6184% ( 152) 00:09:16.039 5217.674 - 5242.880: 4.5436% ( 180) 00:09:16.039 5242.880 - 5268.086: 5.5407% ( 194) 00:09:16.039 5268.086 - 5293.292: 6.5841% ( 203) 00:09:16.039 5293.292 - 5318.498: 7.5966% ( 197) 00:09:16.039 5318.498 - 5343.705: 8.6040% ( 196) 00:09:16.039 5343.705 - 5368.911: 9.6525% ( 204) 00:09:16.039 5368.911 - 5394.117: 10.7422% ( 212) 00:09:16.039 5394.117 - 5419.323: 11.8267% ( 211) 00:09:16.039 5419.323 - 5444.529: 12.9831% ( 225) 00:09:16.039 5444.529 - 5469.735: 14.0831% ( 214) 00:09:16.039 5469.735 - 5494.942: 15.2087% ( 219) 00:09:16.039 5494.942 - 5520.148: 16.3960% ( 231) 00:09:16.039 5520.148 - 5545.354: 17.6192% ( 238) 00:09:16.039 5545.354 - 5570.560: 18.8168% ( 233) 00:09:16.039 5570.560 - 5595.766: 20.0144% ( 233) 00:09:16.039 5595.766 - 5620.972: 21.2428% ( 239) 00:09:16.039 5620.972 - 5646.178: 22.4507% ( 235) 00:09:16.039 5646.178 - 5671.385: 23.7099% ( 245) 00:09:16.039 5671.385 - 5696.591: 24.9280% ( 237) 00:09:16.039 5696.591 - 5721.797: 26.1359% ( 235) 00:09:16.039 5721.797 - 5747.003: 27.3283% ( 232) 00:09:16.039 5747.003 - 5772.209: 28.5362% ( 235) 00:09:16.039 5772.209 - 5797.415: 29.7389% ( 234) 00:09:16.039 5797.415 - 5822.622: 30.9519% ( 236) 00:09:16.039 5822.622 - 5847.828: 32.1957% ( 242) 00:09:16.039 5847.828 - 5873.034: 33.4601% ( 246) 00:09:16.039 5873.034 - 5898.240: 34.7039% ( 242) 00:09:16.039 5898.240 - 5923.446: 35.9529% ( 243) 00:09:16.039 5923.446 - 5948.652: 37.2122% ( 245) 00:09:16.039 5948.652 - 5973.858: 38.4817% ( 247) 00:09:16.039 5973.858 - 5999.065: 39.7564% ( 248) 00:09:16.039 5999.065 - 6024.271: 41.0413% ( 250) 00:09:16.039 6024.271 - 6049.477: 42.3006% ( 245) 00:09:16.039 6049.477 - 6074.683: 43.5598% ( 245) 00:09:16.039 6074.683 - 6099.889: 44.8139% ( 244) 00:09:16.039 6099.889 - 6125.095: 46.0526% ( 241) 00:09:16.039 6125.095 - 6150.302: 47.3170% ( 246) 00:09:16.039 6150.302 - 6175.508: 48.5506% ( 240) 00:09:16.039 6175.508 - 6200.714: 49.7995% ( 243) 00:09:16.039 6200.714 - 6225.920: 51.0691% ( 247) 00:09:16.039 6225.920 - 6251.126: 52.3129% ( 242) 00:09:16.039 6251.126 - 6276.332: 53.5156% ( 234) 00:09:16.039 6276.332 - 6301.538: 54.7492% ( 240) 00:09:16.039 6301.538 - 6326.745: 56.0187% ( 247) 00:09:16.039 6326.745 - 6351.951: 57.2728% ( 244) 00:09:16.039 6351.951 - 6377.157: 58.5629% ( 251) 00:09:16.039 6377.157 - 6402.363: 59.8581% ( 252) 00:09:16.039 6402.363 - 
6427.569: 61.1688% ( 255) 00:09:16.039 6427.569 - 6452.775: 62.4075% ( 241) 00:09:16.039 6452.775 - 6503.188: 64.9979% ( 504) 00:09:16.039 6503.188 - 6553.600: 67.6398% ( 514) 00:09:16.039 6553.600 - 6604.012: 70.1840% ( 495) 00:09:16.039 6604.012 - 6654.425: 72.7745% ( 504) 00:09:16.039 6654.425 - 6704.837: 75.3341% ( 498) 00:09:16.039 6704.837 - 6755.249: 77.7138% ( 463) 00:09:16.039 6755.249 - 6805.662: 79.7595% ( 398) 00:09:16.039 6805.662 - 6856.074: 81.4916% ( 337) 00:09:16.039 6856.074 - 6906.486: 82.7405% ( 243) 00:09:16.039 6906.486 - 6956.898: 83.5732% ( 162) 00:09:16.039 6956.898 - 7007.311: 84.3133% ( 144) 00:09:16.039 7007.311 - 7057.723: 84.9558% ( 125) 00:09:16.039 7057.723 - 7108.135: 85.5417% ( 114) 00:09:16.039 7108.135 - 7158.548: 86.0866% ( 106) 00:09:16.039 7158.548 - 7208.960: 86.5337% ( 87) 00:09:16.039 7208.960 - 7259.372: 86.8935% ( 70) 00:09:16.039 7259.372 - 7309.785: 87.2070% ( 61) 00:09:16.039 7309.785 - 7360.197: 87.4537% ( 48) 00:09:16.039 7360.197 - 7410.609: 87.7262% ( 53) 00:09:16.039 7410.609 - 7461.022: 88.0088% ( 55) 00:09:16.039 7461.022 - 7511.434: 88.3069% ( 58) 00:09:16.039 7511.434 - 7561.846: 88.5382% ( 45) 00:09:16.039 7561.846 - 7612.258: 88.7541% ( 42) 00:09:16.039 7612.258 - 7662.671: 88.9597% ( 40) 00:09:16.039 7662.671 - 7713.083: 89.1293% ( 33) 00:09:16.039 7713.083 - 7763.495: 89.3606% ( 45) 00:09:16.039 7763.495 - 7813.908: 89.5611% ( 39) 00:09:16.039 7813.908 - 7864.320: 89.7564% ( 38) 00:09:16.039 7864.320 - 7914.732: 89.9722% ( 42) 00:09:16.039 7914.732 - 7965.145: 90.1830% ( 41) 00:09:16.039 7965.145 - 8015.557: 90.4091% ( 44) 00:09:16.039 8015.557 - 8065.969: 90.6044% ( 38) 00:09:16.039 8065.969 - 8116.382: 90.8203% ( 42) 00:09:16.039 8116.382 - 8166.794: 91.0310% ( 41) 00:09:16.039 8166.794 - 8217.206: 91.2572% ( 44) 00:09:16.039 8217.206 - 8267.618: 91.4885% ( 45) 00:09:16.039 8267.618 - 8318.031: 91.7095% ( 43) 00:09:16.039 8318.031 - 8368.443: 91.9254% ( 42) 00:09:16.039 8368.443 - 8418.855: 92.1104% ( 36) 00:09:16.039 8418.855 - 8469.268: 92.3057% ( 38) 00:09:16.039 8469.268 - 8519.680: 92.5010% ( 38) 00:09:16.039 8519.680 - 8570.092: 92.6706% ( 33) 00:09:16.039 8570.092 - 8620.505: 92.8300% ( 31) 00:09:16.039 8620.505 - 8670.917: 92.9739% ( 28) 00:09:16.039 8670.917 - 8721.329: 93.1127% ( 27) 00:09:16.039 8721.329 - 8771.742: 93.2566% ( 28) 00:09:16.039 8771.742 - 8822.154: 93.3851% ( 25) 00:09:16.039 8822.154 - 8872.566: 93.5290% ( 28) 00:09:16.040 8872.566 - 8922.978: 93.6626% ( 26) 00:09:16.040 8922.978 - 8973.391: 93.7860% ( 24) 00:09:16.040 8973.391 - 9023.803: 93.9196% ( 26) 00:09:16.040 9023.803 - 9074.215: 94.0378% ( 23) 00:09:16.040 9074.215 - 9124.628: 94.1612% ( 24) 00:09:16.040 9124.628 - 9175.040: 94.2845% ( 24) 00:09:16.040 9175.040 - 9225.452: 94.4079% ( 24) 00:09:16.040 9225.452 - 9275.865: 94.5724% ( 32) 00:09:16.040 9275.865 - 9326.277: 94.7317% ( 31) 00:09:16.040 9326.277 - 9376.689: 94.8910% ( 31) 00:09:16.040 9376.689 - 9427.102: 95.0658% ( 34) 00:09:16.040 9427.102 - 9477.514: 95.2148% ( 29) 00:09:16.040 9477.514 - 9527.926: 95.3382% ( 24) 00:09:16.040 9527.926 - 9578.338: 95.4616% ( 24) 00:09:16.040 9578.338 - 9628.751: 95.5695% ( 21) 00:09:16.040 9628.751 - 9679.163: 95.6466% ( 15) 00:09:16.040 9679.163 - 9729.575: 95.7237% ( 15) 00:09:16.040 9729.575 - 9779.988: 95.8008% ( 15) 00:09:16.040 9779.988 - 9830.400: 95.8676% ( 13) 00:09:16.040 9830.400 - 9880.812: 95.9241% ( 11) 00:09:16.040 9880.812 - 9931.225: 95.9858% ( 12) 00:09:16.040 9931.225 - 9981.637: 96.0424% ( 11) 00:09:16.040 9981.637 - 10032.049: 
00:09:16.040 [remaining latency histogram buckets for the preceding namespace: up to 28432.542us, cumulative to 100.0000%]
00:09:16.040
00:09:16.040 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0:
00:09:16.040 ==============================================================================
00:09:16.040        Range in us     Cumulative    IO count
00:09:16.041 [latency histogram buckets: 4965.612us - 27827.594us, cumulative 0.0103% to 100.0000%]
00:09:16.042
00:09:16.042 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0:
00:09:16.042 ==============================================================================
00:09:16.042        Range in us     Cumulative    IO count
00:09:16.043 [latency histogram buckets: 4965.612us - 26617.698us, cumulative 0.0154% to 100.0000%]
00:09:16.043
00:09:16.043 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0:
00:09:16.043 ==============================================================================
00:09:16.043        Range in us     Cumulative    IO count
00:09:16.044 [latency histogram buckets: 4965.612us - 18652.554us, cumulative 0.0153% to 100.0000%]
00:09:16.044
00:09:16.044  13:13:30 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:09:17.459 Initializing NVMe Controllers
00:09:17.459 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010]
00:09:17.459 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:09:17.459 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010]
00:09:17.459 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010]
00:09:17.459 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0
00:09:17.459 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:09:17.459 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0
00:09:17.459 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0
00:09:17.459 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0
00:09:17.459 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0
00:09:17.459 Initialization complete. Launching workers.
00:09:17.459 ========================================================
00:09:17.459                                                                 Latency(us)
00:09:17.459 Device Information                     :       IOPS      MiB/s    Average        min        max
00:09:17.459 PCIE (0000:00:09.0) NSID 1 from core 0:   19980.55     234.15    6403.97    5060.58   26815.59
00:09:17.459 PCIE (0000:00:06.0) NSID 1 from core 0:   19980.55     234.15    6398.46    4915.23   26841.11
00:09:17.459 PCIE (0000:00:07.0) NSID 1 from core 0:   19980.55     234.15    6392.47    5087.25   26002.74
00:09:17.459 PCIE (0000:00:08.0) NSID 1 from core 0:   19980.55     234.15    6386.67    5126.60   25021.67
00:09:17.459 PCIE (0000:00:08.0) NSID 2 from core 0:   19980.55     234.15    6381.07    5060.55   24338.18
00:09:17.459 PCIE (0000:00:08.0) NSID 3 from core 0:   19980.55     234.15    6375.35    5124.88   23465.94
00:09:17.459 ========================================================
00:09:17.459 Total                                  :  119883.31    1404.88    6389.67    4915.23   26841.11
00:09:17.459
00:09:17.459 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0:
00:09:17.459 =================================================================================
00:09:17.459   1.00000% :  5570.560us
00:09:17.459  10.00000% :  5822.622us
00:09:17.459  25.00000% :  6024.271us
00:09:17.459  50.00000% :  6225.920us
00:09:17.459  75.00000% :  6503.188us
00:09:17.459  90.00000% :  6856.074us
00:09:17.459  95.00000% :  7057.723us
00:09:17.459  98.00000% :  7561.846us
00:09:17.459  99.00000% : 10032.049us
00:09:17.459  99.50000% : 23391.311us
00:09:17.459  99.90000% : 26214.400us
00:09:17.459  99.99000% : 26819.348us
00:09:17.459  99.99900% : 26819.348us
00:09:17.459  99.99990% : 26819.348us
00:09:17.459  99.99999% : 26819.348us
00:09:17.459
00:09:17.459 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0:
00:09:17.459 =================================================================================
00:09:17.459   1.00000% :  5343.705us
00:09:17.459  10.00000% :  5595.766us
00:09:17.459  25.00000% :  5822.622us
00:09:17.459  50.00000% :  6225.920us
00:09:17.459  75.00000% :  6704.837us
00:09:17.459  90.00000% :  7108.135us
00:09:17.459  95.00000% :  7309.785us
00:09:17.459  98.00000% :  7713.083us
00:09:17.459  99.00000% :  9225.452us
00:09:17.459  99.50000% : 23592.960us
00:09:17.459  99.90000% : 26416.049us
00:09:17.459  99.99000% : 26819.348us
00:09:17.459  99.99900% : 27020.997us
00:09:17.459  99.99990% : 27020.997us
00:09:17.459  99.99999% : 27020.997us
00:09:17.459
00:09:17.459 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0:
00:09:17.459 =================================================================================
00:09:17.459   1.00000% :  5570.560us
00:09:17.459  10.00000% :  5822.622us
00:09:17.459  25.00000% :  6024.271us
00:09:17.459  50.00000% :  6225.920us
00:09:17.459  75.00000% :  6503.188us
00:09:17.459  90.00000% :  6856.074us
00:09:17.459  95.00000% :  7057.723us
00:09:17.459  98.00000% :  7461.022us
00:09:17.459  99.00000% :  9074.215us
00:09:17.459  99.50000% : 23290.486us
00:09:17.459  99.90000% : 25407.803us
00:09:17.459  99.99000% : 26012.751us
00:09:17.459  99.99900% : 26012.751us
00:09:17.459  99.99990% : 26012.751us
00:09:17.459  99.99999% : 26012.751us
00:09:17.459
00:09:17.459 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0:
00:09:17.459 =================================================================================
00:09:17.459   1.00000% :  5545.354us
00:09:17.459  10.00000% :  5797.415us
00:09:17.459  25.00000% :  5999.065us
00:09:17.459  50.00000% :  6225.920us
00:09:17.459  75.00000% :  6553.600us
00:09:17.459  90.00000% :  6856.074us
00:09:17.459  95.00000% :  7057.723us
00:09:17.459  98.00000% :  7561.846us
00:09:17.459  99.00000% :  8620.505us
00:09:17.459  99.50000% : 23189.662us
00:09:17.459  99.90000% : 24399.557us
00:09:17.459  99.99000% : 25004.505us
00:09:17.459  99.99900% : 25105.329us
00:09:17.459  99.99990% : 25105.329us
00:09:17.459  99.99999% : 25105.329us
00:09:17.459
00:09:17.459 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0:
00:09:17.459 =================================================================================
00:09:17.459   1.00000% :  5545.354us
00:09:17.459  10.00000% :  5822.622us
00:09:17.459  25.00000% :  5999.065us
00:09:17.459  50.00000% :  6225.920us
00:09:17.459  75.00000% :  6503.188us
00:09:17.459  90.00000% :  6856.074us
00:09:17.459  95.00000% :  7057.723us
00:09:17.459  98.00000% :  7763.495us
00:09:17.459  99.00000% :  8217.206us
00:09:17.459  99.50000% : 22282.240us
00:09:17.459  99.90000% : 23693.785us
00:09:17.459  99.99000% : 24298.732us
00:09:17.459  99.99900% : 24399.557us
00:09:17.459  99.99990% : 24399.557us
00:09:17.459  99.99999% : 24399.557us
00:09:17.459
00:09:17.460 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0:
00:09:17.460 =================================================================================
00:09:17.460   1.00000% :  5545.354us
00:09:17.460  10.00000% :  5797.415us
00:09:17.460  25.00000% :  5999.065us
00:09:17.460  50.00000% :  6225.920us
00:09:17.460  75.00000% :  6503.188us
00:09:17.460  90.00000% :  6856.074us
00:09:17.460  95.00000% :  7057.723us
00:09:17.460  98.00000% :  7612.258us
00:09:17.460  99.00000% :  8469.268us
00:09:17.460  99.50000% : 21677.292us
00:09:17.460  99.90000% : 22786.363us
00:09:17.460  99.99000% : 23492.135us
00:09:17.460  99.99900% : 23492.135us
00:09:17.460  99.99990% : 23492.135us
00:09:17.460  99.99999% : 23492.135us
00:09:17.460
00:09:17.460 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0:
00:09:17.460 ==============================================================================
00:09:17.460        Range in us     Cumulative    IO count
00:09:17.460 [latency histogram buckets: 5041.231us - 26819.348us, cumulative 0.0100% to 100.0000%]
00:09:17.460
00:09:17.460 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0:
00:09:17.460 ==============================================================================
00:09:17.460        Range in us     Cumulative    IO count
00:09:17.461 [latency histogram buckets: 4915.200us - 27020.997us, cumulative 0.0050% to 100.0000%]
00:09:17.461
00:09:17.461 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0:
00:09:17.461 ==============================================================================
00:09:17.461        Range in us     Cumulative    IO count
00:09:17.462 [latency histogram buckets: 5066.437us - 26012.751us, cumulative 0.0050% to 100.0000%]
00:09:17.462
00:09:17.462 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0:
00:09:17.462 ==============================================================================
00:09:17.462        Range in us     Cumulative    IO count
00:09:17.462 [latency histogram buckets: 5116.849us - 5973.858us, cumulative 0.0100% to 23.1190%]
00:09:17.462 5973.858 - 5999.065:
25.8509% ( 549) 00:09:17.462 5999.065 - 6024.271: 27.9707% ( 426) 00:09:17.462 6024.271 - 6049.477: 30.7026% ( 549) 00:09:17.462 6049.477 - 6074.683: 34.1113% ( 685) 00:09:17.462 6074.683 - 6099.889: 36.8929% ( 559) 00:09:17.462 6099.889 - 6125.095: 39.7193% ( 568) 00:09:17.462 6125.095 - 6150.302: 42.4711% ( 553) 00:09:17.462 6150.302 - 6175.508: 45.2528% ( 559) 00:09:17.462 6175.508 - 6200.714: 48.2932% ( 611) 00:09:17.462 6200.714 - 6225.920: 51.2241% ( 589) 00:09:17.462 6225.920 - 6251.126: 53.6624% ( 490) 00:09:17.462 6251.126 - 6276.332: 56.3645% ( 543) 00:09:17.462 6276.332 - 6301.538: 58.7381% ( 477) 00:09:17.462 6301.538 - 6326.745: 61.1067% ( 476) 00:09:17.462 6326.745 - 6351.951: 63.6694% ( 515) 00:09:17.462 6351.951 - 6377.157: 65.7743% ( 423) 00:09:17.462 6377.157 - 6402.363: 67.8543% ( 418) 00:09:17.462 6402.363 - 6427.569: 69.7054% ( 372) 00:09:17.462 6427.569 - 6452.775: 71.5665% ( 374) 00:09:17.462 6452.775 - 6503.188: 74.6865% ( 627) 00:09:17.462 6503.188 - 6553.600: 77.4781% ( 561) 00:09:17.462 6553.600 - 6604.012: 79.8617% ( 479) 00:09:17.462 6604.012 - 6654.425: 82.3248% ( 495) 00:09:17.462 6654.425 - 6704.837: 84.6039% ( 458) 00:09:17.462 6704.837 - 6755.249: 86.8232% ( 446) 00:09:17.462 6755.249 - 6805.662: 88.7689% ( 391) 00:09:17.462 6805.662 - 6856.074: 90.5752% ( 363) 00:09:17.462 6856.074 - 6906.486: 92.2174% ( 330) 00:09:17.462 6906.486 - 6956.898: 93.6007% ( 278) 00:09:17.462 6956.898 - 7007.311: 94.6805% ( 217) 00:09:17.463 7007.311 - 7057.723: 95.4270% ( 150) 00:09:17.463 7057.723 - 7108.135: 95.9992% ( 115) 00:09:17.463 7108.135 - 7158.548: 96.3923% ( 79) 00:09:17.463 7158.548 - 7208.960: 96.7108% ( 64) 00:09:17.463 7208.960 - 7259.372: 96.9745% ( 53) 00:09:17.463 7259.372 - 7309.785: 97.1984% ( 45) 00:09:17.463 7309.785 - 7360.197: 97.3477% ( 30) 00:09:17.463 7360.197 - 7410.609: 97.6115% ( 53) 00:09:17.463 7410.609 - 7461.022: 97.7359% ( 25) 00:09:17.463 7461.022 - 7511.434: 97.8453% ( 22) 00:09:17.463 7511.434 - 7561.846: 98.0543% ( 42) 00:09:17.463 7561.846 - 7612.258: 98.2832% ( 46) 00:09:17.463 7612.258 - 7662.671: 98.4624% ( 36) 00:09:17.463 7662.671 - 7713.083: 98.5121% ( 10) 00:09:17.463 7713.083 - 7763.495: 98.5669% ( 11) 00:09:17.463 7763.495 - 7813.908: 98.6117% ( 9) 00:09:17.463 7813.908 - 7864.320: 98.6465% ( 7) 00:09:17.463 7864.320 - 7914.732: 98.6714% ( 5) 00:09:17.463 7914.732 - 7965.145: 98.7012% ( 6) 00:09:17.463 7965.145 - 8015.557: 98.7261% ( 5) 00:09:17.463 8015.557 - 8065.969: 98.7410% ( 3) 00:09:17.463 8166.794 - 8217.206: 98.7460% ( 1) 00:09:17.463 8418.855 - 8469.268: 98.8008% ( 11) 00:09:17.463 8469.268 - 8519.680: 98.8953% ( 19) 00:09:17.463 8519.680 - 8570.092: 98.9799% ( 17) 00:09:17.463 8570.092 - 8620.505: 99.0496% ( 14) 00:09:17.463 8620.505 - 8670.917: 99.1093% ( 12) 00:09:17.463 8670.917 - 8721.329: 99.1839% ( 15) 00:09:17.463 8721.329 - 8771.742: 99.2088% ( 5) 00:09:17.463 8771.742 - 8822.154: 99.2237% ( 3) 00:09:17.463 8822.154 - 8872.566: 99.2337% ( 2) 00:09:17.463 8872.566 - 8922.978: 99.2436% ( 2) 00:09:17.463 8922.978 - 8973.391: 99.2586% ( 3) 00:09:17.463 8973.391 - 9023.803: 99.2685% ( 2) 00:09:17.463 9023.803 - 9074.215: 99.2834% ( 3) 00:09:17.463 9074.215 - 9124.628: 99.2934% ( 2) 00:09:17.463 9124.628 - 9175.040: 99.3083% ( 3) 00:09:17.463 9175.040 - 9225.452: 99.3183% ( 2) 00:09:17.463 9225.452 - 9275.865: 99.3282% ( 2) 00:09:17.463 9275.865 - 9326.277: 99.3432% ( 3) 00:09:17.463 9326.277 - 9376.689: 99.3531% ( 2) 00:09:17.463 9376.689 - 9427.102: 99.3631% ( 2) 00:09:17.463 21979.766 - 22080.591: 99.3680% ( 1) 
00:09:17.463 22685.538 - 22786.363: 99.3879% ( 4) 00:09:17.463 22786.363 - 22887.188: 99.4178% ( 6) 00:09:17.463 22887.188 - 22988.012: 99.4526% ( 7) 00:09:17.463 22988.012 - 23088.837: 99.4825% ( 6) 00:09:17.463 23088.837 - 23189.662: 99.5472% ( 13) 00:09:17.463 23189.662 - 23290.486: 99.6417% ( 19) 00:09:17.463 23290.486 - 23391.311: 99.7363% ( 19) 00:09:17.463 23391.311 - 23492.135: 99.7761% ( 8) 00:09:17.463 23492.135 - 23592.960: 99.7910% ( 3) 00:09:17.463 23592.960 - 23693.785: 99.8059% ( 3) 00:09:17.463 23693.785 - 23794.609: 99.8209% ( 3) 00:09:17.463 23794.609 - 23895.434: 99.8358% ( 3) 00:09:17.463 23895.434 - 23996.258: 99.8507% ( 3) 00:09:17.463 23996.258 - 24097.083: 99.8656% ( 3) 00:09:17.463 24097.083 - 24197.908: 99.8806% ( 3) 00:09:17.463 24197.908 - 24298.732: 99.8955% ( 3) 00:09:17.463 24298.732 - 24399.557: 99.9104% ( 3) 00:09:17.463 24399.557 - 24500.382: 99.9204% ( 2) 00:09:17.463 24500.382 - 24601.206: 99.9353% ( 3) 00:09:17.463 24601.206 - 24702.031: 99.9502% ( 3) 00:09:17.463 24702.031 - 24802.855: 99.9652% ( 3) 00:09:17.463 24802.855 - 24903.680: 99.9801% ( 3) 00:09:17.463 24903.680 - 25004.505: 99.9950% ( 3) 00:09:17.463 25004.505 - 25105.329: 100.0000% ( 1) 00:09:17.463 00:09:17.463 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:17.463 ============================================================================== 00:09:17.463 Range in us Cumulative IO count 00:09:17.463 5041.231 - 5066.437: 0.0050% ( 1) 00:09:17.463 5091.643 - 5116.849: 0.0100% ( 1) 00:09:17.463 5116.849 - 5142.055: 0.0199% ( 2) 00:09:17.463 5142.055 - 5167.262: 0.0299% ( 2) 00:09:17.463 5167.262 - 5192.468: 0.0448% ( 3) 00:09:17.463 5192.468 - 5217.674: 0.0597% ( 3) 00:09:17.463 5217.674 - 5242.880: 0.0697% ( 2) 00:09:17.463 5242.880 - 5268.086: 0.1045% ( 7) 00:09:17.463 5268.086 - 5293.292: 0.1393% ( 7) 00:09:17.463 5293.292 - 5318.498: 0.1642% ( 5) 00:09:17.463 5318.498 - 5343.705: 0.1941% ( 6) 00:09:17.463 5343.705 - 5368.911: 0.2289% ( 7) 00:09:17.463 5368.911 - 5394.117: 0.2787% ( 10) 00:09:17.463 5394.117 - 5419.323: 0.3035% ( 5) 00:09:17.463 5419.323 - 5444.529: 0.3633% ( 12) 00:09:17.463 5444.529 - 5469.735: 0.4976% ( 27) 00:09:17.463 5469.735 - 5494.942: 0.6568% ( 32) 00:09:17.463 5494.942 - 5520.148: 0.9156% ( 52) 00:09:17.463 5520.148 - 5545.354: 1.2042% ( 58) 00:09:17.463 5545.354 - 5570.560: 1.6023% ( 80) 00:09:17.463 5570.560 - 5595.766: 2.0551% ( 91) 00:09:17.463 5595.766 - 5620.972: 2.6971% ( 129) 00:09:17.463 5620.972 - 5646.178: 3.3290% ( 127) 00:09:17.463 5646.178 - 5671.385: 4.0854% ( 152) 00:09:17.463 5671.385 - 5696.591: 5.0259% ( 189) 00:09:17.463 5696.591 - 5721.797: 6.1007% ( 216) 00:09:17.463 5721.797 - 5747.003: 7.1805% ( 217) 00:09:17.463 5747.003 - 5772.209: 8.4942% ( 264) 00:09:17.463 5772.209 - 5797.415: 9.8875% ( 280) 00:09:17.463 5797.415 - 5822.622: 11.4003% ( 304) 00:09:17.463 5822.622 - 5847.828: 12.9628% ( 314) 00:09:17.463 5847.828 - 5873.034: 14.6547% ( 340) 00:09:17.463 5873.034 - 5898.240: 16.4162% ( 354) 00:09:17.463 5898.240 - 5923.446: 18.3171% ( 382) 00:09:17.463 5923.446 - 5948.652: 20.5514% ( 449) 00:09:17.463 5948.652 - 5973.858: 22.7110% ( 434) 00:09:17.463 5973.858 - 5999.065: 25.4031% ( 541) 00:09:17.463 5999.065 - 6024.271: 28.2096% ( 564) 00:09:17.463 6024.271 - 6049.477: 30.8619% ( 533) 00:09:17.463 6049.477 - 6074.683: 34.1113% ( 653) 00:09:17.463 6074.683 - 6099.889: 36.8780% ( 556) 00:09:17.463 6099.889 - 6125.095: 39.6248% ( 552) 00:09:17.463 6125.095 - 6150.302: 42.5408% ( 586) 00:09:17.463 6150.302 - 6175.508: 
45.1980% ( 534) 00:09:17.463 6175.508 - 6200.714: 48.4524% ( 654) 00:09:17.463 6200.714 - 6225.920: 51.8710% ( 687) 00:09:17.463 6225.920 - 6251.126: 54.7174% ( 572) 00:09:17.463 6251.126 - 6276.332: 57.3945% ( 538) 00:09:17.463 6276.332 - 6301.538: 60.0368% ( 531) 00:09:17.463 6301.538 - 6326.745: 62.5299% ( 501) 00:09:17.463 6326.745 - 6351.951: 64.8139% ( 459) 00:09:17.463 6351.951 - 6377.157: 66.9337% ( 426) 00:09:17.463 6377.157 - 6402.363: 69.0834% ( 432) 00:09:17.463 6402.363 - 6427.569: 71.0589% ( 397) 00:09:17.463 6427.569 - 6452.775: 72.5866% ( 307) 00:09:17.463 6452.775 - 6503.188: 75.1543% ( 516) 00:09:17.463 6503.188 - 6553.600: 77.7468% ( 521) 00:09:17.463 6553.600 - 6604.012: 80.1005% ( 473) 00:09:17.463 6604.012 - 6654.425: 82.4940% ( 481) 00:09:17.463 6654.425 - 6704.837: 84.6736% ( 438) 00:09:17.463 6704.837 - 6755.249: 86.7834% ( 424) 00:09:17.463 6755.249 - 6805.662: 89.1770% ( 481) 00:09:17.463 6805.662 - 6856.074: 90.8539% ( 337) 00:09:17.463 6856.074 - 6906.486: 92.4562% ( 322) 00:09:17.463 6906.486 - 6956.898: 93.7351% ( 257) 00:09:17.463 6956.898 - 7007.311: 94.7104% ( 196) 00:09:17.463 7007.311 - 7057.723: 95.4120% ( 141) 00:09:17.463 7057.723 - 7108.135: 95.8449% ( 87) 00:09:17.463 7108.135 - 7158.548: 96.1535% ( 62) 00:09:17.463 7158.548 - 7208.960: 96.4769% ( 65) 00:09:17.463 7208.960 - 7259.372: 96.7705% ( 59) 00:09:17.463 7259.372 - 7309.785: 96.9546% ( 37) 00:09:17.463 7309.785 - 7360.197: 97.1686% ( 43) 00:09:17.463 7360.197 - 7410.609: 97.3229% ( 31) 00:09:17.463 7410.609 - 7461.022: 97.4771% ( 31) 00:09:17.463 7461.022 - 7511.434: 97.6214% ( 29) 00:09:17.463 7511.434 - 7561.846: 97.7458% ( 25) 00:09:17.463 7561.846 - 7612.258: 97.8802% ( 27) 00:09:17.463 7612.258 - 7662.671: 97.9449% ( 13) 00:09:17.463 7662.671 - 7713.083: 97.9996% ( 11) 00:09:17.463 7713.083 - 7763.495: 98.0543% ( 11) 00:09:17.463 7763.495 - 7813.908: 98.0941% ( 8) 00:09:17.463 7813.908 - 7864.320: 98.1539% ( 12) 00:09:17.463 7864.320 - 7914.732: 98.2982% ( 29) 00:09:17.463 7914.732 - 7965.145: 98.4226% ( 25) 00:09:17.463 7965.145 - 8015.557: 98.5768% ( 31) 00:09:17.463 8015.557 - 8065.969: 98.6764% ( 20) 00:09:17.463 8065.969 - 8116.382: 98.7709% ( 19) 00:09:17.463 8116.382 - 8166.794: 98.8605% ( 18) 00:09:17.463 8166.794 - 8217.206: 99.0396% ( 36) 00:09:17.463 8217.206 - 8267.618: 99.1242% ( 17) 00:09:17.463 8267.618 - 8318.031: 99.1640% ( 8) 00:09:17.463 8318.031 - 8368.443: 99.1839% ( 4) 00:09:17.463 8368.443 - 8418.855: 99.2088% ( 5) 00:09:17.463 8418.855 - 8469.268: 99.2337% ( 5) 00:09:17.463 8469.268 - 8519.680: 99.2536% ( 4) 00:09:17.463 8519.680 - 8570.092: 99.2635% ( 2) 00:09:17.463 8570.092 - 8620.505: 99.2735% ( 2) 00:09:17.463 8620.505 - 8670.917: 99.2884% ( 3) 00:09:17.463 8670.917 - 8721.329: 99.2984% ( 2) 00:09:17.463 8721.329 - 8771.742: 99.3133% ( 3) 00:09:17.463 8771.742 - 8822.154: 99.3232% ( 2) 00:09:17.463 8822.154 - 8872.566: 99.3382% ( 3) 00:09:17.463 8872.566 - 8922.978: 99.3481% ( 2) 00:09:17.463 8922.978 - 8973.391: 99.3581% ( 2) 00:09:17.463 8973.391 - 9023.803: 99.3631% ( 1) 00:09:17.463 21374.818 - 21475.643: 99.3680% ( 1) 00:09:17.463 21878.942 - 21979.766: 99.3879% ( 4) 00:09:17.463 21979.766 - 22080.591: 99.4277% ( 8) 00:09:17.463 22080.591 - 22181.415: 99.4676% ( 8) 00:09:17.463 22181.415 - 22282.240: 99.5173% ( 10) 00:09:17.463 22282.240 - 22383.065: 99.6168% ( 20) 00:09:17.463 22383.065 - 22483.889: 99.6815% ( 13) 00:09:17.463 22483.889 - 22584.714: 99.7114% ( 6) 00:09:17.463 22584.714 - 22685.538: 99.7263% ( 3) 00:09:17.463 22685.538 - 22786.363: 
99.7462% ( 4) 00:09:17.463 22786.363 - 22887.188: 99.7611% ( 3) 00:09:17.463 22887.188 - 22988.012: 99.7811% ( 4) 00:09:17.463 22988.012 - 23088.837: 99.8010% ( 4) 00:09:17.463 23088.837 - 23189.662: 99.8258% ( 5) 00:09:17.463 23189.662 - 23290.486: 99.8358% ( 2) 00:09:17.463 23290.486 - 23391.311: 99.8607% ( 5) 00:09:17.463 23391.311 - 23492.135: 99.8756% ( 3) 00:09:17.463 23492.135 - 23592.960: 99.8905% ( 3) 00:09:17.463 23592.960 - 23693.785: 99.9055% ( 3) 00:09:17.463 23693.785 - 23794.609: 99.9204% ( 3) 00:09:17.464 23794.609 - 23895.434: 99.9353% ( 3) 00:09:17.464 23895.434 - 23996.258: 99.9502% ( 3) 00:09:17.464 23996.258 - 24097.083: 99.9652% ( 3) 00:09:17.464 24097.083 - 24197.908: 99.9751% ( 2) 00:09:17.464 24197.908 - 24298.732: 99.9900% ( 3) 00:09:17.464 24298.732 - 24399.557: 100.0000% ( 2) 00:09:17.464 00:09:17.464 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:17.464 ============================================================================== 00:09:17.464 Range in us Cumulative IO count 00:09:17.464 5116.849 - 5142.055: 0.0199% ( 4) 00:09:17.464 5142.055 - 5167.262: 0.0249% ( 1) 00:09:17.464 5167.262 - 5192.468: 0.0348% ( 2) 00:09:17.464 5192.468 - 5217.674: 0.0398% ( 1) 00:09:17.464 5217.674 - 5242.880: 0.0498% ( 2) 00:09:17.464 5242.880 - 5268.086: 0.0697% ( 4) 00:09:17.464 5268.086 - 5293.292: 0.0896% ( 4) 00:09:17.464 5293.292 - 5318.498: 0.1194% ( 6) 00:09:17.464 5318.498 - 5343.705: 0.1393% ( 4) 00:09:17.464 5343.705 - 5368.911: 0.1692% ( 6) 00:09:17.464 5368.911 - 5394.117: 0.2040% ( 7) 00:09:17.464 5394.117 - 5419.323: 0.2438% ( 8) 00:09:17.464 5419.323 - 5444.529: 0.2986% ( 11) 00:09:17.464 5444.529 - 5469.735: 0.3881% ( 18) 00:09:17.464 5469.735 - 5494.942: 0.5275% ( 28) 00:09:17.464 5494.942 - 5520.148: 0.7962% ( 54) 00:09:17.464 5520.148 - 5545.354: 1.1346% ( 68) 00:09:17.464 5545.354 - 5570.560: 1.6172% ( 97) 00:09:17.464 5570.560 - 5595.766: 2.0949% ( 96) 00:09:17.464 5595.766 - 5620.972: 2.6771% ( 117) 00:09:17.464 5620.972 - 5646.178: 3.3340% ( 132) 00:09:17.464 5646.178 - 5671.385: 4.2197% ( 178) 00:09:17.464 5671.385 - 5696.591: 5.1752% ( 192) 00:09:17.464 5696.591 - 5721.797: 6.2649% ( 219) 00:09:17.464 5721.797 - 5747.003: 7.4343% ( 235) 00:09:17.464 5747.003 - 5772.209: 8.6833% ( 251) 00:09:17.464 5772.209 - 5797.415: 10.0368% ( 272) 00:09:17.464 5797.415 - 5822.622: 11.6143% ( 317) 00:09:17.464 5822.622 - 5847.828: 13.1568% ( 310) 00:09:17.464 5847.828 - 5873.034: 14.8587% ( 342) 00:09:17.464 5873.034 - 5898.240: 16.6103% ( 352) 00:09:17.464 5898.240 - 5923.446: 18.8097% ( 442) 00:09:17.464 5923.446 - 5948.652: 20.8897% ( 418) 00:09:17.464 5948.652 - 5973.858: 23.2434% ( 473) 00:09:17.464 5973.858 - 5999.065: 25.5922% ( 472) 00:09:17.464 5999.065 - 6024.271: 27.9658% ( 477) 00:09:17.464 6024.271 - 6049.477: 30.8221% ( 574) 00:09:17.464 6049.477 - 6074.683: 33.4743% ( 533) 00:09:17.464 6074.683 - 6099.889: 35.9226% ( 492) 00:09:17.464 6099.889 - 6125.095: 38.9281% ( 604) 00:09:17.464 6125.095 - 6150.302: 41.5705% ( 531) 00:09:17.464 6150.302 - 6175.508: 44.8248% ( 654) 00:09:17.464 6175.508 - 6200.714: 48.1190% ( 662) 00:09:17.464 6200.714 - 6225.920: 51.3485% ( 649) 00:09:17.464 6225.920 - 6251.126: 54.3889% ( 611) 00:09:17.464 6251.126 - 6276.332: 57.0760% ( 540) 00:09:17.464 6276.332 - 6301.538: 60.3105% ( 650) 00:09:17.464 6301.538 - 6326.745: 62.6841% ( 477) 00:09:17.464 6326.745 - 6351.951: 64.9035% ( 446) 00:09:17.464 6351.951 - 6377.157: 66.6750% ( 356) 00:09:17.464 6377.157 - 6402.363: 68.9043% ( 448) 00:09:17.464 6402.363 
- 6427.569: 70.3623% ( 293) 00:09:17.464 6427.569 - 6452.775: 72.1537% ( 360) 00:09:17.464 6452.775 - 6503.188: 75.0100% ( 574) 00:09:17.464 6503.188 - 6553.600: 78.0255% ( 606) 00:09:17.464 6553.600 - 6604.012: 80.4041% ( 478) 00:09:17.464 6604.012 - 6654.425: 82.8772% ( 497) 00:09:17.464 6654.425 - 6704.837: 85.1911% ( 465) 00:09:17.464 6704.837 - 6755.249: 87.3706% ( 438) 00:09:17.464 6755.249 - 6805.662: 89.7890% ( 486) 00:09:17.464 6805.662 - 6856.074: 91.4709% ( 338) 00:09:17.464 6856.074 - 6906.486: 92.8643% ( 280) 00:09:17.464 6906.486 - 6956.898: 93.9889% ( 226) 00:09:17.464 6956.898 - 7007.311: 94.9094% ( 185) 00:09:17.464 7007.311 - 7057.723: 95.5563% ( 130) 00:09:17.464 7057.723 - 7108.135: 95.9743% ( 84) 00:09:17.464 7108.135 - 7158.548: 96.3873% ( 83) 00:09:17.464 7158.548 - 7208.960: 96.6312% ( 49) 00:09:17.464 7208.960 - 7259.372: 96.8352% ( 41) 00:09:17.464 7259.372 - 7309.785: 97.0193% ( 37) 00:09:17.464 7309.785 - 7360.197: 97.1984% ( 36) 00:09:17.464 7360.197 - 7410.609: 97.3527% ( 31) 00:09:17.464 7410.609 - 7461.022: 97.5169% ( 33) 00:09:17.464 7461.022 - 7511.434: 97.6712% ( 31) 00:09:17.464 7511.434 - 7561.846: 97.8354% ( 33) 00:09:17.464 7561.846 - 7612.258: 98.0096% ( 35) 00:09:17.464 7612.258 - 7662.671: 98.1638% ( 31) 00:09:17.464 7662.671 - 7713.083: 98.2932% ( 26) 00:09:17.464 7713.083 - 7763.495: 98.3828% ( 18) 00:09:17.464 7763.495 - 7813.908: 98.4425% ( 12) 00:09:17.464 7813.908 - 7864.320: 98.4873% ( 9) 00:09:17.464 7864.320 - 7914.732: 98.5370% ( 10) 00:09:17.464 7914.732 - 7965.145: 98.5719% ( 7) 00:09:17.464 7965.145 - 8015.557: 98.6266% ( 11) 00:09:17.464 8015.557 - 8065.969: 98.6664% ( 8) 00:09:17.464 8065.969 - 8116.382: 98.7012% ( 7) 00:09:17.464 8116.382 - 8166.794: 98.7410% ( 8) 00:09:17.464 8166.794 - 8217.206: 98.7908% ( 10) 00:09:17.464 8217.206 - 8267.618: 98.8306% ( 8) 00:09:17.464 8267.618 - 8318.031: 98.8854% ( 11) 00:09:17.464 8318.031 - 8368.443: 98.9252% ( 8) 00:09:17.464 8368.443 - 8418.855: 98.9699% ( 9) 00:09:17.464 8418.855 - 8469.268: 99.0346% ( 13) 00:09:17.464 8469.268 - 8519.680: 99.0894% ( 11) 00:09:17.464 8519.680 - 8570.092: 99.1391% ( 10) 00:09:17.464 8570.092 - 8620.505: 99.1640% ( 5) 00:09:17.464 8620.505 - 8670.917: 99.1789% ( 3) 00:09:17.464 8670.917 - 8721.329: 99.1939% ( 3) 00:09:17.464 8721.329 - 8771.742: 99.2038% ( 2) 00:09:17.464 8771.742 - 8822.154: 99.2188% ( 3) 00:09:17.464 8822.154 - 8872.566: 99.2337% ( 3) 00:09:17.464 8872.566 - 8922.978: 99.2486% ( 3) 00:09:17.464 8922.978 - 8973.391: 99.2635% ( 3) 00:09:17.464 8973.391 - 9023.803: 99.2785% ( 3) 00:09:17.464 9023.803 - 9074.215: 99.2934% ( 3) 00:09:17.464 9074.215 - 9124.628: 99.3083% ( 3) 00:09:17.464 9124.628 - 9175.040: 99.3232% ( 3) 00:09:17.464 9175.040 - 9225.452: 99.3382% ( 3) 00:09:17.464 9225.452 - 9275.865: 99.3531% ( 3) 00:09:17.464 9275.865 - 9326.277: 99.3631% ( 2) 00:09:17.464 20366.572 - 20467.397: 99.3680% ( 1) 00:09:17.464 21273.994 - 21374.818: 99.3879% ( 4) 00:09:17.464 21374.818 - 21475.643: 99.4228% ( 7) 00:09:17.464 21475.643 - 21576.468: 99.4576% ( 7) 00:09:17.464 21576.468 - 21677.292: 99.5123% ( 11) 00:09:17.464 21677.292 - 21778.117: 99.6168% ( 21) 00:09:17.464 21778.117 - 21878.942: 99.7164% ( 20) 00:09:17.464 21878.942 - 21979.766: 99.7711% ( 11) 00:09:17.464 21979.766 - 22080.591: 99.8010% ( 6) 00:09:17.464 22080.591 - 22181.415: 99.8159% ( 3) 00:09:17.464 22181.415 - 22282.240: 99.8308% ( 3) 00:09:17.464 22282.240 - 22383.065: 99.8457% ( 3) 00:09:17.464 22383.065 - 22483.889: 99.8557% ( 2) 00:09:17.464 22483.889 - 22584.714: 
99.8706% ( 3) 00:09:17.464 22584.714 - 22685.538: 99.8855% ( 3) 00:09:17.464 22685.538 - 22786.363: 99.9005% ( 3) 00:09:17.464 22786.363 - 22887.188: 99.9154% ( 3) 00:09:17.464 22887.188 - 22988.012: 99.9303% ( 3) 00:09:17.464 22988.012 - 23088.837: 99.9403% ( 2) 00:09:17.464 23088.837 - 23189.662: 99.9552% ( 3) 00:09:17.464 23189.662 - 23290.486: 99.9701% ( 3) 00:09:17.464 23290.486 - 23391.311: 99.9851% ( 3) 00:09:17.464 23391.311 - 23492.135: 100.0000% ( 3) 00:09:17.464 00:09:17.464 13:13:31 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:09:17.464 00:09:17.464 real 0m2.616s 00:09:17.464 user 0m2.315s 00:09:17.464 sys 0m0.185s 00:09:17.464 13:13:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:17.464 13:13:31 -- common/autotest_common.sh@10 -- # set +x 00:09:17.464 ************************************ 00:09:17.464 END TEST nvme_perf 00:09:17.464 ************************************ 00:09:17.464 13:13:31 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:17.464 13:13:31 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:17.464 13:13:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:17.464 13:13:31 -- common/autotest_common.sh@10 -- # set +x 00:09:17.464 ************************************ 00:09:17.464 START TEST nvme_hello_world 00:09:17.464 ************************************ 00:09:17.464 13:13:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:17.464 Initializing NVMe Controllers 00:09:17.464 Attached to 0000:00:09.0 00:09:17.464 Namespace ID: 1 size: 1GB 00:09:17.464 Attached to 0000:00:06.0 00:09:17.465 Namespace ID: 1 size: 6GB 00:09:17.465 Attached to 0000:00:07.0 00:09:17.465 Namespace ID: 1 size: 5GB 00:09:17.465 Attached to 0000:00:08.0 00:09:17.465 Namespace ID: 1 size: 4GB 00:09:17.465 Namespace ID: 2 size: 4GB 00:09:17.465 Namespace ID: 3 size: 4GB 00:09:17.465 Initialization complete. 00:09:17.465 INFO: using host memory buffer for IO 00:09:17.465 Hello world! 00:09:17.465 INFO: using host memory buffer for IO 00:09:17.465 Hello world! 00:09:17.465 INFO: using host memory buffer for IO 00:09:17.465 Hello world! 00:09:17.465 INFO: using host memory buffer for IO 00:09:17.465 Hello world! 00:09:17.465 INFO: using host memory buffer for IO 00:09:17.465 Hello world! 00:09:17.465 INFO: using host memory buffer for IO 00:09:17.465 Hello world! 
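Each latency table above pairs a bucket's upper bound in microseconds with the cumulative percentage of IOs completed at or below that latency (per-bucket counts in parentheses), so any percentile can be read off as the first bucket whose cumulative figure reaches the target. A minimal sketch in plain C, using a few values transcribed from the 0000:00:07.0 table above (the array is an abbreviated excerpt, not the full histogram):

    #include <stdio.h>

    struct bucket {
        double range_us;   /* upper bound of the bucket, in microseconds */
        double cum_pct;    /* cumulative % of IOs at or below range_us   */
    };

    /* Abbreviated excerpt of a "Range in us / Cumulative IO count" table. */
    static const struct bucket table[] = {
        { 7007.311, 94.6656 },
        { 7057.723, 95.4817 },
        { 7108.135, 96.1385 },
        { 7158.548, 96.5516 },
    };

    /* First bucket whose cumulative percentage reaches pct. */
    static double percentile_us(const struct bucket *t, int n, double pct)
    {
        for (int i = 0; i < n; i++)
            if (t[i].cum_pct >= pct)
                return t[i].range_us;
        return t[n - 1].range_us;
    }

    int main(void)
    {
        printf("p95 <= %.3f us\n", percentile_us(table, 4, 95.0));
        return 0;
    }

On that excerpt the p95 upper bound comes out as 7057.723 us, matching the bucket where the cumulative column first crosses 95%.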
00:09:17.465 00:09:17.465 real 0m0.257s 00:09:17.465 user 0m0.119s 00:09:17.465 sys 0m0.092s 00:09:17.465 13:13:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:17.465 13:13:31 -- common/autotest_common.sh@10 -- # set +x 00:09:17.465 ************************************ 00:09:17.465 END TEST nvme_hello_world 00:09:17.465 ************************************ 00:09:17.465 13:13:31 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:17.465 13:13:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:17.465 13:13:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:17.465 13:13:31 -- common/autotest_common.sh@10 -- # set +x 00:09:17.465 ************************************ 00:09:17.465 START TEST nvme_sgl 00:09:17.465 ************************************ 00:09:17.465 13:13:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:17.723 0000:00:09.0: build_io_request_0 Invalid IO length parameter 00:09:17.723 0000:00:09.0: build_io_request_1 Invalid IO length parameter 00:09:17.723 0000:00:09.0: build_io_request_2 Invalid IO length parameter 00:09:17.723 0000:00:09.0: build_io_request_3 Invalid IO length parameter 00:09:17.723 0000:00:09.0: build_io_request_4 Invalid IO length parameter 00:09:17.723 0000:00:09.0: build_io_request_5 Invalid IO length parameter 00:09:17.723 0000:00:09.0: build_io_request_6 Invalid IO length parameter 00:09:17.723 0000:00:09.0: build_io_request_7 Invalid IO length parameter 00:09:17.723 0000:00:09.0: build_io_request_8 Invalid IO length parameter 00:09:17.723 0000:00:09.0: build_io_request_9 Invalid IO length parameter 00:09:17.723 0000:00:09.0: build_io_request_10 Invalid IO length parameter 00:09:17.723 0000:00:09.0: build_io_request_11 Invalid IO length parameter 00:09:17.723 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:09:17.723 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:09:17.723 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:09:17.723 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:09:17.723 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:09:17.723 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:09:17.723 0000:00:07.0: build_io_request_0 Invalid IO length parameter 00:09:17.723 0000:00:07.0: build_io_request_1 Invalid IO length parameter 00:09:17.723 0000:00:07.0: build_io_request_3 Invalid IO length parameter 00:09:17.981 0000:00:07.0: build_io_request_8 Invalid IO length parameter 00:09:17.981 0000:00:07.0: build_io_request_9 Invalid IO length parameter 00:09:17.981 0000:00:07.0: build_io_request_11 Invalid IO length parameter 00:09:17.981 0000:00:08.0: build_io_request_0 Invalid IO length parameter 00:09:17.981 0000:00:08.0: build_io_request_1 Invalid IO length parameter 00:09:17.981 0000:00:08.0: build_io_request_2 Invalid IO length parameter 00:09:17.981 0000:00:08.0: build_io_request_3 Invalid IO length parameter 00:09:17.981 0000:00:08.0: build_io_request_4 Invalid IO length parameter 00:09:17.981 0000:00:08.0: build_io_request_5 Invalid IO length parameter 00:09:17.981 0000:00:08.0: build_io_request_6 Invalid IO length parameter 00:09:17.981 0000:00:08.0: build_io_request_7 Invalid IO length parameter 00:09:17.981 0000:00:08.0: build_io_request_8 Invalid IO length parameter 00:09:17.981 0000:00:08.0: build_io_request_9 Invalid IO length parameter 00:09:17.981 0000:00:08.0: build_io_request_10 Invalid IO length parameter 00:09:17.981 
0000:00:08.0: build_io_request_11 Invalid IO length parameter 00:09:17.981 NVMe Readv/Writev Request test 00:09:17.981 Attached to 0000:00:09.0 00:09:17.981 Attached to 0000:00:06.0 00:09:17.981 Attached to 0000:00:07.0 00:09:17.981 Attached to 0000:00:08.0 00:09:17.981 0000:00:06.0: build_io_request_2 test passed 00:09:17.982 0000:00:06.0: build_io_request_4 test passed 00:09:17.982 0000:00:06.0: build_io_request_5 test passed 00:09:17.982 0000:00:06.0: build_io_request_6 test passed 00:09:17.982 0000:00:06.0: build_io_request_7 test passed 00:09:17.982 0000:00:06.0: build_io_request_10 test passed 00:09:17.982 0000:00:07.0: build_io_request_2 test passed 00:09:17.982 0000:00:07.0: build_io_request_4 test passed 00:09:17.982 0000:00:07.0: build_io_request_5 test passed 00:09:17.982 0000:00:07.0: build_io_request_6 test passed 00:09:17.982 0000:00:07.0: build_io_request_7 test passed 00:09:17.982 0000:00:07.0: build_io_request_10 test passed 00:09:17.982 Cleaning up... 00:09:17.982 ************************************ 00:09:17.982 END TEST nvme_sgl 00:09:17.982 ************************************ 00:09:17.982 00:09:17.982 real 0m0.360s 00:09:17.982 user 0m0.227s 00:09:17.982 sys 0m0.094s 00:09:17.982 13:13:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:17.982 13:13:32 -- common/autotest_common.sh@10 -- # set +x 00:09:17.982 13:13:32 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:17.982 13:13:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:17.982 13:13:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:17.982 13:13:32 -- common/autotest_common.sh@10 -- # set +x 00:09:17.982 ************************************ 00:09:17.982 START TEST nvme_e2edp 00:09:17.982 ************************************ 00:09:17.982 13:13:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:18.240 NVMe Write/Read with End-to-End data protection test 00:09:18.240 Attached to 0000:00:09.0 00:09:18.240 Attached to 0000:00:06.0 00:09:18.240 Attached to 0000:00:07.0 00:09:18.240 Attached to 0000:00:08.0 00:09:18.240 Cleaning up... 
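The sgl pass above is half negative testing: each build_io_request_* case constructs a vectored request, the malformed ones are rejected up front with "Invalid IO length parameter", and the well-formed ones complete and report "test passed". Vectored submissions in SPDK go through two user callbacks that walk the scatter-gather element list. A minimal sketch, assuming the spdk_nvme_ns_cmd_writev callback shape from SPDK's public nvme.h; the two-segment context struct here is hypothetical:

    #include <stdbool.h>
    #include <stdint.h>
    #include "spdk/nvme.h"

    struct two_sge {
        void     *addr[2];
        uint32_t  len[2];
        int       cur;       /* segment cursor, advanced by next_sge() */
    };

    /* Called before (re)walking the list; offset is a byte offset into the payload. */
    static void reset_sgl(void *arg, uint32_t offset)
    {
        struct two_sge *s = arg;
        s->cur = (offset < s->len[0]) ? 0 : 1;
    }

    /* Hand back one segment per call; return 0 on success. */
    static int next_sge(void *arg, void **address, uint32_t *length)
    {
        struct two_sge *s = arg;
        *address = s->addr[s->cur];
        *length  = s->len[s->cur];
        s->cur++;
        return 0;
    }

    static void write_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        *(bool *)arg = true;
    }

    /* A request whose summed SGE lengths cannot describe lba_count whole
     * blocks is expected to fail validation -- the rejection lines above. */
    static int submit_two_sge_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
                                    struct two_sge *s, uint64_t lba, uint32_t lba_count,
                                    bool *done)
    {
        return spdk_nvme_ns_cmd_writev(ns, qp, lba, lba_count,
                                       write_done, done, 0, reset_sgl, next_sge);
    }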
00:09:18.240 00:09:18.240 real 0m0.206s 00:09:18.240 user 0m0.051s 00:09:18.240 sys 0m0.105s 00:09:18.240 ************************************ 00:09:18.240 END TEST nvme_e2edp 00:09:18.240 ************************************ 00:09:18.240 13:13:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:18.240 13:13:32 -- common/autotest_common.sh@10 -- # set +x 00:09:18.240 13:13:32 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:18.240 13:13:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:18.240 13:13:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:18.240 13:13:32 -- common/autotest_common.sh@10 -- # set +x 00:09:18.240 ************************************ 00:09:18.240 START TEST nvme_reserve 00:09:18.240 ************************************ 00:09:18.240 13:13:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:18.498 ===================================================== 00:09:18.498 NVMe Controller at PCI bus 0, device 9, function 0 00:09:18.498 ===================================================== 00:09:18.498 Reservations: Not Supported 00:09:18.498 ===================================================== 00:09:18.498 NVMe Controller at PCI bus 0, device 6, function 0 00:09:18.498 ===================================================== 00:09:18.498 Reservations: Not Supported 00:09:18.498 ===================================================== 00:09:18.498 NVMe Controller at PCI bus 0, device 7, function 0 00:09:18.498 ===================================================== 00:09:18.498 Reservations: Not Supported 00:09:18.498 ===================================================== 00:09:18.498 NVMe Controller at PCI bus 0, device 8, function 0 00:09:18.498 ===================================================== 00:09:18.498 Reservations: Not Supported 00:09:18.498 Reservation test passed 00:09:18.498 00:09:18.498 real 0m0.194s 00:09:18.498 user 0m0.064s 00:09:18.498 sys 0m0.085s 00:09:18.498 ************************************ 00:09:18.498 END TEST nvme_reserve 00:09:18.498 ************************************ 00:09:18.498 13:13:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:18.498 13:13:32 -- common/autotest_common.sh@10 -- # set +x 00:09:18.498 13:13:32 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:18.498 13:13:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:18.498 13:13:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:18.498 13:13:32 -- common/autotest_common.sh@10 -- # set +x 00:09:18.498 ************************************ 00:09:18.498 START TEST nvme_err_injection 00:09:18.498 ************************************ 00:09:18.498 13:13:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:18.756 NVMe Error Injection test 00:09:18.756 Attached to 0000:00:09.0 00:09:18.756 Attached to 0000:00:06.0 00:09:18.756 Attached to 0000:00:07.0 00:09:18.756 Attached to 0000:00:08.0 00:09:18.756 0000:00:07.0: get features failed as expected 00:09:18.756 0000:00:08.0: get features failed as expected 00:09:18.756 0000:00:09.0: get features failed as expected 00:09:18.756 0000:00:06.0: get features failed as expected 00:09:18.756 0000:00:09.0: get features successfully as expected 00:09:18.756 0000:00:06.0: get features successfully as expected 00:09:18.756 0000:00:07.0: get features 
successfully as expected 00:09:18.756 0000:00:08.0: get features successfully as expected 00:09:18.756 0000:00:09.0: read failed as expected 00:09:18.756 0000:00:06.0: read failed as expected 00:09:18.756 0000:00:07.0: read failed as expected 00:09:18.756 0000:00:08.0: read failed as expected 00:09:18.756 0000:00:09.0: read successfully as expected 00:09:18.756 0000:00:06.0: read successfully as expected 00:09:18.756 0000:00:07.0: read successfully as expected 00:09:18.756 0000:00:08.0: read successfully as expected 00:09:18.756 Cleaning up... 00:09:18.756 00:09:18.756 real 0m0.245s 00:09:18.756 user 0m0.110s 00:09:18.756 sys 0m0.091s 00:09:18.756 13:13:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:18.756 ************************************ 00:09:18.756 END TEST nvme_err_injection 00:09:18.756 ************************************ 00:09:18.756 13:13:33 -- common/autotest_common.sh@10 -- # set +x 00:09:18.756 13:13:33 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:18.756 13:13:33 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:18.756 13:13:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:18.756 13:13:33 -- common/autotest_common.sh@10 -- # set +x 00:09:18.756 ************************************ 00:09:18.756 START TEST nvme_overhead 00:09:18.756 ************************************ 00:09:18.756 13:13:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:20.128 Initializing NVMe Controllers 00:09:20.128 Attached to 0000:00:09.0 00:09:20.128 Attached to 0000:00:06.0 00:09:20.128 Attached to 0000:00:07.0 00:09:20.128 Attached to 0000:00:08.0 00:09:20.128 Initialization complete. Launching workers. 
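The overhead run that follows times two things per IO: how long the submit call itself takes, and how long the completion poll that reaps the IO takes; the avg/min/max lines and the two histograms below are in nanoseconds. A minimal sketch of that measurement pattern, assuming spdk_nvme_ns_cmd_read and spdk_nvme_qpair_process_completions from SPDK's public API (the real tool measures in TSC ticks rather than via clock_gettime):

    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>
    #include "spdk/nvme.h"

    static uint64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
    }

    static void read_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        *(bool *)arg = true;
    }

    /* Time one single-block read: the submit cost, then the cost of the
     * specific poll call that reaped the completion. */
    static void time_one_io(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
                            void *buf, uint64_t lba,
                            uint64_t *submit_ns, uint64_t *complete_ns)
    {
        bool done = false;

        uint64_t t0 = now_ns();
        spdk_nvme_ns_cmd_read(ns, qp, buf, lba, 1, read_done, &done, 0);
        *submit_ns = now_ns() - t0;

        while (!done) {
            t0 = now_ns();
            spdk_nvme_qpair_process_completions(qp, 0);
            if (done)                      /* this poll reaped the IO */
                *complete_ns = now_ns() - t0;
        }
    }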
00:09:20.128 submit (in ns) avg, min, max = 11185.5, 9946.2, 331288.5 00:09:20.128 complete (in ns) avg, min, max = 7577.4, 7219.2, 300270.8 00:09:20.128 00:09:20.128 Submit histogram 00:09:20.128 ================ 00:09:20.128 Range in us Cumulative Count 00:09:20.128 9.945 - 9.994: 0.0054% ( 1) 00:09:20.128 10.191 - 10.240: 0.0107% ( 1) 00:09:20.128 10.732 - 10.782: 0.0161% ( 1) 00:09:20.128 10.782 - 10.831: 0.3807% ( 68) 00:09:20.128 10.831 - 10.880: 4.1390% ( 701) 00:09:20.128 10.880 - 10.929: 17.8962% ( 2566) 00:09:20.128 10.929 - 10.978: 40.5908% ( 4233) 00:09:20.128 10.978 - 11.028: 64.2827% ( 4419) 00:09:20.128 11.028 - 11.077: 80.2059% ( 2970) 00:09:20.128 11.077 - 11.126: 87.6689% ( 1392) 00:09:20.128 11.126 - 11.175: 90.9232% ( 607) 00:09:20.128 11.175 - 11.225: 92.0974% ( 219) 00:09:20.128 11.225 - 11.274: 92.6817% ( 109) 00:09:20.128 11.274 - 11.323: 92.9552% ( 51) 00:09:20.128 11.323 - 11.372: 93.2769% ( 60) 00:09:20.128 11.372 - 11.422: 93.4860% ( 39) 00:09:20.128 11.422 - 11.471: 93.7540% ( 50) 00:09:20.128 11.471 - 11.520: 93.9256% ( 32) 00:09:20.128 11.520 - 11.569: 94.1079% ( 34) 00:09:20.128 11.569 - 11.618: 94.2741% ( 31) 00:09:20.128 11.618 - 11.668: 94.4242% ( 28) 00:09:20.128 11.668 - 11.717: 94.5368% ( 21) 00:09:20.128 11.717 - 11.766: 94.5850% ( 9) 00:09:20.128 11.766 - 11.815: 94.6762% ( 17) 00:09:20.128 11.815 - 11.865: 94.7727% ( 18) 00:09:20.128 11.865 - 11.914: 94.9710% ( 37) 00:09:20.128 11.914 - 11.963: 95.3410% ( 69) 00:09:20.128 11.963 - 12.012: 95.7645% ( 79) 00:09:20.128 12.012 - 12.062: 96.2524% ( 91) 00:09:20.128 12.062 - 12.111: 96.5955% ( 64) 00:09:20.128 12.111 - 12.160: 96.9279% ( 62) 00:09:20.128 12.160 - 12.209: 97.1478% ( 41) 00:09:20.128 12.209 - 12.258: 97.2711% ( 23) 00:09:20.128 12.258 - 12.308: 97.3408% ( 13) 00:09:20.128 12.308 - 12.357: 97.4105% ( 13) 00:09:20.128 12.357 - 12.406: 97.4373% ( 5) 00:09:20.128 12.406 - 12.455: 97.4426% ( 1) 00:09:20.128 12.455 - 12.505: 97.4480% ( 1) 00:09:20.128 12.505 - 12.554: 97.4534% ( 1) 00:09:20.128 12.554 - 12.603: 97.4694% ( 3) 00:09:20.128 12.603 - 12.702: 97.4802% ( 2) 00:09:20.128 12.702 - 12.800: 97.5016% ( 4) 00:09:20.128 12.800 - 12.898: 97.6142% ( 21) 00:09:20.128 12.898 - 12.997: 97.6732% ( 11) 00:09:20.128 12.997 - 13.095: 97.7536% ( 15) 00:09:20.128 13.095 - 13.194: 97.8769% ( 23) 00:09:20.128 13.194 - 13.292: 98.0002% ( 23) 00:09:20.128 13.292 - 13.391: 98.1021% ( 19) 00:09:20.128 13.391 - 13.489: 98.1664% ( 12) 00:09:20.128 13.489 - 13.588: 98.2415% ( 14) 00:09:20.128 13.588 - 13.686: 98.2790% ( 7) 00:09:20.128 13.686 - 13.785: 98.3005% ( 4) 00:09:20.128 13.785 - 13.883: 98.3112% ( 2) 00:09:20.128 13.883 - 13.982: 98.3273% ( 3) 00:09:20.128 13.982 - 14.080: 98.3380% ( 2) 00:09:20.128 14.080 - 14.178: 98.3487% ( 2) 00:09:20.128 14.178 - 14.277: 98.3701% ( 4) 00:09:20.128 14.277 - 14.375: 98.3970% ( 5) 00:09:20.128 14.375 - 14.474: 98.4130% ( 3) 00:09:20.128 14.474 - 14.572: 98.4291% ( 3) 00:09:20.128 14.572 - 14.671: 98.4667% ( 7) 00:09:20.128 14.671 - 14.769: 98.4827% ( 3) 00:09:20.128 14.769 - 14.868: 98.5149% ( 6) 00:09:20.128 14.868 - 14.966: 98.5310% ( 3) 00:09:20.128 14.966 - 15.065: 98.5685% ( 7) 00:09:20.128 15.065 - 15.163: 98.5846% ( 3) 00:09:20.128 15.163 - 15.262: 98.6168% ( 6) 00:09:20.128 15.262 - 15.360: 98.6543% ( 7) 00:09:20.128 15.360 - 15.458: 98.6597% ( 1) 00:09:20.128 15.557 - 15.655: 98.6704% ( 2) 00:09:20.128 15.655 - 15.754: 98.6865% ( 3) 00:09:20.128 15.754 - 15.852: 98.6972% ( 2) 00:09:20.128 15.852 - 15.951: 98.7133% ( 3) 00:09:20.128 16.049 - 16.148: 98.7294% ( 
3) 00:09:20.128 16.148 - 16.246: 98.7347% ( 1) 00:09:20.128 16.246 - 16.345: 98.7454% ( 2) 00:09:20.128 16.345 - 16.443: 98.7562% ( 2) 00:09:20.128 16.443 - 16.542: 98.8044% ( 9) 00:09:20.128 16.542 - 16.640: 98.8634% ( 11) 00:09:20.128 16.640 - 16.738: 98.9385% ( 14) 00:09:20.128 16.738 - 16.837: 99.0189% ( 15) 00:09:20.128 16.837 - 16.935: 99.0778% ( 11) 00:09:20.128 16.935 - 17.034: 99.1100% ( 6) 00:09:20.128 17.034 - 17.132: 99.1851% ( 14) 00:09:20.128 17.132 - 17.231: 99.2709% ( 16) 00:09:20.128 17.231 - 17.329: 99.3352% ( 12) 00:09:20.128 17.329 - 17.428: 99.4103% ( 14) 00:09:20.128 17.428 - 17.526: 99.4799% ( 13) 00:09:20.128 17.526 - 17.625: 99.5175% ( 7) 00:09:20.128 17.625 - 17.723: 99.5818% ( 12) 00:09:20.128 17.723 - 17.822: 99.6247% ( 8) 00:09:20.128 17.822 - 17.920: 99.6408% ( 3) 00:09:20.128 17.920 - 18.018: 99.6622% ( 4) 00:09:20.128 18.018 - 18.117: 99.6944% ( 6) 00:09:20.128 18.117 - 18.215: 99.7212% ( 5) 00:09:20.128 18.215 - 18.314: 99.7373% ( 3) 00:09:20.128 18.314 - 18.412: 99.7587% ( 4) 00:09:20.128 18.412 - 18.511: 99.7695% ( 2) 00:09:20.128 18.609 - 18.708: 99.7855% ( 3) 00:09:20.128 18.708 - 18.806: 99.7963% ( 2) 00:09:20.128 18.806 - 18.905: 99.8016% ( 1) 00:09:20.128 18.905 - 19.003: 99.8231% ( 4) 00:09:20.128 19.003 - 19.102: 99.8392% ( 3) 00:09:20.128 19.200 - 19.298: 99.8552% ( 3) 00:09:20.128 19.791 - 19.889: 99.8606% ( 1) 00:09:20.128 19.889 - 19.988: 99.8660% ( 1) 00:09:20.128 20.185 - 20.283: 99.8713% ( 1) 00:09:20.128 20.382 - 20.480: 99.8767% ( 1) 00:09:20.128 20.677 - 20.775: 99.8821% ( 1) 00:09:20.128 20.775 - 20.874: 99.8874% ( 1) 00:09:20.128 20.972 - 21.071: 99.8928% ( 1) 00:09:20.128 21.071 - 21.169: 99.8981% ( 1) 00:09:20.128 21.465 - 21.563: 99.9035% ( 1) 00:09:20.128 21.662 - 21.760: 99.9089% ( 1) 00:09:20.128 21.858 - 21.957: 99.9142% ( 1) 00:09:20.128 24.025 - 24.123: 99.9196% ( 1) 00:09:20.128 24.123 - 24.222: 99.9249% ( 1) 00:09:20.128 25.206 - 25.403: 99.9303% ( 1) 00:09:20.128 25.994 - 26.191: 99.9357% ( 1) 00:09:20.128 26.585 - 26.782: 99.9410% ( 1) 00:09:20.128 27.766 - 27.963: 99.9464% ( 1) 00:09:20.128 29.735 - 29.932: 99.9517% ( 1) 00:09:20.128 30.523 - 30.720: 99.9571% ( 1) 00:09:20.128 30.917 - 31.114: 99.9625% ( 1) 00:09:20.128 35.840 - 36.037: 99.9678% ( 1) 00:09:20.128 36.628 - 36.825: 99.9732% ( 1) 00:09:20.128 45.292 - 45.489: 99.9786% ( 1) 00:09:20.128 48.837 - 49.034: 99.9839% ( 1) 00:09:20.128 61.440 - 61.834: 99.9893% ( 1) 00:09:20.128 85.071 - 85.465: 99.9946% ( 1) 00:09:20.128 330.831 - 332.406: 100.0000% ( 1) 00:09:20.128 00:09:20.128 Complete histogram 00:09:20.128 ================== 00:09:20.128 Range in us Cumulative Count 00:09:20.128 7.188 - 7.237: 0.0161% ( 3) 00:09:20.128 7.237 - 7.286: 1.3886% ( 256) 00:09:20.128 7.286 - 7.335: 11.3446% ( 1857) 00:09:20.128 7.335 - 7.385: 35.2080% ( 4451) 00:09:20.128 7.385 - 7.434: 61.4733% ( 4899) 00:09:20.128 7.434 - 7.483: 80.1415% ( 3482) 00:09:20.128 7.483 - 7.532: 88.4516% ( 1550) 00:09:20.128 7.532 - 7.582: 92.4512% ( 746) 00:09:20.128 7.582 - 7.631: 94.3545% ( 355) 00:09:20.128 7.631 - 7.680: 95.3356% ( 183) 00:09:20.128 7.680 - 7.729: 95.9200% ( 109) 00:09:20.128 7.729 - 7.778: 96.2524% ( 62) 00:09:20.128 7.778 - 7.828: 96.3489% ( 18) 00:09:20.128 7.828 - 7.877: 96.4133% ( 12) 00:09:20.128 7.877 - 7.926: 96.4508% ( 7) 00:09:20.128 7.926 - 7.975: 96.4776% ( 5) 00:09:20.128 7.975 - 8.025: 96.5258% ( 9) 00:09:20.128 8.025 - 8.074: 96.6545% ( 24) 00:09:20.128 8.074 - 8.123: 96.8100% ( 29) 00:09:20.128 8.123 - 8.172: 97.0888% ( 52) 00:09:20.128 8.172 - 8.222: 97.4265% ( 63) 
00:09:20.128 8.222 - 8.271: 97.7590% ( 62) 00:09:20.128 8.271 - 8.320: 97.9305% ( 32) 00:09:20.128 8.320 - 8.369: 98.0324% ( 19) 00:09:20.128 8.369 - 8.418: 98.0646% ( 6) 00:09:20.128 8.418 - 8.468: 98.0753% ( 2) 00:09:20.128 8.468 - 8.517: 98.0860% ( 2) 00:09:20.128 8.517 - 8.566: 98.1021% ( 3) 00:09:20.128 8.566 - 8.615: 98.1128% ( 2) 00:09:20.128 8.615 - 8.665: 98.1182% ( 1) 00:09:20.128 8.714 - 8.763: 98.1235% ( 1) 00:09:20.128 8.763 - 8.812: 98.1289% ( 1) 00:09:20.128 8.960 - 9.009: 98.1342% ( 1) 00:09:20.128 9.157 - 9.206: 98.1396% ( 1) 00:09:20.128 9.206 - 9.255: 98.1450% ( 1) 00:09:20.128 9.255 - 9.305: 98.1557% ( 2) 00:09:20.128 9.305 - 9.354: 98.1664% ( 2) 00:09:20.128 9.354 - 9.403: 98.1718% ( 1) 00:09:20.128 9.502 - 9.551: 98.1771% ( 1) 00:09:20.128 9.551 - 9.600: 98.1825% ( 1) 00:09:20.128 9.797 - 9.846: 98.1879% ( 1) 00:09:20.128 9.895 - 9.945: 98.1932% ( 1) 00:09:20.128 9.994 - 10.043: 98.1986% ( 1) 00:09:20.128 10.092 - 10.142: 98.2200% ( 4) 00:09:20.128 10.142 - 10.191: 98.2308% ( 2) 00:09:20.128 10.240 - 10.289: 98.2361% ( 1) 00:09:20.128 10.289 - 10.338: 98.2522% ( 3) 00:09:20.128 10.388 - 10.437: 98.2576% ( 1) 00:09:20.128 10.437 - 10.486: 98.2683% ( 2) 00:09:20.128 10.486 - 10.535: 98.2736% ( 1) 00:09:20.128 10.585 - 10.634: 98.2897% ( 3) 00:09:20.128 10.683 - 10.732: 98.3058% ( 3) 00:09:20.128 10.732 - 10.782: 98.3112% ( 1) 00:09:20.128 10.831 - 10.880: 98.3273% ( 3) 00:09:20.128 10.929 - 10.978: 98.3326% ( 1) 00:09:20.128 10.978 - 11.028: 98.3433% ( 2) 00:09:20.128 11.372 - 11.422: 98.3487% ( 1) 00:09:20.128 11.520 - 11.569: 98.3541% ( 1) 00:09:20.128 11.668 - 11.717: 98.3594% ( 1) 00:09:20.128 12.111 - 12.160: 98.3648% ( 1) 00:09:20.128 12.258 - 12.308: 98.3755% ( 2) 00:09:20.128 12.554 - 12.603: 98.3809% ( 1) 00:09:20.128 12.702 - 12.800: 98.3862% ( 1) 00:09:20.128 12.800 - 12.898: 98.4238% ( 7) 00:09:20.128 12.898 - 12.997: 98.4720% ( 9) 00:09:20.128 12.997 - 13.095: 98.5095% ( 7) 00:09:20.128 13.095 - 13.194: 98.5846% ( 14) 00:09:20.128 13.194 - 13.292: 98.6382% ( 10) 00:09:20.128 13.292 - 13.391: 98.7240% ( 16) 00:09:20.128 13.391 - 13.489: 98.7883% ( 12) 00:09:20.128 13.489 - 13.588: 98.8688% ( 15) 00:09:20.128 13.588 - 13.686: 99.0350% ( 31) 00:09:20.128 13.686 - 13.785: 99.1529% ( 22) 00:09:20.128 13.785 - 13.883: 99.2333% ( 15) 00:09:20.128 13.883 - 13.982: 99.3084% ( 14) 00:09:20.128 13.982 - 14.080: 99.4103% ( 19) 00:09:20.128 14.080 - 14.178: 99.5014% ( 17) 00:09:20.128 14.178 - 14.277: 99.5765% ( 14) 00:09:20.128 14.277 - 14.375: 99.6301% ( 10) 00:09:20.128 14.375 - 14.474: 99.6569% ( 5) 00:09:20.128 14.474 - 14.572: 99.6998% ( 8) 00:09:20.128 14.572 - 14.671: 99.7427% ( 8) 00:09:20.128 14.671 - 14.769: 99.7802% ( 7) 00:09:20.128 14.769 - 14.868: 99.7855% ( 1) 00:09:20.128 14.868 - 14.966: 99.8124% ( 5) 00:09:20.128 14.966 - 15.065: 99.8284% ( 3) 00:09:20.128 15.065 - 15.163: 99.8392% ( 2) 00:09:20.128 15.163 - 15.262: 99.8445% ( 1) 00:09:20.128 15.262 - 15.360: 99.8499% ( 1) 00:09:20.128 15.458 - 15.557: 99.8552% ( 1) 00:09:20.128 15.557 - 15.655: 99.8606% ( 1) 00:09:20.128 15.754 - 15.852: 99.8660% ( 1) 00:09:20.128 15.852 - 15.951: 99.8713% ( 1) 00:09:20.128 16.443 - 16.542: 99.8767% ( 1) 00:09:20.128 16.542 - 16.640: 99.8821% ( 1) 00:09:20.128 16.935 - 17.034: 99.8874% ( 1) 00:09:20.128 17.034 - 17.132: 99.8928% ( 1) 00:09:20.128 17.132 - 17.231: 99.9035% ( 2) 00:09:20.128 17.329 - 17.428: 99.9089% ( 1) 00:09:20.128 17.526 - 17.625: 99.9142% ( 1) 00:09:20.128 18.314 - 18.412: 99.9196% ( 1) 00:09:20.128 18.806 - 18.905: 99.9249% ( 1) 00:09:20.128 
19.298 - 19.397: 99.9357% ( 2) 00:09:20.128 19.495 - 19.594: 99.9410% ( 1) 00:09:20.128 19.889 - 19.988: 99.9464% ( 1) 00:09:20.128 20.677 - 20.775: 99.9517% ( 1) 00:09:20.128 25.403 - 25.600: 99.9571% ( 1) 00:09:20.128 45.489 - 45.686: 99.9625% ( 1) 00:09:20.128 46.474 - 46.671: 99.9678% ( 1) 00:09:20.128 47.852 - 48.049: 99.9732% ( 1) 00:09:20.128 49.822 - 50.018: 99.9786% ( 1) 00:09:20.128 53.169 - 53.563: 99.9839% ( 1) 00:09:20.128 55.532 - 55.926: 99.9893% ( 1) 00:09:20.128 63.803 - 64.197: 99.9946% ( 1) 00:09:20.128 299.323 - 300.898: 100.0000% ( 1) 00:09:20.128 00:09:20.128 00:09:20.128 real 0m1.204s 00:09:20.128 user 0m1.066s 00:09:20.128 sys 0m0.095s 00:09:20.128 ************************************ 00:09:20.128 END TEST nvme_overhead 00:09:20.128 ************************************ 00:09:20.128 13:13:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:20.128 13:13:34 -- common/autotest_common.sh@10 -- # set +x 00:09:20.128 13:13:34 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:20.128 13:13:34 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:20.128 13:13:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:20.128 13:13:34 -- common/autotest_common.sh@10 -- # set +x 00:09:20.128 ************************************ 00:09:20.128 START TEST nvme_arbitration 00:09:20.128 ************************************ 00:09:20.128 13:13:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:23.433 Initializing NVMe Controllers 00:09:23.433 Attached to 0000:00:09.0 00:09:23.433 Attached to 0000:00:06.0 00:09:23.433 Attached to 0000:00:07.0 00:09:23.433 Attached to 0000:00:08.0 00:09:23.433 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:09:23.433 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:09:23.433 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:09:23.433 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:23.433 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:23.433 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:23.433 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:23.433 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:23.433 Initialization complete. Launching workers. 
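The arbitration run below exercises queue priorities: each worker thread submits through an IO qpair created in an explicit priority class, which only has an effect when the controller was brought up with weighted round robin arbitration (selected in the probe-time controller options, which this sketch takes as given). A minimal sketch of allocating an urgent-priority qpair, assuming the qprio field on spdk_nvme_io_qpair_opts:

    #include "spdk/nvme.h"

    /* Allocate an IO queue pair in the URGENT priority class.  Meaningful
     * only if the controller was initialized with WRR arbitration. */
    static struct spdk_nvme_qpair *
    alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_io_qpair_opts opts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
        opts.qprio = SPDK_NVME_QPRIO_URGENT;
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }

With round robin in effect (as in the identical 874.67 IO/s figures reported per core below), the priority class makes no throughput difference; under WRR the urgent queue would be serviced ahead of the others.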
00:09:23.433 Starting thread on core 1 with urgent priority queue 00:09:23.433 Starting thread on core 2 with urgent priority queue 00:09:23.433 Starting thread on core 3 with urgent priority queue 00:09:23.433 Starting thread on core 0 with urgent priority queue 00:09:23.433 QEMU NVMe Ctrl (12343 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:09:23.433 QEMU NVMe Ctrl (12342 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:09:23.433 QEMU NVMe Ctrl (12340 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:09:23.433 QEMU NVMe Ctrl (12342 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:09:23.433 QEMU NVMe Ctrl (12341 ) core 2: 874.67 IO/s 114.33 secs/100000 ios 00:09:23.433 QEMU NVMe Ctrl (12342 ) core 3: 874.67 IO/s 114.33 secs/100000 ios 00:09:23.433 ======================================================== 00:09:23.433 00:09:23.433 00:09:23.433 real 0m3.384s 00:09:23.433 user 0m9.370s 00:09:23.433 sys 0m0.120s 00:09:23.433 13:13:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:23.433 13:13:37 -- common/autotest_common.sh@10 -- # set +x 00:09:23.433 ************************************ 00:09:23.433 END TEST nvme_arbitration 00:09:23.433 ************************************ 00:09:23.433 13:13:37 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:23.433 13:13:37 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:23.433 13:13:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:23.433 13:13:37 -- common/autotest_common.sh@10 -- # set +x 00:09:23.433 ************************************ 00:09:23.433 START TEST nvme_single_aen 00:09:23.433 ************************************ 00:09:23.433 13:13:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:23.433 [2024-12-16 13:13:37.863782] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:23.433 [2024-12-16 13:13:37.864059] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.695 [2024-12-16 13:13:38.007263] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:23.695 [2024-12-16 13:13:38.009567] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:23.695 [2024-12-16 13:13:38.011623] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:23.695 [2024-12-16 13:13:38.013128] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:23.695 Asynchronous Event Request test 00:09:23.695 Attached to 0000:00:09.0 00:09:23.695 Attached to 0000:00:06.0 00:09:23.695 Attached to 0000:00:07.0 00:09:23.695 Attached to 0000:00:08.0 00:09:23.695 Reset controller to setup AER completions for this process 00:09:23.695 Registering asynchronous event callbacks... 
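The AER test that follows works by registering an asynchronous-event callback, lowering each controller's temperature threshold below the current temperature so the device is forced to fire the event, and polling the admin queue until the callback runs. A minimal sketch, assuming the callback-registration and set-feature helpers from SPDK's public nvme.h; per the NVMe spec, cdw11 of the temperature-threshold feature carries the threshold in Kelvin:

    #include <stdbool.h>
    #include "spdk/nvme.h"

    static bool aer_fired;

    static void on_aer(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        /* A real handler would decode cpl->cdw0 and fetch log page 2. */
        aer_fired = true;
    }

    static void feature_set(void *arg, const struct spdk_nvme_cpl *cpl) { }

    static void trigger_temperature_aer(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, on_aer, NULL);

        /* Threshold well below ambient (200 K) so the controller must fire. */
        spdk_nvme_ctrlr_cmd_set_feature(ctrlr, SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                                        200, 0, NULL, 0, feature_set, NULL);

        while (!aer_fired)
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }

That is exactly the shape of the log lines that follow: original thresholds of 343 K read back, thresholds lowered, then one aer_cb per controller resetting the threshold once the event arrives.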
00:09:23.695 Getting orig temperature thresholds of all controllers 00:09:23.695 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:23.695 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:23.695 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:23.695 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:23.695 Setting all controllers temperature threshold low to trigger AER 00:09:23.695 Waiting for all controllers temperature threshold to be set lower 00:09:23.695 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:23.695 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:09:23.695 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:23.695 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:09:23.695 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:23.695 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:09:23.695 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:23.695 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:09:23.695 Waiting for all controllers to trigger AER and reset threshold 00:09:23.695 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.695 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.695 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.695 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.695 Cleaning up... 00:09:23.695 00:09:23.695 real 0m0.212s 00:09:23.695 user 0m0.066s 00:09:23.695 sys 0m0.094s 00:09:23.695 13:13:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:23.695 13:13:38 -- common/autotest_common.sh@10 -- # set +x 00:09:23.695 ************************************ 00:09:23.695 END TEST nvme_single_aen 00:09:23.695 ************************************ 00:09:23.695 13:13:38 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:23.695 13:13:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:23.695 13:13:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:23.695 13:13:38 -- common/autotest_common.sh@10 -- # set +x 00:09:23.695 ************************************ 00:09:23.695 START TEST nvme_doorbell_aers 00:09:23.695 ************************************ 00:09:23.695 13:13:38 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers 00:09:23.695 13:13:38 -- nvme/nvme.sh@70 -- # bdfs=() 00:09:23.695 13:13:38 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:23.695 13:13:38 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:23.695 13:13:38 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:23.695 13:13:38 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:23.695 13:13:38 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:23.695 13:13:38 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:23.695 13:13:38 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:23.695 13:13:38 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:23.695 13:13:38 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:23.695 13:13:38 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:23.695 13:13:38 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:23.695 13:13:38 -- nvme/nvme.sh@73 
-- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:09:23.956 [2024-12-16 13:13:38.349601] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63762) is not found. Dropping the request. 00:09:33.937 Executing: test_write_invalid_db 00:09:33.937 Waiting for AER completion... 00:09:33.937 Failure: test_write_invalid_db 00:09:33.937 00:09:33.937 Executing: test_invalid_db_write_overflow_sq 00:09:33.937 Waiting for AER completion... 00:09:33.937 Failure: test_invalid_db_write_overflow_sq 00:09:33.937 00:09:33.937 Executing: test_invalid_db_write_overflow_cq 00:09:33.937 Waiting for AER completion... 00:09:33.937 Failure: test_invalid_db_write_overflow_cq 00:09:33.937 00:09:33.937 13:13:48 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:33.937 13:13:48 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:09:33.937 [2024-12-16 13:13:48.386485] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63762) is not found. Dropping the request. 00:09:43.921 Executing: test_write_invalid_db 00:09:43.921 Waiting for AER completion... 00:09:43.921 Failure: test_write_invalid_db 00:09:43.921 00:09:43.921 Executing: test_invalid_db_write_overflow_sq 00:09:43.921 Waiting for AER completion... 00:09:43.921 Failure: test_invalid_db_write_overflow_sq 00:09:43.921 00:09:43.921 Executing: test_invalid_db_write_overflow_cq 00:09:43.921 Waiting for AER completion... 00:09:43.921 Failure: test_invalid_db_write_overflow_cq 00:09:43.921 00:09:43.921 13:13:58 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:43.921 13:13:58 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:09:43.921 [2024-12-16 13:13:58.410027] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63762) is not found. Dropping the request. 00:09:53.888 Executing: test_write_invalid_db 00:09:53.888 Waiting for AER completion... 00:09:53.888 Failure: test_write_invalid_db 00:09:53.888 00:09:53.888 Executing: test_invalid_db_write_overflow_sq 00:09:53.888 Waiting for AER completion... 00:09:53.888 Failure: test_invalid_db_write_overflow_sq 00:09:53.888 00:09:53.888 Executing: test_invalid_db_write_overflow_cq 00:09:53.888 Waiting for AER completion... 00:09:53.888 Failure: test_invalid_db_write_overflow_cq 00:09:53.888 00:09:53.888 13:14:08 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:53.888 13:14:08 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:09:54.147 [2024-12-16 13:14:08.465044] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63762) is not found. Dropping the request. 00:10:04.114 Executing: test_write_invalid_db 00:10:04.114 Waiting for AER completion... 00:10:04.114 Failure: test_write_invalid_db 00:10:04.114 00:10:04.114 Executing: test_invalid_db_write_overflow_sq 00:10:04.114 Waiting for AER completion... 00:10:04.114 Failure: test_invalid_db_write_overflow_sq 00:10:04.114 00:10:04.114 Executing: test_invalid_db_write_overflow_cq 00:10:04.114 Waiting for AER completion... 
00:10:04.114 Failure: test_invalid_db_write_overflow_cq 00:10:04.114 00:10:04.114 00:10:04.114 real 0m40.187s 00:10:04.114 user 0m34.137s 00:10:04.114 sys 0m5.658s 00:10:04.114 13:14:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:04.114 13:14:18 -- common/autotest_common.sh@10 -- # set +x 00:10:04.114 ************************************ 00:10:04.114 END TEST nvme_doorbell_aers 00:10:04.114 ************************************ 00:10:04.114 13:14:18 -- nvme/nvme.sh@97 -- # uname 00:10:04.114 13:14:18 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:04.114 13:14:18 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:04.114 13:14:18 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:04.114 13:14:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:04.114 13:14:18 -- common/autotest_common.sh@10 -- # set +x 00:10:04.114 ************************************ 00:10:04.114 START TEST nvme_multi_aen 00:10:04.114 ************************************ 00:10:04.114 13:14:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:04.114 [2024-12-16 13:14:18.357937] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:04.114 [2024-12-16 13:14:18.358110] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.114 [2024-12-16 13:14:18.492397] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:10:04.114 [2024-12-16 13:14:18.492526] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63762) is not found. Dropping the request. 00:10:04.114 [2024-12-16 13:14:18.492604] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63762) is not found. Dropping the request. 00:10:04.114 [2024-12-16 13:14:18.492648] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63762) is not found. Dropping the request. 00:10:04.114 [2024-12-16 13:14:18.493805] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:04.114 [2024-12-16 13:14:18.493895] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63762) is not found. Dropping the request. 00:10:04.114 [2024-12-16 13:14:18.493962] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63762) is not found. Dropping the request. 00:10:04.114 [2024-12-16 13:14:18.493992] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63762) is not found. Dropping the request. 00:10:04.114 [2024-12-16 13:14:18.494854] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:10:04.114 [2024-12-16 13:14:18.494920] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63762) is not found. Dropping the request. 00:10:04.114 [2024-12-16 13:14:18.494984] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63762) is not found. Dropping the request. 
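The nvme_doorbell_aers run above iterates over every detected controller and drives each one through the same binary under a timeout. A minimal sketch reconstructed from the xtrace lines above (nvme.sh@70-73 and the get_nvme_bdfs trace); the paths, the jq filter, and the 10-second timeout are taken verbatim from the trace, while the function framing is our reading of it:

    # Sketch of the traced per-controller loop; not the exact script.
    get_nvme_bdfs() {
        local bdfs
        # gen_nvme.sh emits a JSON config; pull each controller's PCI address
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} == 0 )) && return 1
        printf '%s\n' "${bdfs[@]}"
    }

    bdfs=($(get_nvme_bdfs))
    for bdf in "${bdfs[@]}"; do
        # Cap each per-controller run at 10 s; --preserve-status keeps the
        # test binary's own exit code rather than timeout's 124.
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done

The per-case Failure lines come from the doorbell_aers binary itself; the suite proceeds past them and reaches END TEST, so pass/fail evidently rides on the process exit status under timeout, not on those strings.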
00:10:04.114 [2024-12-16 13:14:18.495011] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63762) is not found. Dropping the request. 00:10:04.114 [2024-12-16 13:14:18.495917] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:04.114 [2024-12-16 13:14:18.495981] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63762) is not found. Dropping the request. 00:10:04.114 [2024-12-16 13:14:18.496046] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63762) is not found. Dropping the request. 00:10:04.114 [2024-12-16 13:14:18.496074] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63762) is not found. Dropping the request. 00:10:04.114 [2024-12-16 13:14:18.505251] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:04.114 [2024-12-16 13:14:18.505732] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 Child process pid: 64288 00:10:04.114 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.373 [Child] Asynchronous Event Request test 00:10:04.373 [Child] Attached to 0000:00:09.0 00:10:04.373 [Child] Attached to 0000:00:06.0 00:10:04.373 [Child] Attached to 0000:00:07.0 00:10:04.373 [Child] Attached to 0000:00:08.0 00:10:04.373 [Child] Registering asynchronous event callbacks... 00:10:04.373 [Child] Getting orig temperature thresholds of all controllers 00:10:04.373 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:04.373 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:04.373 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:04.373 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:04.373 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:04.373 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:04.373 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:04.373 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:04.373 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:04.373 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:04.373 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:04.373 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:04.373 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:04.373 [Child] Cleaning up... 00:10:04.373 Asynchronous Event Request test 00:10:04.373 Attached to 0000:00:09.0 00:10:04.373 Attached to 0000:00:06.0 00:10:04.373 Attached to 0000:00:07.0 00:10:04.373 Attached to 0000:00:08.0 00:10:04.373 Reset controller to setup AER completions for this process 00:10:04.373 Registering asynchronous event callbacks... 
00:10:04.373 Getting orig temperature thresholds of all controllers 00:10:04.373 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:04.373 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:04.373 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:04.373 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:04.373 Setting all controllers temperature threshold low to trigger AER 00:10:04.373 Waiting for all controllers temperature threshold to be set lower 00:10:04.373 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:04.373 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:10:04.373 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:04.373 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:10:04.373 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:04.373 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:10:04.374 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:04.374 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:10:04.374 Waiting for all controllers to trigger AER and reset threshold 00:10:04.374 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:04.374 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:04.374 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:04.374 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:04.374 Cleaning up... 00:10:04.374 00:10:04.374 real 0m0.416s 00:10:04.374 user 0m0.125s 00:10:04.374 sys 0m0.179s 00:10:04.374 ************************************ 00:10:04.374 END TEST nvme_multi_aen 00:10:04.374 ************************************ 00:10:04.374 13:14:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:04.374 13:14:18 -- common/autotest_common.sh@10 -- # set +x 00:10:04.374 13:14:18 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:04.374 13:14:18 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:04.374 13:14:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:04.374 13:14:18 -- common/autotest_common.sh@10 -- # set +x 00:10:04.374 ************************************ 00:10:04.374 START TEST nvme_startup 00:10:04.374 ************************************ 00:10:04.374 13:14:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:04.632 Initializing NVMe Controllers 00:10:04.632 Attached to 0000:00:09.0 00:10:04.632 Attached to 0000:00:06.0 00:10:04.632 Attached to 0000:00:07.0 00:10:04.632 Attached to 0000:00:08.0 00:10:04.632 Initialization complete. 00:10:04.632 Time used:144235.547 (us). 
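Each test in this stream is driven through the run_test wrapper, which produces the START/END banners and the real/user/sys timing trio seen around every test body. A hypothetical reconstruction of that wrapper, assuming only the behavior implied by the log (the actual autotest_common.sh implementation also manages xtrace state, as the xtrace_disable calls suggest):

    # Sketch of run_test inferred from its visible output; an assumption, not the real source.
    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"          # emits the real/user/sys trio logged after each test
        local rc=$?
        echo "END TEST $name"
        return $rc
    }

Usage matches the traced calls, e.g. run_test nvme_startup "$rootdir/test/nvme/startup/startup" -t 1000000, where the -t value comes straight from the trace (its unit is not stated in this log).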
00:10:04.632 ************************************ 00:10:04.632 END TEST nvme_startup 00:10:04.632 ************************************ 00:10:04.632 00:10:04.632 real 0m0.198s 00:10:04.632 user 0m0.054s 00:10:04.632 sys 0m0.091s 00:10:04.632 13:14:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:04.632 13:14:18 -- common/autotest_common.sh@10 -- # set +x 00:10:04.632 13:14:19 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:04.632 13:14:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:04.632 13:14:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:04.632 13:14:19 -- common/autotest_common.sh@10 -- # set +x 00:10:04.632 ************************************ 00:10:04.632 START TEST nvme_multi_secondary 00:10:04.632 ************************************ 00:10:04.632 13:14:19 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary 00:10:04.632 13:14:19 -- nvme/nvme.sh@52 -- # pid0=64344 00:10:04.632 13:14:19 -- nvme/nvme.sh@54 -- # pid1=64345 00:10:04.632 13:14:19 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:04.632 13:14:19 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:04.632 13:14:19 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:07.931 Initializing NVMe Controllers 00:10:07.931 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:07.931 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:07.931 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:07.931 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:07.931 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:07.931 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:07.931 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:07.931 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:07.931 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:07.931 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:07.931 Initialization complete. Launching workers. 
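The three spdk_nvme_perf invocations just traced implement the multi-secondary pattern: several perf processes share DPDK state through the same shared-memory group ID (-i 0) while running on disjoint core masks, with the first instance up acting as the DPDK primary. A sketch of the shape of that setup, with the queue depth, I/O size, durations, and core masks taken from the trace (the backgrounding and wait are our framing; nvme.sh actually tracks pid0/pid1 as shown above):

    PERF="$rootdir/build/bin/spdk_nvme_perf"
    # Longest run on core 0 (-c 0x1), so the other instances attach and finish under it.
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
    # Shorter runs on cores 1 and 2, same shared-memory group (-i 0).
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &
    wait

The IOPS/latency tables that follow report each instance separately; note how the contended 3-second runs show markedly higher average latency than the solo portions of the run.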
00:10:07.931 ======================================================== 00:10:07.931 Latency(us) 00:10:07.931 Device Information : IOPS MiB/s Average min max 00:10:07.931 PCIE (0000:00:09.0) NSID 1 from core 1: 8035.27 31.39 1990.82 725.89 6089.32 00:10:07.931 PCIE (0000:00:06.0) NSID 1 from core 1: 8035.27 31.39 1989.91 710.65 6074.96 00:10:07.931 PCIE (0000:00:07.0) NSID 1 from core 1: 8035.27 31.39 1990.80 722.40 6179.14 00:10:07.931 PCIE (0000:00:08.0) NSID 1 from core 1: 8035.27 31.39 1990.78 732.17 5849.23 00:10:07.931 PCIE (0000:00:08.0) NSID 2 from core 1: 8035.27 31.39 1990.87 732.99 6555.18 00:10:07.931 PCIE (0000:00:08.0) NSID 3 from core 1: 8035.27 31.39 1990.93 736.15 6644.73 00:10:07.931 ======================================================== 00:10:07.931 Total : 48211.59 188.33 1990.68 710.65 6644.73 00:10:07.931 00:10:07.931 Initializing NVMe Controllers 00:10:07.931 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:07.931 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:07.931 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:07.931 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:07.931 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:07.931 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:07.931 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:07.931 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:07.931 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:07.931 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:07.931 Initialization complete. Launching workers. 00:10:07.931 ======================================================== 00:10:07.931 Latency(us) 00:10:07.931 Device Information : IOPS MiB/s Average min max 00:10:07.931 PCIE (0000:00:09.0) NSID 1 from core 2: 3187.19 12.45 5012.52 876.42 17044.66 00:10:07.931 PCIE (0000:00:06.0) NSID 1 from core 2: 3187.19 12.45 5011.32 880.68 14003.11 00:10:07.931 PCIE (0000:00:07.0) NSID 1 from core 2: 3187.19 12.45 5012.54 865.97 13649.62 00:10:07.931 PCIE (0000:00:08.0) NSID 1 from core 2: 3187.19 12.45 5012.52 901.52 14040.05 00:10:07.931 PCIE (0000:00:08.0) NSID 2 from core 2: 3187.19 12.45 5012.55 1049.82 14336.27 00:10:07.931 PCIE (0000:00:08.0) NSID 3 from core 2: 3187.19 12.45 5011.83 979.07 13680.69 00:10:07.931 ======================================================== 00:10:07.931 Total : 19123.14 74.70 5012.21 865.97 17044.66 00:10:07.931 00:10:07.931 13:14:22 -- nvme/nvme.sh@56 -- # wait 64344 00:10:09.980 Initializing NVMe Controllers 00:10:09.980 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:09.980 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:09.980 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:09.980 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:09.980 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:09.980 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:09.980 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:09.980 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:09.980 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:09.980 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:09.980 Initialization complete. Launching workers. 
00:10:09.980 ======================================================== 00:10:09.980 Latency(us) 00:10:09.980 Device Information : IOPS MiB/s Average min max 00:10:09.980 PCIE (0000:00:09.0) NSID 1 from core 0: 10997.01 42.96 1454.57 710.73 6793.55 00:10:09.980 PCIE (0000:00:06.0) NSID 1 from core 0: 10997.01 42.96 1453.70 692.05 6898.18 00:10:09.980 PCIE (0000:00:07.0) NSID 1 from core 0: 10997.01 42.96 1454.53 641.29 6832.46 00:10:09.980 PCIE (0000:00:08.0) NSID 1 from core 0: 10997.01 42.96 1454.52 623.10 6648.73 00:10:09.980 PCIE (0000:00:08.0) NSID 2 from core 0: 10997.01 42.96 1454.51 609.70 6850.00 00:10:09.980 PCIE (0000:00:08.0) NSID 3 from core 0: 10997.01 42.96 1454.49 587.72 6830.60 00:10:09.980 ======================================================== 00:10:09.980 Total : 65982.06 257.74 1454.39 587.72 6898.18 00:10:09.980 00:10:09.980 13:14:24 -- nvme/nvme.sh@57 -- # wait 64345 00:10:09.980 13:14:24 -- nvme/nvme.sh@61 -- # pid0=64414 00:10:09.980 13:14:24 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:09.980 13:14:24 -- nvme/nvme.sh@63 -- # pid1=64415 00:10:09.980 13:14:24 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:09.980 13:14:24 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:13.262 Initializing NVMe Controllers 00:10:13.262 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:13.262 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:13.262 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:13.262 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:13.262 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:13.262 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:13.262 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:13.262 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:13.262 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:13.262 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:13.262 Initialization complete. Launching workers. 
00:10:13.262 ======================================================== 00:10:13.262 Latency(us) 00:10:13.262 Device Information : IOPS MiB/s Average min max 00:10:13.262 PCIE (0000:00:09.0) NSID 1 from core 0: 7824.20 30.56 2044.61 719.26 6003.02 00:10:13.262 PCIE (0000:00:06.0) NSID 1 from core 0: 7824.20 30.56 2043.90 693.75 5985.53 00:10:13.262 PCIE (0000:00:07.0) NSID 1 from core 0: 7824.20 30.56 2044.75 713.64 5885.55 00:10:13.262 PCIE (0000:00:08.0) NSID 1 from core 0: 7824.20 30.56 2044.72 720.43 6046.24 00:10:13.262 PCIE (0000:00:08.0) NSID 2 from core 0: 7824.20 30.56 2044.76 717.70 6130.50 00:10:13.262 PCIE (0000:00:08.0) NSID 3 from core 0: 7824.20 30.56 2044.73 718.94 6156.35 00:10:13.262 ======================================================== 00:10:13.262 Total : 46945.22 183.38 2044.58 693.75 6156.35 00:10:13.262 00:10:13.262 Initializing NVMe Controllers 00:10:13.262 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:13.262 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:13.262 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:13.262 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:13.262 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:13.262 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:13.262 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:13.262 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:13.262 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:13.262 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:13.262 Initialization complete. Launching workers. 00:10:13.262 ======================================================== 00:10:13.262 Latency(us) 00:10:13.262 Device Information : IOPS MiB/s Average min max 00:10:13.262 PCIE (0000:00:09.0) NSID 1 from core 1: 7859.31 30.70 2035.38 724.46 5725.12 00:10:13.262 PCIE (0000:00:06.0) NSID 1 from core 1: 7859.31 30.70 2034.49 703.44 5776.13 00:10:13.262 PCIE (0000:00:07.0) NSID 1 from core 1: 7859.31 30.70 2035.34 725.04 5966.41 00:10:13.262 PCIE (0000:00:08.0) NSID 1 from core 1: 7859.31 30.70 2035.38 729.11 6299.05 00:10:13.262 PCIE (0000:00:08.0) NSID 2 from core 1: 7859.31 30.70 2035.34 719.94 6048.26 00:10:13.262 PCIE (0000:00:08.0) NSID 3 from core 1: 7859.31 30.70 2035.31 734.20 5866.54 00:10:13.262 ======================================================== 00:10:13.262 Total : 47155.88 184.20 2035.21 703.44 6299.05 00:10:13.262 00:10:15.793 Initializing NVMe Controllers 00:10:15.793 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:15.793 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:15.793 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:15.793 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:15.793 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:15.793 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:15.793 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:15.793 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:15.793 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:15.793 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:15.793 Initialization complete. Launching workers. 
00:10:15.793 ======================================================== 00:10:15.793 Latency(us) 00:10:15.793 Device Information : IOPS MiB/s Average min max 00:10:15.793 PCIE (0000:00:09.0) NSID 1 from core 2: 4846.92 18.93 3300.05 740.93 12589.03 00:10:15.793 PCIE (0000:00:06.0) NSID 1 from core 2: 4846.92 18.93 3299.08 723.88 12734.34 00:10:15.793 PCIE (0000:00:07.0) NSID 1 from core 2: 4846.92 18.93 3300.45 705.49 12482.77 00:10:15.793 PCIE (0000:00:08.0) NSID 1 from core 2: 4846.92 18.93 3300.23 742.38 13134.84 00:10:15.793 PCIE (0000:00:08.0) NSID 2 from core 2: 4846.92 18.93 3300.33 747.68 13099.14 00:10:15.793 PCIE (0000:00:08.0) NSID 3 from core 2: 4846.92 18.93 3300.29 744.10 12701.25 00:10:15.793 ======================================================== 00:10:15.793 Total : 29081.50 113.60 3300.07 705.49 13134.84 00:10:15.793 00:10:15.793 ************************************ 00:10:15.793 END TEST nvme_multi_secondary 00:10:15.793 ************************************ 00:10:15.793 13:14:30 -- nvme/nvme.sh@65 -- # wait 64414 00:10:15.793 13:14:30 -- nvme/nvme.sh@66 -- # wait 64415 00:10:15.793 00:10:15.793 real 0m11.028s 00:10:15.793 user 0m18.641s 00:10:15.793 sys 0m0.619s 00:10:15.793 13:14:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:15.793 13:14:30 -- common/autotest_common.sh@10 -- # set +x 00:10:15.793 13:14:30 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:15.793 13:14:30 -- nvme/nvme.sh@102 -- # kill_stub 00:10:15.793 13:14:30 -- common/autotest_common.sh@1075 -- # [[ -e /proc/63345 ]] 00:10:15.793 13:14:30 -- common/autotest_common.sh@1076 -- # kill 63345 00:10:15.793 13:14:30 -- common/autotest_common.sh@1077 -- # wait 63345 00:10:15.793 [2024-12-16 13:14:30.191117] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64287) is not found. Dropping the request. 00:10:15.793 [2024-12-16 13:14:30.191171] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64287) is not found. Dropping the request. 00:10:15.793 [2024-12-16 13:14:30.191183] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64287) is not found. Dropping the request. 00:10:15.793 [2024-12-16 13:14:30.191194] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64287) is not found. Dropping the request. 00:10:16.364 [2024-12-16 13:14:30.711315] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64287) is not found. Dropping the request. 00:10:16.364 [2024-12-16 13:14:30.711389] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64287) is not found. Dropping the request. 00:10:16.364 [2024-12-16 13:14:30.711407] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64287) is not found. Dropping the request. 00:10:16.364 [2024-12-16 13:14:30.711423] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64287) is not found. Dropping the request. 00:10:16.937 [2024-12-16 13:14:31.209572] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64287) is not found. Dropping the request. 00:10:16.937 [2024-12-16 13:14:31.210001] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64287) is not found. 
Dropping the request. 00:10:16.937 [2024-12-16 13:14:31.210027] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64287) is not found. Dropping the request. 00:10:16.937 [2024-12-16 13:14:31.210040] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64287) is not found. Dropping the request. 00:10:18.323 [2024-12-16 13:14:32.719426] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64287) is not found. Dropping the request. 00:10:18.323 [2024-12-16 13:14:32.719707] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64287) is not found. Dropping the request. 00:10:18.323 [2024-12-16 13:14:32.719806] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64287) is not found. Dropping the request. 00:10:18.323 [2024-12-16 13:14:32.719858] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64287) is not found. Dropping the request. 00:10:18.323 13:14:32 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:10:18.584 13:14:32 -- common/autotest_common.sh@1083 -- # echo 2 00:10:18.584 13:14:32 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:18.584 13:14:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:18.584 13:14:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:18.584 13:14:32 -- common/autotest_common.sh@10 -- # set +x 00:10:18.584 ************************************ 00:10:18.584 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:18.584 ************************************ 00:10:18.584 13:14:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:18.584 * Looking for test storage... 00:10:18.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:18.584 13:14:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:18.584 13:14:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:18.584 13:14:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:18.584 13:14:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:18.584 13:14:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:18.584 13:14:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:18.584 13:14:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:18.584 13:14:33 -- scripts/common.sh@335 -- # IFS=.-: 00:10:18.584 13:14:33 -- scripts/common.sh@335 -- # read -ra ver1 00:10:18.584 13:14:33 -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.584 13:14:33 -- scripts/common.sh@336 -- # read -ra ver2 00:10:18.584 13:14:33 -- scripts/common.sh@337 -- # local 'op=<' 00:10:18.584 13:14:33 -- scripts/common.sh@339 -- # ver1_l=2 00:10:18.584 13:14:33 -- scripts/common.sh@340 -- # ver2_l=1 00:10:18.584 13:14:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:18.584 13:14:33 -- scripts/common.sh@343 -- # case "$op" in 00:10:18.584 13:14:33 -- scripts/common.sh@344 -- # : 1 00:10:18.584 13:14:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:18.584 13:14:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:18.584 13:14:33 -- scripts/common.sh@364 -- # decimal 1 00:10:18.584 13:14:33 -- scripts/common.sh@352 -- # local d=1 00:10:18.584 13:14:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.584 13:14:33 -- scripts/common.sh@354 -- # echo 1 00:10:18.584 13:14:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:18.584 13:14:33 -- scripts/common.sh@365 -- # decimal 2 00:10:18.584 13:14:33 -- scripts/common.sh@352 -- # local d=2 00:10:18.584 13:14:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.584 13:14:33 -- scripts/common.sh@354 -- # echo 2 00:10:18.584 13:14:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:18.584 13:14:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:18.584 13:14:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:18.584 13:14:33 -- scripts/common.sh@367 -- # return 0 00:10:18.584 13:14:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.585 13:14:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:18.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.585 --rc genhtml_branch_coverage=1 00:10:18.585 --rc genhtml_function_coverage=1 00:10:18.585 --rc genhtml_legend=1 00:10:18.585 --rc geninfo_all_blocks=1 00:10:18.585 --rc geninfo_unexecuted_blocks=1 00:10:18.585 00:10:18.585 ' 00:10:18.585 13:14:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:18.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.585 --rc genhtml_branch_coverage=1 00:10:18.585 --rc genhtml_function_coverage=1 00:10:18.585 --rc genhtml_legend=1 00:10:18.585 --rc geninfo_all_blocks=1 00:10:18.585 --rc geninfo_unexecuted_blocks=1 00:10:18.585 00:10:18.585 ' 00:10:18.585 13:14:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:18.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.585 --rc genhtml_branch_coverage=1 00:10:18.585 --rc genhtml_function_coverage=1 00:10:18.585 --rc genhtml_legend=1 00:10:18.585 --rc geninfo_all_blocks=1 00:10:18.585 --rc geninfo_unexecuted_blocks=1 00:10:18.585 00:10:18.585 ' 00:10:18.585 13:14:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:18.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.585 --rc genhtml_branch_coverage=1 00:10:18.585 --rc genhtml_function_coverage=1 00:10:18.585 --rc genhtml_legend=1 00:10:18.585 --rc geninfo_all_blocks=1 00:10:18.585 --rc geninfo_unexecuted_blocks=1 00:10:18.585 00:10:18.585 ' 00:10:18.585 13:14:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:18.585 13:14:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:18.585 13:14:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:18.585 13:14:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:18.585 13:14:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:18.585 13:14:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:18.585 13:14:33 -- common/autotest_common.sh@1519 -- # bdfs=() 00:10:18.585 13:14:33 -- common/autotest_common.sh@1519 -- # local bdfs 00:10:18.585 13:14:33 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:10:18.585 13:14:33 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:10:18.585 13:14:33 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:18.585 13:14:33 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:18.585 13:14:33 -- common/autotest_common.sh@1509 
-- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:18.585 13:14:33 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:18.585 13:14:33 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:18.585 13:14:33 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:18.585 13:14:33 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:18.585 13:14:33 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:10:18.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.585 13:14:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:10:18.585 13:14:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:10:18.585 13:14:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64606 00:10:18.585 13:14:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:18.585 13:14:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64606 00:10:18.585 13:14:33 -- common/autotest_common.sh@829 -- # '[' -z 64606 ']' 00:10:18.585 13:14:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.585 13:14:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:18.585 13:14:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.585 13:14:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:18.585 13:14:33 -- common/autotest_common.sh@10 -- # set +x 00:10:18.585 13:14:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:18.846 [2024-12-16 13:14:33.188194] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
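The RPC sequence traced below exercises a deliberately stuck admin command: an error injection holds one admin command for up to 15 s (err_injection_timeout=15000000) without submitting it, and a controller reset is then expected to complete it manually with the injected status (sct 0, sc 1), as the "Command completed manually" notice later confirms. The traced calls, condensed into a sketch (all commands, flags, and the base64 payload are copied from the trace; only the pid bookkeeping is our framing):

    rpc="$rootdir/scripts/rpc.py"
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0
    # Hold the next matching admin command (opc 10 = GET FEATURES) for up to 15 s
    # and complete it with sct 0 / sc 1 instead of submitting it to the device.
    "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    "$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
    cmd_pid=$!
    sleep 2
    "$rpc" bdev_nvme_reset_controller nvme0   # reset completes the stuck command manually
    wait "$cmd_pid"

The test then decodes the returned cpl (the base64_decode_bits calls below) and asserts both that the status matches the injected sct/sc and that the stall lasted no longer than test_timeout=5 s (diff_time=2 in this run).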
00:10:18.846 [2024-12-16 13:14:33.188305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64606 ] 00:10:18.846 [2024-12-16 13:14:33.344973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:19.107 [2024-12-16 13:14:33.521135] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:19.107 [2024-12-16 13:14:33.521694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.107 [2024-12-16 13:14:33.521865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:19.107 [2024-12-16 13:14:33.522154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:19.107 [2024-12-16 13:14:33.522261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.489 13:14:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:20.489 13:14:34 -- common/autotest_common.sh@862 -- # return 0 00:10:20.489 13:14:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:10:20.489 13:14:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.489 13:14:34 -- common/autotest_common.sh@10 -- # set +x 00:10:20.489 nvme0n1 00:10:20.489 13:14:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.489 13:14:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:20.489 13:14:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_uQqEL.txt 00:10:20.489 13:14:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:20.489 13:14:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.489 13:14:34 -- common/autotest_common.sh@10 -- # set +x 00:10:20.489 true 00:10:20.489 13:14:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.489 13:14:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:20.489 13:14:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1734354874 00:10:20.489 13:14:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64631 00:10:20.489 13:14:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:20.489 13:14:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:20.489 13:14:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:22.388 13:14:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.388 13:14:36 -- common/autotest_common.sh@10 -- # set +x 00:10:22.388 [2024-12-16 13:14:36.778949] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:22.388 [2024-12-16 13:14:36.779204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:22.388 [2024-12-16 13:14:36.779228] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:22.388 [2024-12-16 13:14:36.779238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.388 [2024-12-16 13:14:36.780590] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:22.388 13:14:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64631 00:10:22.388 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64631 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64631 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:22.388 13:14:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.388 13:14:36 -- common/autotest_common.sh@10 -- # set +x 00:10:22.388 13:14:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_uQqEL.txt 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_uQqEL.txt 00:10:22.388 13:14:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64606 00:10:22.388 13:14:36 -- common/autotest_common.sh@936 -- # '[' -z 64606 ']' 00:10:22.388 13:14:36 -- common/autotest_common.sh@940 -- # kill -0 64606 00:10:22.388 13:14:36 -- common/autotest_common.sh@941 -- # uname 00:10:22.388 13:14:36 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:22.388 13:14:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64606 00:10:22.388 killing process with pid 64606 00:10:22.388 13:14:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:22.388 13:14:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:22.388 13:14:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64606' 00:10:22.388 13:14:36 -- common/autotest_common.sh@955 -- # kill 64606 00:10:22.388 13:14:36 -- common/autotest_common.sh@960 -- # wait 64606 00:10:23.763 13:14:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:23.763 13:14:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:23.763 00:10:23.763 real 0m5.164s 00:10:23.763 user 0m18.314s 00:10:23.763 sys 0m0.488s 00:10:23.763 13:14:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:23.763 ************************************ 00:10:23.763 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:23.763 ************************************ 00:10:23.763 13:14:38 -- common/autotest_common.sh@10 -- # set +x 00:10:23.763 13:14:38 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:23.764 13:14:38 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:23.764 13:14:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:23.764 13:14:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:23.764 13:14:38 -- common/autotest_common.sh@10 -- # set +x 00:10:23.764 ************************************ 00:10:23.764 START TEST nvme_fio 00:10:23.764 ************************************ 00:10:23.764 13:14:38 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:10:23.764 13:14:38 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:23.764 13:14:38 -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:23.764 13:14:38 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:23.764 13:14:38 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:23.764 13:14:38 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:23.764 13:14:38 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:23.764 13:14:38 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:23.764 13:14:38 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:23.764 13:14:38 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:23.764 13:14:38 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:23.764 13:14:38 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:10:23.764 13:14:38 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:23.764 13:14:38 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:23.764 13:14:38 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:23.764 13:14:38 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:24.022 13:14:38 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:24.022 13:14:38 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:24.022 13:14:38 -- nvme/nvme.sh@41 -- # bs=4096 00:10:24.022 13:14:38 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:24.022 13:14:38 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:24.022 13:14:38 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:24.022 13:14:38 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:24.022 13:14:38 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:24.022 13:14:38 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:24.022 13:14:38 -- common/autotest_common.sh@1330 -- # shift 00:10:24.022 13:14:38 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:24.022 13:14:38 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:24.022 13:14:38 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:24.022 13:14:38 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:24.022 13:14:38 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:24.282 13:14:38 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:24.282 13:14:38 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:24.282 13:14:38 -- common/autotest_common.sh@1336 -- # break 00:10:24.282 13:14:38 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:24.282 13:14:38 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:24.282 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:24.282 fio-3.35 00:10:24.282 Starting 1 thread 00:10:29.594 00:10:29.594 test: (groupid=0, jobs=1): err= 0: pid=64766: Mon Dec 16 13:14:44 2024 00:10:29.594 read: IOPS=21.5k, BW=83.9MiB/s (88.0MB/s)(168MiB/2001msec) 00:10:29.594 slat (nsec): min=3233, max=72997, avg=5081.89, stdev=2534.34 00:10:29.594 clat (usec): min=231, max=8658, avg=2965.69, stdev=1086.45 00:10:29.594 lat (usec): min=235, max=8670, avg=2970.77, stdev=1087.69 00:10:29.594 clat percentiles (usec): 00:10:29.594 | 1.00th=[ 1876], 5.00th=[ 2147], 10.00th=[ 2245], 20.00th=[ 2311], 00:10:29.594 | 30.00th=[ 2376], 40.00th=[ 2442], 50.00th=[ 2507], 60.00th=[ 2671], 00:10:29.594 | 70.00th=[ 2868], 80.00th=[ 3294], 90.00th=[ 4752], 95.00th=[ 5604], 00:10:29.594 | 99.00th=[ 6783], 99.50th=[ 7242], 99.90th=[ 7963], 99.95th=[ 8160], 00:10:29.594 | 99.99th=[ 8356] 00:10:29.594 bw ( KiB/s): min=78736, max=84952, per=94.53%, avg=81194.67, stdev=3305.23, samples=3 00:10:29.594 iops : min=19684, max=21238, avg=20298.67, stdev=826.31, samples=3 00:10:29.594 write: IOPS=21.3k, BW=83.2MiB/s (87.3MB/s)(167MiB/2001msec); 0 zone resets 00:10:29.594 slat (nsec): min=3281, max=73277, avg=5236.50, stdev=2467.49 00:10:29.594 clat (usec): min=216, max=8727, avg=2993.95, stdev=1099.11 00:10:29.594 lat (usec): min=221, max=8738, avg=2999.19, stdev=1100.31 00:10:29.594 clat percentiles (usec): 00:10:29.594 | 1.00th=[ 1893], 5.00th=[ 2180], 10.00th=[ 2245], 20.00th=[ 2311], 00:10:29.594 | 30.00th=[ 2376], 40.00th=[ 2442], 50.00th=[ 2540], 60.00th=[ 2704], 00:10:29.594 | 70.00th=[ 2900], 80.00th=[ 3359], 90.00th=[ 4817], 95.00th=[ 5604], 00:10:29.594 | 99.00th=[ 6849], 99.50th=[ 
7308], 99.90th=[ 8029], 99.95th=[ 8160], 00:10:29.594 | 99.99th=[ 8356] 00:10:29.594 bw ( KiB/s): min=78584, max=85152, per=95.45%, avg=81357.33, stdev=3401.03, samples=3 00:10:29.594 iops : min=19646, max=21288, avg=20339.33, stdev=850.26, samples=3 00:10:29.594 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.03% 00:10:29.594 lat (msec) : 2=1.52%, 4=83.81%, 10=14.60% 00:10:29.594 cpu : usr=99.00%, sys=0.10%, ctx=6, majf=0, minf=608 00:10:29.594 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:29.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.594 issued rwts: total=42968,42638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.594 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.594 00:10:29.594 Run status group 0 (all jobs): 00:10:29.594 READ: bw=83.9MiB/s (88.0MB/s), 83.9MiB/s-83.9MiB/s (88.0MB/s-88.0MB/s), io=168MiB (176MB), run=2001-2001msec 00:10:29.594 WRITE: bw=83.2MiB/s (87.3MB/s), 83.2MiB/s-83.2MiB/s (87.3MB/s-87.3MB/s), io=167MiB (175MB), run=2001-2001msec 00:10:29.852 ----------------------------------------------------- 00:10:29.852 Suppressions used: 00:10:29.852 count bytes template 00:10:29.852 1 32 /usr/src/fio/parse.c 00:10:29.852 1 8 libtcmalloc_minimal.so 00:10:29.852 ----------------------------------------------------- 00:10:29.852 00:10:29.852 13:14:44 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:29.852 13:14:44 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:29.852 13:14:44 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:29.852 13:14:44 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:30.112 13:14:44 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:30.112 13:14:44 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:30.112 13:14:44 -- nvme/nvme.sh@41 -- # bs=4096 00:10:30.112 13:14:44 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:30.112 13:14:44 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:30.112 13:14:44 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:30.112 13:14:44 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:30.112 13:14:44 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:30.112 13:14:44 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:30.112 13:14:44 -- common/autotest_common.sh@1330 -- # shift 00:10:30.112 13:14:44 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:30.112 13:14:44 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:30.112 13:14:44 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:30.112 13:14:44 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:30.112 13:14:44 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:30.112 13:14:44 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:30.112 13:14:44 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 
]] 00:10:30.112 13:14:44 -- common/autotest_common.sh@1336 -- # break 00:10:30.112 13:14:44 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:30.112 13:14:44 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:30.371 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:30.372 fio-3.35 00:10:30.372 Starting 1 thread 00:10:35.659 00:10:35.659 test: (groupid=0, jobs=1): err= 0: pid=64836: Mon Dec 16 13:14:50 2024 00:10:35.659 read: IOPS=19.5k, BW=76.1MiB/s (79.8MB/s)(152MiB/2001msec) 00:10:35.659 slat (nsec): min=3327, max=79584, avg=5254.10, stdev=2691.75 00:10:35.659 clat (usec): min=215, max=9528, avg=3263.91, stdev=1207.87 00:10:35.659 lat (usec): min=219, max=9541, avg=3269.17, stdev=1209.10 00:10:35.659 clat percentiles (usec): 00:10:35.659 | 1.00th=[ 1893], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2409], 00:10:35.659 | 30.00th=[ 2540], 40.00th=[ 2638], 50.00th=[ 2802], 60.00th=[ 2999], 00:10:35.659 | 70.00th=[ 3359], 80.00th=[ 4080], 90.00th=[ 5211], 95.00th=[ 5866], 00:10:35.659 | 99.00th=[ 7111], 99.50th=[ 7635], 99.90th=[ 8586], 99.95th=[ 8848], 00:10:35.659 | 99.99th=[ 9110] 00:10:35.659 bw ( KiB/s): min=74856, max=82072, per=99.59%, avg=77645.33, stdev=3876.64, samples=3 00:10:35.659 iops : min=18714, max=20518, avg=19411.33, stdev=969.16, samples=3 00:10:35.659 write: IOPS=19.5k, BW=76.0MiB/s (79.7MB/s)(152MiB/2001msec); 0 zone resets 00:10:35.659 slat (nsec): min=3404, max=76025, avg=5382.42, stdev=2756.76 00:10:35.659 clat (usec): min=267, max=9595, avg=3281.39, stdev=1192.24 00:10:35.659 lat (usec): min=271, max=9599, avg=3286.77, stdev=1193.46 00:10:35.659 clat percentiles (usec): 00:10:35.659 | 1.00th=[ 1926], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2442], 00:10:35.659 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 2835], 60.00th=[ 3032], 00:10:35.659 | 70.00th=[ 3359], 80.00th=[ 4080], 90.00th=[ 5211], 95.00th=[ 5866], 00:10:35.659 | 99.00th=[ 7111], 99.50th=[ 7504], 99.90th=[ 8717], 99.95th=[ 8848], 00:10:35.659 | 99.99th=[ 9241] 00:10:35.659 bw ( KiB/s): min=75424, max=81744, per=100.00%, avg=77834.67, stdev=3416.15, samples=3 00:10:35.659 iops : min=18856, max=20436, avg=19458.67, stdev=854.04, samples=3 00:10:35.659 lat (usec) : 250=0.01%, 500=0.01%, 750=0.03%, 1000=0.02% 00:10:35.659 lat (msec) : 2=1.55%, 4=77.61%, 10=20.79% 00:10:35.659 cpu : usr=99.05%, sys=0.00%, ctx=4, majf=0, minf=608 00:10:35.659 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:35.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.659 issued rwts: total=39002,38938,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.659 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.659 00:10:35.659 Run status group 0 (all jobs): 00:10:35.659 READ: bw=76.1MiB/s (79.8MB/s), 76.1MiB/s-76.1MiB/s (79.8MB/s-79.8MB/s), io=152MiB (160MB), run=2001-2001msec 00:10:35.659 WRITE: bw=76.0MiB/s (79.7MB/s), 76.0MiB/s-76.0MiB/s (79.7MB/s-79.7MB/s), io=152MiB (159MB), run=2001-2001msec 00:10:35.919 ----------------------------------------------------- 00:10:35.919 Suppressions used: 00:10:35.919 count bytes template 00:10:35.919 1 32 /usr/src/fio/parse.c 00:10:35.919 1 8 libtcmalloc_minimal.so 
00:10:35.919 ----------------------------------------------------- 00:10:35.919 00:10:35.919 13:14:50 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:35.919 13:14:50 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:35.919 13:14:50 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:35.919 13:14:50 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:36.180 13:14:50 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:36.180 13:14:50 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:36.441 13:14:50 -- nvme/nvme.sh@41 -- # bs=4096 00:10:36.441 13:14:50 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:36.441 13:14:50 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:36.442 13:14:50 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:36.442 13:14:50 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:36.442 13:14:50 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:36.442 13:14:50 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:36.442 13:14:50 -- common/autotest_common.sh@1330 -- # shift 00:10:36.442 13:14:50 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:36.442 13:14:50 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:36.442 13:14:50 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:36.442 13:14:50 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:36.442 13:14:50 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:36.442 13:14:50 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:36.442 13:14:50 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:36.442 13:14:50 -- common/autotest_common.sh@1336 -- # break 00:10:36.442 13:14:50 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:36.442 13:14:50 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:36.702 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:36.702 fio-3.35 00:10:36.702 Starting 1 thread 00:10:41.991 00:10:41.991 test: (groupid=0, jobs=1): err= 0: pid=64918: Mon Dec 16 13:14:55 2024 00:10:41.991 read: IOPS=14.6k, BW=57.1MiB/s (59.8MB/s)(114MiB/2001msec) 00:10:41.991 slat (nsec): min=4589, max=78143, avg=6744.87, stdev=3874.85 00:10:41.991 clat (usec): min=708, max=11331, avg=4333.95, stdev=1577.34 00:10:41.991 lat (usec): min=713, max=11386, avg=4340.69, stdev=1578.83 00:10:41.991 clat percentiles (usec): 00:10:41.991 | 1.00th=[ 2376], 5.00th=[ 2573], 10.00th=[ 2769], 20.00th=[ 2966], 00:10:41.991 | 30.00th=[ 3130], 40.00th=[ 3359], 50.00th=[ 3752], 60.00th=[ 4490], 00:10:41.991 | 70.00th=[ 5211], 80.00th=[ 5866], 90.00th=[ 6587], 95.00th=[ 7308], 00:10:41.991 | 99.00th=[ 8586], 99.50th=[ 8979], 99.90th=[ 9634], 99.95th=[10159], 00:10:41.991 | 
99.99th=[11338] 00:10:41.991 bw ( KiB/s): min=54544, max=66432, per=100.00%, avg=59453.33, stdev=6208.28, samples=3 00:10:41.991 iops : min=13636, max=16608, avg=14863.33, stdev=1552.07, samples=3 00:10:41.991 write: IOPS=14.6k, BW=57.2MiB/s (60.0MB/s)(114MiB/2001msec); 0 zone resets 00:10:41.991 slat (nsec): min=4697, max=88922, avg=6929.52, stdev=4082.53 00:10:41.991 clat (usec): min=717, max=11265, avg=4390.43, stdev=1593.39 00:10:41.991 lat (usec): min=722, max=11278, avg=4397.36, stdev=1594.87 00:10:41.991 clat percentiles (usec): 00:10:41.991 | 1.00th=[ 2376], 5.00th=[ 2606], 10.00th=[ 2802], 20.00th=[ 2999], 00:10:41.991 | 30.00th=[ 3163], 40.00th=[ 3392], 50.00th=[ 3818], 60.00th=[ 4555], 00:10:41.991 | 70.00th=[ 5276], 80.00th=[ 5866], 90.00th=[ 6652], 95.00th=[ 7439], 00:10:41.991 | 99.00th=[ 8717], 99.50th=[ 9110], 99.90th=[ 9634], 99.95th=[10159], 00:10:41.991 | 99.99th=[10552] 00:10:41.991 bw ( KiB/s): min=54048, max=66656, per=100.00%, avg=59253.33, stdev=6584.95, samples=3 00:10:41.991 iops : min=13512, max=16664, avg=14813.33, stdev=1646.24, samples=3 00:10:41.991 lat (usec) : 750=0.01%, 1000=0.05% 00:10:41.991 lat (msec) : 2=0.38%, 4=52.88%, 10=46.63%, 20=0.05% 00:10:41.991 cpu : usr=98.45%, sys=0.15%, ctx=5, majf=0, minf=609 00:10:41.991 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:41.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:41.992 issued rwts: total=29225,29293,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.992 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:41.992 00:10:41.992 Run status group 0 (all jobs): 00:10:41.992 READ: bw=57.1MiB/s (59.8MB/s), 57.1MiB/s-57.1MiB/s (59.8MB/s-59.8MB/s), io=114MiB (120MB), run=2001-2001msec 00:10:41.992 WRITE: bw=57.2MiB/s (60.0MB/s), 57.2MiB/s-57.2MiB/s (60.0MB/s-60.0MB/s), io=114MiB (120MB), run=2001-2001msec 00:10:41.992 ----------------------------------------------------- 00:10:41.992 Suppressions used: 00:10:41.992 count bytes template 00:10:41.992 1 32 /usr/src/fio/parse.c 00:10:41.992 1 8 libtcmalloc_minimal.so 00:10:41.992 ----------------------------------------------------- 00:10:41.992 00:10:41.992 13:14:55 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:41.992 13:14:55 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:41.992 13:14:55 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:41.992 13:14:55 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:41.992 13:14:56 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:41.992 13:14:56 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:41.992 13:14:56 -- nvme/nvme.sh@41 -- # bs=4096 00:10:41.992 13:14:56 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:41.992 13:14:56 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:41.992 13:14:56 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:41.992 13:14:56 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:41.992 13:14:56 -- common/autotest_common.sh@1328 -- # local 
sanitizers 00:10:41.992 13:14:56 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:41.992 13:14:56 -- common/autotest_common.sh@1330 -- # shift 00:10:41.992 13:14:56 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:41.992 13:14:56 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:41.992 13:14:56 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:41.992 13:14:56 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:41.992 13:14:56 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:41.992 13:14:56 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:41.992 13:14:56 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:41.992 13:14:56 -- common/autotest_common.sh@1336 -- # break 00:10:41.992 13:14:56 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:41.992 13:14:56 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:41.992 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:41.992 fio-3.35 00:10:41.992 Starting 1 thread 00:10:50.140 00:10:50.141 test: (groupid=0, jobs=1): err= 0: pid=64978: Mon Dec 16 13:15:03 2024 00:10:50.141 read: IOPS=15.8k, BW=61.7MiB/s (64.7MB/s)(123MiB/2001msec) 00:10:50.141 slat (nsec): min=4803, max=97508, avg=6663.35, stdev=3915.66 00:10:50.141 clat (usec): min=609, max=12269, avg=4015.55, stdev=1381.62 00:10:50.141 lat (usec): min=614, max=12319, avg=4022.21, stdev=1383.04 00:10:50.141 clat percentiles (usec): 00:10:50.141 | 1.00th=[ 2114], 5.00th=[ 2638], 10.00th=[ 2802], 20.00th=[ 2999], 00:10:50.141 | 30.00th=[ 3130], 40.00th=[ 3261], 50.00th=[ 3458], 60.00th=[ 3752], 00:10:50.141 | 70.00th=[ 4424], 80.00th=[ 5211], 90.00th=[ 6128], 95.00th=[ 6783], 00:10:50.141 | 99.00th=[ 8225], 99.50th=[ 8717], 99.90th=[ 9765], 99.95th=[10421], 00:10:50.141 | 99.99th=[12125] 00:10:50.141 bw ( KiB/s): min=59544, max=65984, per=100.00%, avg=63704.00, stdev=3608.21, samples=3 00:10:50.141 iops : min=14886, max=16496, avg=15926.00, stdev=902.05, samples=3 00:10:50.141 write: IOPS=15.8k, BW=61.8MiB/s (64.8MB/s)(124MiB/2001msec); 0 zone resets 00:10:50.141 slat (nsec): min=4919, max=91412, avg=6812.73, stdev=3878.22 00:10:50.141 clat (usec): min=617, max=12166, avg=4057.78, stdev=1387.10 00:10:50.141 lat (usec): min=623, max=12175, avg=4064.59, stdev=1388.51 00:10:50.141 clat percentiles (usec): 00:10:50.141 | 1.00th=[ 2147], 5.00th=[ 2671], 10.00th=[ 2835], 20.00th=[ 3032], 00:10:50.141 | 30.00th=[ 3163], 40.00th=[ 3294], 50.00th=[ 3490], 60.00th=[ 3818], 00:10:50.141 | 70.00th=[ 4490], 80.00th=[ 5211], 90.00th=[ 6194], 95.00th=[ 6849], 00:10:50.141 | 99.00th=[ 8225], 99.50th=[ 8717], 99.90th=[ 9896], 99.95th=[10421], 00:10:50.141 | 99.99th=[11994] 00:10:50.141 bw ( KiB/s): min=59840, max=65256, per=100.00%, avg=63424.00, stdev=3104.09, samples=3 00:10:50.141 iops : min=14960, max=16314, avg=15856.00, stdev=776.02, samples=3 00:10:50.141 lat (usec) : 750=0.02%, 1000=0.01% 00:10:50.141 lat (msec) : 2=0.68%, 4=63.18%, 10=36.04%, 20=0.08% 00:10:50.141 cpu : usr=98.50%, sys=0.15%, ctx=3, majf=0, minf=606 00:10:50.141 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:50.141 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:50.141 issued rwts: total=31601,31639,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.141 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:50.141 00:10:50.141 Run status group 0 (all jobs): 00:10:50.141 READ: bw=61.7MiB/s (64.7MB/s), 61.7MiB/s-61.7MiB/s (64.7MB/s-64.7MB/s), io=123MiB (129MB), run=2001-2001msec 00:10:50.141 WRITE: bw=61.8MiB/s (64.8MB/s), 61.8MiB/s-61.8MiB/s (64.8MB/s-64.8MB/s), io=124MiB (130MB), run=2001-2001msec 00:10:50.141 ----------------------------------------------------- 00:10:50.141 Suppressions used: 00:10:50.141 count bytes template 00:10:50.141 1 32 /usr/src/fio/parse.c 00:10:50.141 1 8 libtcmalloc_minimal.so 00:10:50.141 ----------------------------------------------------- 00:10:50.141 00:10:50.141 ************************************ 00:10:50.141 END TEST nvme_fio 00:10:50.141 ************************************ 00:10:50.141 13:15:03 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:50.141 13:15:03 -- nvme/nvme.sh@46 -- # true 00:10:50.141 00:10:50.141 real 0m25.365s 00:10:50.141 user 0m17.053s 00:10:50.141 sys 0m13.401s 00:10:50.141 13:15:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:50.141 13:15:03 -- common/autotest_common.sh@10 -- # set +x 00:10:50.141 ************************************ 00:10:50.141 END TEST nvme 00:10:50.141 ************************************ 00:10:50.141 00:10:50.141 real 1m39.214s 00:10:50.141 user 3m42.602s 00:10:50.141 sys 0m23.761s 00:10:50.141 13:15:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:50.141 13:15:03 -- common/autotest_common.sh@10 -- # set +x 00:10:50.141 13:15:03 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]] 00:10:50.141 13:15:03 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:50.141 13:15:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:50.141 13:15:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:50.141 13:15:03 -- common/autotest_common.sh@10 -- # set +x 00:10:50.141 ************************************ 00:10:50.141 START TEST nvme_scc 00:10:50.141 ************************************ 00:10:50.141 13:15:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:50.141 * Looking for test storage... 
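The three fio runs that just completed (traddr 0000.00.07.0, 0000.00.08.0, 0000.00.09.0) all follow the same pattern from nvme.sh: identify the controller with spdk_nvme_identify, grep the namespace report for 'Extended Data LBA' to pick --bs=4096, then hand off to fio through the SPDK ioengine. Because the plugin is built with ASAN while the fio binary itself is not, the fio_plugin helper in common/autotest_common.sh walks the plugin's ldd output to find the sanitizer runtime and preloads it ahead of the plugin, exactly as the xtrace above shows. A minimal sketch of that preload step, with paths taken from the trace (an illustration of the mechanism, not the library code verbatim):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    asan_lib=""
    for sanitizer in libasan libclang_rt.asan; do
      # third ldd column is the resolved path, e.g. /usr/lib64/libasan.so.8
      asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
      [[ -n "$asan_lib" ]] && break
    done
    # ASAN must come first in LD_PRELOAD so its interceptors are installed
    # before the plugin's instrumented code runs inside un-instrumented fio
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096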
00:10:50.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:50.141 13:15:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:50.141 13:15:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:50.141 13:15:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:50.141 13:15:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:50.141 13:15:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:50.141 13:15:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:50.141 13:15:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:50.141 13:15:03 -- scripts/common.sh@335 -- # IFS=.-: 00:10:50.141 13:15:03 -- scripts/common.sh@335 -- # read -ra ver1 00:10:50.141 13:15:03 -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.141 13:15:03 -- scripts/common.sh@336 -- # read -ra ver2 00:10:50.141 13:15:03 -- scripts/common.sh@337 -- # local 'op=<' 00:10:50.141 13:15:03 -- scripts/common.sh@339 -- # ver1_l=2 00:10:50.141 13:15:03 -- scripts/common.sh@340 -- # ver2_l=1 00:10:50.141 13:15:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:50.141 13:15:03 -- scripts/common.sh@343 -- # case "$op" in 00:10:50.141 13:15:03 -- scripts/common.sh@344 -- # : 1 00:10:50.141 13:15:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:50.141 13:15:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:50.141 13:15:03 -- scripts/common.sh@364 -- # decimal 1 00:10:50.141 13:15:03 -- scripts/common.sh@352 -- # local d=1 00:10:50.141 13:15:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.141 13:15:03 -- scripts/common.sh@354 -- # echo 1 00:10:50.141 13:15:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:50.141 13:15:03 -- scripts/common.sh@365 -- # decimal 2 00:10:50.141 13:15:03 -- scripts/common.sh@352 -- # local d=2 00:10:50.141 13:15:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.141 13:15:03 -- scripts/common.sh@354 -- # echo 2 00:10:50.141 13:15:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:50.141 13:15:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:50.141 13:15:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:50.141 13:15:03 -- scripts/common.sh@367 -- # return 0 00:10:50.141 13:15:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.141 13:15:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:50.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.141 --rc genhtml_branch_coverage=1 00:10:50.141 --rc genhtml_function_coverage=1 00:10:50.141 --rc genhtml_legend=1 00:10:50.141 --rc geninfo_all_blocks=1 00:10:50.141 --rc geninfo_unexecuted_blocks=1 00:10:50.141 00:10:50.141 ' 00:10:50.141 13:15:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:50.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.141 --rc genhtml_branch_coverage=1 00:10:50.141 --rc genhtml_function_coverage=1 00:10:50.141 --rc genhtml_legend=1 00:10:50.141 --rc geninfo_all_blocks=1 00:10:50.141 --rc geninfo_unexecuted_blocks=1 00:10:50.141 00:10:50.141 ' 00:10:50.141 13:15:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:50.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.141 --rc genhtml_branch_coverage=1 00:10:50.141 --rc genhtml_function_coverage=1 00:10:50.141 --rc genhtml_legend=1 00:10:50.141 --rc geninfo_all_blocks=1 00:10:50.141 --rc geninfo_unexecuted_blocks=1 00:10:50.141 00:10:50.141 ' 00:10:50.141 13:15:03 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:50.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.141 --rc genhtml_branch_coverage=1 00:10:50.141 --rc genhtml_function_coverage=1 00:10:50.141 --rc genhtml_legend=1 00:10:50.141 --rc geninfo_all_blocks=1 00:10:50.141 --rc geninfo_unexecuted_blocks=1 00:10:50.141 00:10:50.141 ' 00:10:50.141 13:15:03 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:50.141 13:15:03 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:50.141 13:15:03 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:50.141 13:15:03 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:50.141 13:15:03 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:50.141 13:15:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.141 13:15:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.141 13:15:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.141 13:15:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.141 13:15:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.141 13:15:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.141 13:15:03 -- paths/export.sh@5 -- # export PATH 00:10:50.141 13:15:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.141 13:15:03 -- nvme/functions.sh@10 -- # ctrls=() 00:10:50.141 13:15:03 -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:50.141 13:15:03 -- nvme/functions.sh@11 -- # nvmes=() 00:10:50.141 13:15:03 -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:50.142 13:15:03 -- nvme/functions.sh@12 -- # bdfs=() 00:10:50.142 13:15:03 -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:50.142 13:15:03 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:50.142 13:15:03 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:50.142 
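The cmp_versions trace above is scripts/common.sh checking 'lt 1.15 2', i.e. whether the installed lcov predates 2.0, before exporting the branch/function-coverage LCOV_OPTS. It splits each version string on '.', '-' and ':' into arrays and compares component by component, padding the shorter array with zeros. A rough equivalent of that comparison loop, simplified from the trace (the real script's decimal helper also validates that each component is numeric):

    ver1=(1 15); ver2=(2)
    lt=0
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
      a=${ver1[v]:-0}; b=${ver2[v]:-0}
      (( a > b )) && { lt=0; break; }   # ver1 newer: less-than fails
      (( a < b )) && { lt=1; break; }   # ver1 older: less-than holds
    done
    echo $lt   # prints 1: lcov 1.15 is older than 2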
13:15:03 -- nvme/functions.sh@14 -- # nvme_name= 00:10:50.142 13:15:03 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:50.142 13:15:03 -- nvme/nvme_scc.sh@12 -- # uname 00:10:50.142 13:15:03 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:50.142 13:15:03 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:50.142 13:15:03 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:50.142 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:50.142 Waiting for block devices as requested 00:10:50.142 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:10:50.142 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:10:50.142 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:10:50.142 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:10:55.472 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:10:55.472 13:15:09 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:55.472 13:15:09 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:55.472 13:15:09 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:55.472 13:15:09 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:55.472 13:15:09 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:10:55.472 13:15:09 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:10:55.472 13:15:09 -- scripts/common.sh@15 -- # local i 00:10:55.472 13:15:09 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:10:55.472 13:15:09 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:55.472 13:15:09 -- scripts/common.sh@24 -- # return 0 00:10:55.472 13:15:09 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:55.472 13:15:09 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:55.472 13:15:09 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:55.472 13:15:09 -- nvme/functions.sh@18 -- # shift 00:10:55.472 13:15:09 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:55.472 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.472 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.472 13:15:09 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:55.472 13:15:09 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:55.472 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.472 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.472 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:55.472 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:55.472 13:15:09 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:55.472 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.472 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.472 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:55.472 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:55.472 13:15:09 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:55.472 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 
00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 
13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:55.473 13:15:09 -- 
nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:55.473 13:15:09 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.473 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.473 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:55.474 13:15:09 
-- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.474 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:55.474 13:15:09 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.474 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:55.475 
13:15:09 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.475 13:15:09 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:55.475 13:15:09 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.475 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 
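The long register dump above is functions.sh's nvme_get at work: it runs '/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0', splits each 'field : value' line on ':' into reg and val, and evals the pair into a global associative array, so later tests can consult e.g. ${nvme0[oncs]} or ${nvme0[mdts]} without re-running nvme-cli. The bookkeeping records that follow then bind the controller into the ctrls/nvmes/bdfs maps (nvme0 -> 0000:00:09.0). A condensed sketch of the parse loop, assuming simple single-token values (the real function also shifts its arguments and preserves quoted multi-word values such as the model string):

    declare -gA nvme0=()
    while IFS=: read -r reg val; do
      # field names come padded, e.g. 'mdts    '; strip the whitespace
      reg=${reg//[[:space:]]/}
      [[ -n $reg && -n $val ]] && eval "nvme0[$reg]=\"${val# }\""
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "${nvme0[mdts]}"   # e.g. 7, the max data transfer size exponent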
00:10:55.475 13:15:09 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:55.475 13:15:09 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:55.475 13:15:09 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:55.475 13:15:09 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:10:55.475 13:15:09 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:55.475 13:15:09 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:55.475 13:15:09 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:55.475 13:15:09 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:10:55.475 13:15:09 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:10:55.475 13:15:09 -- scripts/common.sh@15 -- # local i 00:10:55.475 13:15:09 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:10:55.475 13:15:09 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:55.475 13:15:09 -- scripts/common.sh@24 -- # return 0 00:10:55.476 13:15:09 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:55.476 13:15:09 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:55.476 13:15:09 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@18 -- # shift 00:10:55.476 13:15:09 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:55.476 13:15:09 -- 
nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:55.476 13:15:09 -- 
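[Annotation] The oacs=0x12a stored above is a bitmask of optional admin commands; decoded with the bit positions from the NVMe base specification's OACS field it advertises Format NVM, Namespace Management, Directives, and Doorbell Buffer Config, a plausible set for a QEMU controller.

    # Decoding oacs=0x12a (bit positions per the NVMe base spec).
    oacs=0x12a
    (( oacs & 1<<1 )) && echo "Format NVM"              # bit 1 set
    (( oacs & 1<<3 )) && echo "Namespace Management"    # bit 3 set
    (( oacs & 1<<5 )) && echo "Directives"              # bit 5 set
    (( oacs & 1<<8 )) && echo "Doorbell Buffer Config"  # bit 8 set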
nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:55.476 13:15:09 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.476 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.476 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- 
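[Annotation] The wctemp=343 and cctemp=373 recorded above are in kelvins, the unit NVMe uses for all temperature thresholds; converted, the warning and critical composite-temperature thresholds are 70 C and 100 C.

    # Converting the captured thresholds from kelvins to Celsius.
    wctemp=343 cctemp=373
    echo "warning  threshold: $(( wctemp - 273 )) C"    # -> 70 C
    echo "critical threshold: $(( cctemp - 273 )) C"    # -> 100 C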
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 
00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.477 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:55.477 13:15:09 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.477 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # 
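[Annotation] In the sqes=0x66 and cqes=0x44 fields above, the low nibble encodes the required queue-entry size and the high nibble the maximum, both as powers of two, so this controller uses fixed 64-byte submission-queue and 16-byte completion-queue entries.

    # Decoding the sqes/cqes nibbles into entry sizes.
    sqes=0x66 cqes=0x44
    echo "SQ entry: $(( 1 << (sqes & 0xf) ))-$(( 1 << (sqes >> 4) )) bytes"  # 64-64
    echo "CQ entry: $(( 1 << (cqes & 0xf) ))-$(( 1 << (cqes >> 4) )) bytes"  # 16-16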
nvme1[awupf]=0 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.478 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.478 13:15:09 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:55.478 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:55.479 13:15:09 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:55.479 13:15:09 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:55.479 13:15:09 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:55.479 13:15:09 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@18 -- # shift 00:10:55.479 13:15:09 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 
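[Annotation] Having finished id-ctrl, the trace now walks the controller's namespaces: it globs the per-controller sysfs entries, runs the same nvme_get helper with id-ns on each, and records the result in a nameref'd map keyed by namespace number (functions.sh@53-58 above). A sketch of that loop, reconstructed from the trace; the real loop may differ in details not logged.

    # Namespace walk as suggested by functions.sh@53-58 in the trace.
    ctrl=/sys/class/nvme/nvme1
    declare -n _ctrl_ns=nvme1_ns          # nameref to this ctrl's ns map
    for ns in "$ctrl/${ctrl##*/}n"*; do   # nvme1n1, nvme1n2, nvme1n3, ...
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                  # e.g. nvme1n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev       # key "1" -> nvme1n1
    done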
-- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # 
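[Annotation] The nsfeat=0x14 captured above for nvme1n1 sets bits 2 and 4 of the namespace-features mask; per the NVMe base specification's NSFEAT field those advertise deallocated/unwritten-block error reporting and valid optimal I/O size fields (npwg/npwa/npdg/npda/nows).

    # Decoding nsfeat=0x14 (bit meanings per the NVMe base spec).
    nsfeat=0x14
    (( nsfeat & 1<<2 )) && echo "deallocated/unwritten block error support"
    (( nsfeat & 1<<4 )) && echo "optimal I/O size fields valid"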
nvme1n1[nmic]=0 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.479 13:15:09 -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:55.479 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.479 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.479 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 
00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:55.480 13:15:09 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:55.480 13:15:09 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:10:55.480 13:15:09 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:10:55.480 13:15:09 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@18 -- # shift 00:10:55.480 13:15:09 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:55.480 13:15:09 -- 
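[Annotation] For nvme1n1 parsed above, the low nibble of flbas=0x4 selects LBA format 4, the entry flagged "(in use)" with ms:0 lbads:12, i.e. 4096-byte data blocks and no metadata, and nsze=0x100000 blocks fixes the namespace size; the same decode applies to the sibling namespaces that follow.

    # Worked example: geometry of nvme1n1 from the fields above.
    nsze=0x100000 flbas=0x4 lbads=12
    echo "active LBA format: lbaf$(( flbas & 0xf ))"     # -> lbaf4
    echo "block size: $(( 1 << lbads )) bytes"           # -> 4096
    echo "capacity:   $(( (nsze << lbads) >> 30 )) GiB"  # -> 4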
nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.480 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:10:55.480 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:10:55.480 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:10:55.481 13:15:09 -- 
nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.481 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.481 13:15:09 -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:10:55.481 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.482 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.482 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.482 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.482 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.482 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.482 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.482 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.482 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.482 13:15:09 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 
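[Annotation] With nvme1n2 registered, the scan repeats once more for nvme1n3 below; afterwards the caches built throughout this trace (ctrls, nvmes, bdfs, plus one array per controller and namespace) are what later test stages query instead of re-running nvme-cli. A sketch of that consumption side, illustrative only; these lookups are not a specific functions.sh helper.

    # Illustrative walk over the caches populated by the scan above.
    for ctrl_dev in "${!ctrls[@]}"; do            # nvme0, nvme1, ...
        unset -n ctrl_ref ns_map                  # drop stale namerefs
        declare -n ctrl_ref=${ctrls[$ctrl_dev]}   # id-ctrl fields
        declare -n ns_map=${nvmes[$ctrl_dev]}     # ns-number -> ns-device
        echo "$ctrl_dev @ ${bdfs[$ctrl_dev]}:" \
             "model='${ctrl_ref[mn]}' namespaces=${#ns_map[@]}"
    done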
00:10:55.482 13:15:09 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:55.482 13:15:09 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:10:55.482 13:15:09 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:10:55.482 13:15:09 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:10:55.482 13:15:09 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:10:55.482 13:15:09 -- nvme/functions.sh@18 -- # shift 00:10:55.482 13:15:09 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.482 13:15:09 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:10:55.482 13:15:09 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.482 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.482 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.482 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.482 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.482 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.482 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.482 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.482 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.482 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 
00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0
00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0
00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0
00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0
00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1
00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0
00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0
00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0
00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0
00:10:55.482 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:55.483 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:10:55.484 13:15:09 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3
00:10:55.484 13:15:09 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:10:55.484 13:15:09 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:10:55.484 13:15:09 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0
00:10:55.484 13:15:09 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:10:55.484 13:15:09 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:55.484 13:15:09 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:10:55.484 13:15:09 -- nvme/functions.sh@49 -- # pci=0000:00:06.0
00:10:55.484 13:15:09 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0
00:10:55.484 13:15:09 -- scripts/common.sh@15 -- # local i
00:10:55.484 13:15:09 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]]
00:10:55.484 13:15:09 -- scripts/common.sh@22 -- # [[ -z '' ]]
00:10:55.484 13:15:09 -- scripts/common.sh@24 -- # return 0
00:10:55.484 13:15:09 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
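At this point nvme1 and its namespaces are registered and the @47 loop has moved on to nvme2 (PCI 0000:00:06.0). The walk being traced has this shape, reconstructed from the functions.sh@47-63 call sites; how $pci is derived at @49 is not visible in the log, so the readlink below is an assumption, and ctrls/nvmes/bdfs/ordered_ctrls are arrays declared elsewhere:

for ctrl in /sys/class/nvme/nvme*; do               # functions.sh@47
    [[ -e $ctrl ]] || continue                      # @48
    pci=$(basename "$(readlink -f "$ctrl/device")") # assumed; @49 shows e.g. pci=0000:00:06.0
    pci_can_use "$pci" || continue                  # @50, filter in scripts/common.sh
    ctrl_dev=${ctrl##*/}                            # @51
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"   # @52
    local -n _ctrl_ns=${ctrl_dev}_ns                # @53 (the real code runs inside a function)
    for ns in "$ctrl/${ctrl##*/}n"*; do             # @54
        [[ -e $ns ]] || continue                    # @55
        ns_dev=${ns##*/}                            # @56
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"     # @57
        _ctrl_ns[${ns##*n}]=$ns_dev                 # @58, keyed by namespace number
    done
    ctrls["$ctrl_dev"]=$ctrl_dev                    # @60
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns               # @61
    bdfs["$ctrl_dev"]=$pci                          # @62
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev      # @63
done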
00:10:55.484 13:15:09 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:10:55.484 13:15:09 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:10:55.484 13:15:09 -- nvme/functions.sh@18 -- # shift
00:10:55.484 13:15:09 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:10:55.484 13:15:09 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 '
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:10:55.484 13:15:09 -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[edstt]=0
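The wctemp=343 / cctemp=373 pair above is the controller's warning and critical temperature thresholds, which id-ctrl reports in kelvins per the NVMe spec:

echo "warning $((343 - 273)) C, critical $((373 - 273)) C"   # ~70 C and ~100 C (273 K offset, ignoring the 0.15)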
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:10:55.485 13:15:09 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:10:55.486 13:15:09 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
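nvme2 reported mdts=7 earlier in this dump, which caps any single transfer at 2^7 minimum-size memory pages. CAP.MPSMIN never appears in this log, so the usual 4 KiB minimum page is assumed in the sketch below:

mdts=7 mps_min=4096                                           # mps_min assumed; CAP.MPSMIN is not in this log
echo "$(( (1 << mdts) * mps_min / 1024 )) KiB max transfer"   # 512 KiB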
00:10:55.487 13:15:09 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:10:55.487 13:15:09 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:55.487 13:15:09 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:10:55.487 13:15:09 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:10:55.487 13:15:09 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:10:55.487 13:15:09 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:10:55.487 13:15:09 -- nvme/functions.sh@18 -- # shift
00:10:55.487 13:15:09 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:10:55.487 13:15:09 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:10:55.487 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:10:55.488 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:10:55.488 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:10:55.488 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:10:55.488 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:10:55.488 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:10:55.488 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:10:55.488 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:10:55.488 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:10:55.488 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:10:55.488 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:10:55.488 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:10:55.488 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:10:55.488 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:10:55.488 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:10:55.488 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:55.488 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:55.488 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:55.488 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:55.488 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:10:55.488 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:55.488 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:55.488 13:15:09 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:10:55.488 13:15:09 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
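Unlike the nvme1 namespaces (lbaf4 in use, no metadata), nvme2n1 reports flbas=0x7 and its lbaf7 entry is the one marked (in use): 4096-byte data blocks with 64 bytes of metadata. A throwaway helper, not part of functions.sh, to decode the lbaf strings captured above:

decode_lbaf() {
    local s=$1 f
    for f in $s; do                                # word-split 'ms:64 lbads:12 rp:0 (in use)'
        case $f in
            ms:*)    echo "metadata ${f#ms:} bytes" ;;
            lbads:*) echo "data $(( 1 << ${f#lbads:} )) bytes" ;;   # lbads is log2 of the block size
            rp:*)    echo "relative performance ${f#rp:}" ;;
        esac
    done
}
decode_lbaf 'ms:64 lbads:12 rp:0 (in use)'         # -> metadata 64 / data 4096 / rp 0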
nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:55.488 13:15:09 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:10:55.488 13:15:09 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:55.488 13:15:09 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:55.488 13:15:09 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:55.488 13:15:09 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:10:55.488 13:15:09 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:10:55.488 13:15:09 -- scripts/common.sh@15 -- # local i 00:10:55.488 13:15:09 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:10:55.488 13:15:09 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:55.488 13:15:09 -- scripts/common.sh@24 -- # return 0 00:10:55.488 13:15:09 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:55.488 13:15:09 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:55.488 13:15:09 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:55.488 13:15:09 -- nvme/functions.sh@18 -- # shift 00:10:55.488 13:15:09 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 
-- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- 
# nvme3[cntrltype]=1 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:55.489 13:15:09 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.489 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.489 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:55.490 13:15:09 -- 
nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.490 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.490 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.490 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 
00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
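The trace above is the nvme_get helper (test/common/nvme/functions.sh) folding each "reg : val" line of nvme id-ctrl /dev/nvme3 into the bash associative array nvme3, one eval per register. A minimal standalone sketch of the same pattern; the array name and device path here are illustrative, not the exact upstream code:

    declare -A ctrl
    # Split each "reg : val" line on the colon, mirroring the
    # IFS=: / read -r reg val / eval sequence in the trace.
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}   # strip padding around the field name
        [[ -n $reg && -n $val ]] || continue
        ctrl[$reg]=${val# }        # e.g. ctrl[oncs]=0x15d, ctrl[mdts]=7
    done < <(nvme id-ctrl /dev/nvme3)
    echo "oncs=${ctrl[oncs]} mdts=${ctrl[mdts]}"

Note that trailing padding inside values is kept, just as in the trace, where nvme3[sn]='12341 ' and nvme3[fr]='8.0.0 '.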
00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # 
IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:55.491 13:15:09 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.491 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.491 13:15:09 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:55.492 13:15:09 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:55.492 13:15:09 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:10:55.492 13:15:09 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:10:55.492 13:15:09 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@18 -- # shift 00:10:55.492 13:15:09 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:55.492 
13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:10:55.492 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.492 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.492 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.493 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.493 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.493 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.493 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # 
nvme3n1[nsattr]=0 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.493 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.493 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.493 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.493 13:15:09 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.493 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.493 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.493 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.493 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.493 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:55.493 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.493 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.493 13:15:09 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:55.493 13:15:09 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # IFS=: 00:10:55.493 13:15:09 -- nvme/functions.sh@21 -- # read -r reg val 00:10:55.493 13:15:09 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:10:55.493 13:15:09 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:55.493 13:15:09 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:55.493 13:15:09 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:10:55.493 13:15:09 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:55.493 13:15:09 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:55.493 13:15:09 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:55.493 13:15:09 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:10:55.493 13:15:09 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:55.493 13:15:09 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:10:55.493 13:15:09 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:10:55.493 13:15:09 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:10:55.493 13:15:09 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:10:55.493 13:15:09 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:10:55.493 13:15:09 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:55.493 13:15:09 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:10:55.493 13:15:09 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:10:55.493 13:15:09 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:10:55.493 13:15:09 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:10:55.493 13:15:10 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:55.493 13:15:10 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:55.493 13:15:10 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:55.493 13:15:10 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:55.493 13:15:10 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:55.493 13:15:10 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:55.493 13:15:10 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:55.493 13:15:10 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:55.493 13:15:10 -- nvme/functions.sh@197 -- # echo nvme1 00:10:55.493 13:15:10 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:55.493 13:15:10 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:10:55.493 13:15:10 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:10:55.493 13:15:10 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:10:55.493 
13:15:10 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:10:55.493 13:15:10 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:55.493 13:15:10 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:55.493 13:15:10 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:55.493 13:15:10 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:55.493 13:15:10 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:55.493 13:15:10 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:55.493 13:15:10 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:55.493 13:15:10 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:55.493 13:15:10 -- nvme/functions.sh@197 -- # echo nvme0 00:10:55.493 13:15:10 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:55.493 13:15:10 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:10:55.493 13:15:10 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:10:55.493 13:15:10 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:10:55.493 13:15:10 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:10:55.493 13:15:10 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:10:55.493 13:15:10 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:55.493 13:15:10 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:55.493 13:15:10 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:55.493 13:15:10 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:55.493 13:15:10 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:55.493 13:15:10 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:55.493 13:15:10 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:55.493 13:15:10 -- nvme/functions.sh@197 -- # echo nvme3 00:10:55.494 13:15:10 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:55.494 13:15:10 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:10:55.494 13:15:10 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:10:55.494 13:15:10 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:10:55.494 13:15:10 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:10:55.494 13:15:10 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:55.494 13:15:10 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:55.494 13:15:10 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:55.494 13:15:10 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:55.494 13:15:10 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:55.494 13:15:10 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:55.494 13:15:10 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:55.494 13:15:10 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:55.494 13:15:10 -- nvme/functions.sh@197 -- # echo nvme2 00:10:55.494 13:15:10 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:10:55.494 13:15:10 -- nvme/functions.sh@206 -- # echo nvme1 00:10:55.494 13:15:10 -- nvme/functions.sh@207 -- # return 0 00:10:55.494 13:15:10 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:55.494 13:15:10 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0 00:10:55.494 13:15:10 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:56.439 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:56.700 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:10:56.700 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:10:56.700 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:10:56.700 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:10:56.700 13:15:11 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:10:56.700 13:15:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:56.700 13:15:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:56.700 13:15:11 -- common/autotest_common.sh@10 -- # set +x 00:10:56.700 ************************************ 00:10:56.700 START TEST nvme_simple_copy 00:10:56.700 ************************************ 00:10:56.700 13:15:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:10:56.962 Initializing NVMe Controllers 00:10:56.962 Attaching to 0000:00:08.0 00:10:56.962 Controller supports SCC. Attached to 0000:00:08.0 00:10:56.962 Namespace ID: 1 size: 4GB 00:10:56.962 Initialization complete. 00:10:56.962 00:10:56.962 Controller QEMU NVMe Ctrl (12342 ) 00:10:56.962 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:56.962 Namespace Block Size:4096 00:10:56.962 Writing LBAs 0 to 63 with Random Data 00:10:56.962 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:56.962 LBAs matching Written Data: 64 00:10:56.962 00:10:56.962 real 0m0.271s 00:10:56.962 user 0m0.102s 00:10:56.962 sys 0m0.066s 00:10:56.962 13:15:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:56.962 ************************************ 00:10:56.962 END TEST nvme_simple_copy 00:10:56.962 ************************************ 00:10:56.962 13:15:11 -- common/autotest_common.sh@10 -- # set +x 00:10:57.223 ************************************ 00:10:57.223 END TEST nvme_scc 00:10:57.223 ************************************ 00:10:57.223 00:10:57.223 real 0m7.939s 00:10:57.223 user 0m1.120s 00:10:57.223 sys 0m1.531s 00:10:57.223 13:15:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:57.223 13:15:11 -- common/autotest_common.sh@10 -- # set +x 00:10:57.223 13:15:11 -- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]] 00:10:57.223 13:15:11 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:57.223 13:15:11 -- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]] 00:10:57.223 13:15:11 -- spdk/autotest.sh@225 -- # [[ 1 -eq 1 ]] 00:10:57.223 13:15:11 -- spdk/autotest.sh@226 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:57.223 13:15:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:57.224 13:15:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:57.224 13:15:11 -- common/autotest_common.sh@10 -- # set +x 00:10:57.224 ************************************ 00:10:57.224 START TEST nvme_fdp 00:10:57.224 ************************************ 00:10:57.224 13:15:11 -- common/autotest_common.sh@1114 -- # test/nvme/nvme_fdp.sh 00:10:57.224 * Looking for test storage... 
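Two details from the run just traced: ctrl_has_scc echoes nvme1 because bit 8 of its ONCS value is set (0x15d & 0x100 != 0), and the in-use LBA formats dumped earlier (e.g. nvme3n1's lbaf4 with lbads:12) encode the block size as 2^lbads, which is where the test's "Namespace Block Size:4096" comes from (2^12 = 4096). A standalone sketch of that capability check, assuming nvme-cli is on PATH and using an illustrative device path:

    # Read ONCS from id-ctrl and test bit 8, the bit ctrl_has_scc
    # treats as Simple Copy command support.
    oncs=$(nvme id-ctrl /dev/nvme1 | awk -F: '/^oncs/ {gsub(/[[:space:]]/, "", $2); print $2}')
    if (( oncs & (1 << 8) )); then
        echo "/dev/nvme1 supports simple copy (oncs=$oncs)"
    fi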
00:10:57.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:57.224 13:15:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:57.224 13:15:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:57.224 13:15:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:57.224 13:15:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:57.224 13:15:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:57.224 13:15:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:57.224 13:15:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:57.224 13:15:11 -- scripts/common.sh@335 -- # IFS=.-: 00:10:57.224 13:15:11 -- scripts/common.sh@335 -- # read -ra ver1 00:10:57.224 13:15:11 -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.224 13:15:11 -- scripts/common.sh@336 -- # read -ra ver2 00:10:57.224 13:15:11 -- scripts/common.sh@337 -- # local 'op=<' 00:10:57.224 13:15:11 -- scripts/common.sh@339 -- # ver1_l=2 00:10:57.224 13:15:11 -- scripts/common.sh@340 -- # ver2_l=1 00:10:57.224 13:15:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:57.224 13:15:11 -- scripts/common.sh@343 -- # case "$op" in 00:10:57.224 13:15:11 -- scripts/common.sh@344 -- # : 1 00:10:57.224 13:15:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:57.224 13:15:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:57.224 13:15:11 -- scripts/common.sh@364 -- # decimal 1 00:10:57.224 13:15:11 -- scripts/common.sh@352 -- # local d=1 00:10:57.224 13:15:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.224 13:15:11 -- scripts/common.sh@354 -- # echo 1 00:10:57.224 13:15:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:57.224 13:15:11 -- scripts/common.sh@365 -- # decimal 2 00:10:57.224 13:15:11 -- scripts/common.sh@352 -- # local d=2 00:10:57.224 13:15:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.224 13:15:11 -- scripts/common.sh@354 -- # echo 2 00:10:57.224 13:15:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:57.224 13:15:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:57.224 13:15:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:57.224 13:15:11 -- scripts/common.sh@367 -- # return 0 00:10:57.224 13:15:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.224 13:15:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:57.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.224 --rc genhtml_branch_coverage=1 00:10:57.224 --rc genhtml_function_coverage=1 00:10:57.224 --rc genhtml_legend=1 00:10:57.224 --rc geninfo_all_blocks=1 00:10:57.224 --rc geninfo_unexecuted_blocks=1 00:10:57.224 00:10:57.224 ' 00:10:57.224 13:15:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:57.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.224 --rc genhtml_branch_coverage=1 00:10:57.224 --rc genhtml_function_coverage=1 00:10:57.224 --rc genhtml_legend=1 00:10:57.224 --rc geninfo_all_blocks=1 00:10:57.224 --rc geninfo_unexecuted_blocks=1 00:10:57.224 00:10:57.224 ' 00:10:57.224 13:15:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:57.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.224 --rc genhtml_branch_coverage=1 00:10:57.224 --rc genhtml_function_coverage=1 00:10:57.224 --rc genhtml_legend=1 00:10:57.224 --rc geninfo_all_blocks=1 00:10:57.224 --rc geninfo_unexecuted_blocks=1 00:10:57.224 00:10:57.224 ' 00:10:57.224 13:15:11 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:57.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.224 --rc genhtml_branch_coverage=1 00:10:57.224 --rc genhtml_function_coverage=1 00:10:57.224 --rc genhtml_legend=1 00:10:57.224 --rc geninfo_all_blocks=1 00:10:57.224 --rc geninfo_unexecuted_blocks=1 00:10:57.224 00:10:57.224 ' 00:10:57.224 13:15:11 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:57.224 13:15:11 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:57.224 13:15:11 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:57.224 13:15:11 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:57.224 13:15:11 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:57.224 13:15:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.224 13:15:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.224 13:15:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.224 13:15:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.224 13:15:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.224 13:15:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.224 13:15:11 -- paths/export.sh@5 -- # export PATH 00:10:57.224 13:15:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.224 13:15:11 -- nvme/functions.sh@10 -- # ctrls=() 00:10:57.224 13:15:11 -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:57.224 13:15:11 -- nvme/functions.sh@11 -- # nvmes=() 00:10:57.224 13:15:11 -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:57.224 13:15:11 -- nvme/functions.sh@12 -- # bdfs=() 00:10:57.224 13:15:11 -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:57.224 13:15:11 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:57.224 13:15:11 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:57.224 
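
[annotation] The version test traced above (lt 1.15 2 via cmp_versions in scripts/common.sh) splits each version string on ".", "-" and ":" and compares the numeric components left to right, padding the shorter list with zeros. A minimal self-contained sketch of that comparison; the function name ver_lt and its internals are illustrative, not the script's own code:

ver_lt() {
    # True (exit 0) when $1 sorts strictly before $2, component-wise.
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v a b len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        # non-numeric components count as 0, like the decimal helper traced above
        [[ $a =~ ^[0-9]+$ ]] || a=0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( 10#$a < 10#$b )) && return 0
        (( 10#$a > 10#$b )) && return 1
    done
    return 1   # equal versions are not less-than
}
ver_lt 1.15 2 && echo "lcov predates 2.0: use the legacy --rc option names"

Because 1.15 sorts before 2 here, the run above exports the pre-2.0 spellings (--rc lcov_branch_coverage=1, --rc lcov_function_coverage=1) into LCOV_OPTS and LCOV.
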
13:15:11 -- nvme/functions.sh@14 -- # nvme_name= 00:10:57.224 13:15:11 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:57.224 13:15:11 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:57.795 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:57.795 Waiting for block devices as requested 00:10:57.795 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:10:57.795 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:10:58.055 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:10:58.055 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:03.359 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:03.359 13:15:17 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:03.359 13:15:17 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:03.359 13:15:17 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:03.359 13:15:17 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:03.359 13:15:17 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:03.359 13:15:17 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:03.359 13:15:17 -- scripts/common.sh@15 -- # local i 00:11:03.359 13:15:17 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:03.359 13:15:17 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:03.359 13:15:17 -- scripts/common.sh@24 -- # return 0 00:11:03.359 13:15:17 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:03.359 13:15:17 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:03.359 13:15:17 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:03.359 13:15:17 -- nvme/functions.sh@18 -- # shift 00:11:03.359 13:15:17 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:03.359 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.359 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.359 13:15:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:03.359 13:15:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.359 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.359 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.359 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:03.359 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:03.359 13:15:17 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:03.359 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.359 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.359 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:03.359 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:03.360 13:15:17 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 
'nvme0[ctratt]="0x88010"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 
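
[annotation] What the long run of eval lines above amounts to: nvme_get pipes the output of /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 through IFS=: read -r reg val and stores every register in a bash associative array keyed by register name. A compressed, runnable sketch of that pattern; the sample input is hard-coded (values taken from this capture) so it runs without hardware, parse_id_ctrl is a hypothetical name, and a direct array assignment stands in for the script's eval:

declare -A nvme0=()
parse_id_ctrl() {
    local reg val
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}           # register name, padding stripped
        val="${val#"${val%%[! ]*}"}"       # trim leading blanks from the value
        [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
    done
}
parse_id_ctrl <<'EOF'
vid       : 0x1b36
ssvid     : 0x1af4
mdts      : 7
ctratt    : 0x88010
oacs      : 0x12a
EOF
echo "parsed ${#nvme0[@]} registers; ctratt=${nvme0[ctratt]}"
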
13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:03.360 
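
[annotation] One readable detail in the block above: wctemp=343 and cctemp=373 are the warning and critical composite-temperature thresholds, which NVMe reports in kelvins. Quick arithmetic check:

# 343 K and 373 K are 70 C and 100 C respectively
for k in 343 373; do printf '%s K = %s C\n' "$k" "$(( k - 273 ))"; done
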
13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.360 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.360 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:03.360 13:15:17 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:03.361 13:15:17 -- 
nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 
13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 
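
[annotation] sqes=0x66 and cqes=0x44 above encode queue-entry sizes as powers of two: per the NVMe spec's encoding, the low nibble is the required entry size and the high nibble the maximum, so this controller uses 64-byte submission entries and 16-byte completion entries. A small decoder; decode_qes and its labels are my names, not anything from the test scripts:

decode_qes() {
    local qes=$1 label=$2
    printf '%s: required %d bytes, max %d bytes\n' "$label" \
        "$(( 1 << (qes & 0xf) ))" "$(( 1 << ((qes >> 4) & 0xf) ))"
}
decode_qes 0x66 sqes   # 64-byte SQEs, min and max
decode_qes 0x44 cqes   # 16-byte CQEs, min and max
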
00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.361 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.361 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:03.361 13:15:17 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # 
nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:03.362 13:15:17 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 
00:11:03.362 13:15:17 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:03.362 13:15:17 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:11:03.362 13:15:17 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:03.362 13:15:17 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:03.362 13:15:17 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:11:03.362 13:15:17 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:11:03.362 13:15:17 -- scripts/common.sh@15 -- # local i 00:11:03.362 13:15:17 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:11:03.362 13:15:17 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:03.362 13:15:17 -- scripts/common.sh@24 -- # return 0 00:11:03.362 13:15:17 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:03.362 13:15:17 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:03.362 13:15:17 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@18 -- # shift 00:11:03.362 13:15:17 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # 
IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:03.362 13:15:17 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.362 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.362 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:03.363 
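
[annotation] The pci_can_use check traced before each controller (the odd-looking "[[ =~ 0000:00:08.0 ]]" is just xtrace printing an empty block-list variable on the left of =~) boils down to: reject a BDF that is on a block list, then accept it if the allow list is empty or contains it. A hedged re-creation under that reading; the variable names PCI_BLOCKED and PCI_ALLOWED and the substring matching are assumptions, not necessarily what scripts/common.sh does:

pci_usable() {
    local bdf=$1
    # a block-list hit wins outright
    [[ " ${PCI_BLOCKED:-} " == *" $bdf "* ]] && return 1
    # an empty allow list admits everything
    [[ -z ${PCI_ALLOWED:-} ]] && return 0
    [[ " $PCI_ALLOWED " == *" $bdf "* ]]
}
PCI_BLOCKED='' PCI_ALLOWED=''
pci_usable 0000:00:08.0 && echo "0000:00:08.0 is usable"
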
13:15:17 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 
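
[annotation] oacs=0x12a shows up for both QEMU controllers. By my reading of the NVMe base spec's Optional Admin Command Support field, that value sets bit 1 (Format NVM), bit 3 (Namespace Management), bit 5 (Directives, the admin path FDP relies on) and bit 8 (Doorbell Buffer Config). A small decoder, with the bit table itself a hedged assumption:

oacs=0x12a
declare -A oacs_bits=(
    [1]="Format NVM"   [3]="Namespace Management"
    [5]="Directives"   [8]="Doorbell Buffer Config"
)
for bit in 1 3 5 8; do
    (( oacs & (1 << bit) )) && echo "oacs bit $bit: ${oacs_bits[$bit]}"
done
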
00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 
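
[annotation] mdts=7 was captured for both controllers. MDTS is a power-of-two multiplier on the controller's minimum memory page size, so with the usual 4 KiB minimum page a single transfer is capped at 512 KiB. Worked out in shell; the 4 KiB page size is an assumption here, the real value comes from CAP.MPSMIN:

mdts=7 mpsmin_bytes=4096   # assuming CAP.MPSMIN = 0, i.e. 4 KiB pages
echo "max single transfer: $(( (1 << mdts) * mpsmin_bytes / 1024 )) KiB"   # 512 KiB
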
00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.363 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:03.363 13:15:17 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 
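
[annotation] Note the contrast with the first controller: nvme0 reported ctratt=0x88010 while nvme1 here reports ctratt=0x8000. By my reading of the spec, CTRATT bit 19 is Flexible Data Placement, so only nvme0 (subnqn nqn.2019-08.org.qemu:fdp-subsys3) is the FDP-capable target this nvme_fdp.sh run exercises; nvme1 (nqn.2019-08.org.qemu:12342) is a plain controller. A one-look check under that bit-number assumption, with the two values taken from this capture:

for pair in nvme0:0x88010 nvme1:0x8000; do
    name=${pair%%:*} ctratt=${pair##*:}
    if (( ctratt & (1 << 19) )); then
        echo "$name: FDP capable"
    else
        echo "$name: FDP not advertised"
    fi
done
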
00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.363 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # 
eval 'nvme1[megcap]="0"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:03.364 13:15:17 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.364 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.364 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:03.365 13:15:17 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:03.365 13:15:17 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:03.365 13:15:17 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:03.365 13:15:17 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:03.365 13:15:17 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@18 -- # shift 00:11:03.365 13:15:17 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ 
-n '' ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:03.365 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.365 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.365 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.366 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.366 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 
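The entries above are the body of nvme_get in nvme/functions.sh: each line of `nvme id-ns` output is split on the first colon into a register name and a value, and every non-empty pair is eval'ed into a bash associative array (nvme1n1 here). A minimal, self-contained sketch of that loop, assuming nvme-cli's "field : value" output layout; parse_id_output and ns_info are illustrative names, and plain assignment stands in for the eval/nameref indirection the real script uses:

declare -A ns_info
parse_id_output() {
  local reg val
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}   # field names are padded, e.g. "nsze  "
    val=${val# }               # values carry one leading space
    [[ -n $reg && -n $val ]] && ns_info[$reg]=$val
  done
}
# Demo with two lines in the id-ns format:
printf 'nsze  : 0x100000\nflbas : 0x4\n' | {
  parse_id_output
  echo "${ns_info[nsze]} ${ns_info[flbas]}"   # -> 0x100000 0x4
}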
00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.366 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.366 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.366 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.366 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.366 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.366 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.366 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.366 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.366 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.366 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.366 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:03.366 13:15:17 -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.366 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.366 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.366 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.366 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.366 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.366 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.366 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.366 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.366 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.366 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:03.366 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:03.382 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.382 13:15:17 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:03.382 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:03.382 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:03.382 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.382 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:03.382 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:03.382 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.382 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:03.382 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:03.382 13:15:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.382 13:15:17 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:03.382 13:15:17 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:03.382 13:15:17 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:11:03.382 13:15:17 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:11:03.382 13:15:17 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:11:03.382 13:15:17 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:11:03.382 13:15:17 -- nvme/functions.sh@18 -- # shift 00:11:03.382 13:15:17 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.382 13:15:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:11:03.382 13:15:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.382 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.382 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:11:03.382 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.382 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.382 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:11:03.382 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.382 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.382 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:11:03.382 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.382 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:03.382 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:11:03.382 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 
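Between namespaces the trace passes through functions.sh@54-@57: a glob over the controller's sysfs directory enumerates its namespaces, and each block node that exists is fed back through nvme_get with id-ns. A short sketch of that enumeration, assuming the usual /sys/class/nvme/nvmeX/nvmeXnY layout; list_ns is an illustrative name:

list_ns() {
  local ctrl=$1 ns
  for ns in "$ctrl/${ctrl##*/}n"*; do   # nvme1 -> nvme1n1, nvme1n2, ...
    [[ -e $ns ]] || continue            # skip the literal pattern when nothing matches
    echo "id-ns target: /dev/${ns##*/}"
  done
}
list_ns /sys/class/nvme/nvme1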
00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.382 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.382 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:11:03.382 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.382 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:03.382 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:11:03.382 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.382 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.382 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:11:03.382 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.382 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:03.382 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:11:03.382 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.382 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:11:03.383 13:15:17 -- 
nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.383 
13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:11:03.383 13:15:17 -- 
nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.383 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.383 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:03.383 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:11:03.384 13:15:17 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:03.384 13:15:17 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 
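Each lbafN value recorded above describes one LBA format: ms is the metadata size in bytes, lbads is log2 of the data block size, and rp is the relative performance hint; flbas=0x4 points at format index 4, the entry tagged "(in use)". Decoding lbads:12 confirms these QEMU namespaces run 4096-byte logical blocks with no separate metadata:

lbads=12
echo $(( 1 << lbads ))   # 4096-byte logical blocks for lbaf4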
00:11:03.384 13:15:17 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:11:03.384 13:15:17 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:11:03.384 13:15:17 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@18 -- # shift 00:11:03.384 13:15:17 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # 
nvme1n3[nulbaf]=0 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.384 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:11:03.384 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.384 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:03.385 13:15:17 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:03.385 13:15:17 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:03.385 13:15:17 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:03.385 13:15:17 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:03.385 13:15:17 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:03.385 13:15:17 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:03.385 13:15:17 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:03.385 13:15:17 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:03.385 13:15:17 -- scripts/common.sh@15 -- # local i 00:11:03.385 13:15:17 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:03.385 13:15:17 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:03.385 13:15:17 -- scripts/common.sh@24 -- # return 0 00:11:03.385 13:15:17 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:03.385 13:15:17 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:03.385 13:15:17 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@18 -- # shift 00:11:03.385 13:15:17 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.385 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.385 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 
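The ver field just read for nvme2 packs the NVMe specification version into one register: major in bits 31:16, minor in bits 15:8, tertiary in bits 7:0. Decoding the captured 0x10400 yields 1.4.0:

ver=0x10400
printf '%d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))   # 1.4.0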
00:11:03.385 13:15:17 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 
13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:03.386 13:15:17 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:03.386 13:15:17 -- 
nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:03.386 13:15:17 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.386 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.386 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 
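Annotation: once a controller and its namespaces are parsed, the script files them into global lookup tables, visible verbatim for nvme1 at the top of this block and repeated per controller: ctrls["$ctrl_dev"], nvmes["$ctrl_dev"] (the name of the per-controller namespace array), bdfs["$ctrl_dev"] (the PCI address), and ordered_ctrls. A hedged sketch of how a test walks that indirection afterwards; the array literals are copied from the nvme2 entries in this trace, the loop itself is illustrative:

  declare -a nvme2_ns=(); nvme2_ns[1]=nvme2n1   # nsid -> namespace dev (functions.sh@58)
  declare -A ctrls=( [nvme2]=nvme2 )
  declare -A nvmes=( [nvme2]=nvme2_ns )
  declare -A bdfs=( [nvme2]=0000:00:06.0 )
  for c in "${!ctrls[@]}"; do
      unset -n ns_table                         # avoid nameref rebinding gotchas in a loop
      declare -n ns_table=${nvmes[$c]}          # follow the indirection to nvme2_ns
      echo "$c @ ${bdfs[$c]}: ${ns_table[*]}"
  done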
00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 
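Annotation: the oncs value captured at the top of this block (0x15d on these QEMU controllers) is the Optional NVM Command Support bitmask from the NVMe base spec: bit 0 Compare, bit 1 Write Uncorrectable, bit 2 Dataset Management, bit 3 Write Zeroes, bit 4 the Save field in Set/Get Features, bit 5 Reservations, bit 6 Timestamp, bit 8 Copy, so 0x15d sets bits 0, 2, 3, 4, 6, and 8. A one-line feature probe in the same idiom (bit layout quoted from the spec from memory; verify against the current revision):

  oncs=0x15d
  (( oncs & (1 << 2) )) && echo "Dataset Management (deallocate/TRIM) supported"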
00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.387 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:03.387 13:15:17 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.387 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:03.388 13:15:17 
-- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:03.388 13:15:17 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:03.388 13:15:17 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:03.388 13:15:17 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:03.388 13:15:17 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@18 -- # shift 00:11:03.388 13:15:17 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 
13:15:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:03.388 13:15:17 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.388 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:03.388 
13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.388 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.388 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:03.389 13:15:17 
-- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:03.389 13:15:17 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:03.389 13:15:17 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:03.389 13:15:17 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:11:03.389 13:15:17 
-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:03.389 13:15:17 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:03.389 13:15:17 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:03.389 13:15:17 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:03.389 13:15:17 -- scripts/common.sh@15 -- # local i 00:11:03.389 13:15:17 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:03.389 13:15:17 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:03.389 13:15:17 -- scripts/common.sh@24 -- # return 0 00:11:03.389 13:15:17 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:03.389 13:15:17 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:03.389 13:15:17 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@18 -- # shift 00:11:03.389 13:15:17 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:03.389 13:15:17 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.389 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.389 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 
-- # eval 'nvme3[ieee]="525400"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 
13:15:17 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # 
nvme3[frmw]=0x3 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 
'nvme3[tnvmcap]="0"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.390 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:03.390 13:15:17 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.390 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
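Annotation: earlier in this trace the namespace pass (nvme_get nvme2n1 id-ns /dev/nvme2n1) captured flbas=0x7 plus the eight lbaf0..lbaf7 descriptor strings, with lbaf7 marked "(in use)". The logical block size falls out of the selected descriptor's lbads field; a hedged sketch of that decode, where the array literal is copied from the trace and only the parameter-expansion plumbing is mine (the low-nibble select is valid here since nlbaf=7):

  declare -A nvme2n1=( [flbas]=0x7 [lbaf7]='ms:64 lbads:12 rp:0 (in use)' )
  fmt=$(( nvme2n1[flbas] & 0xf ))             # low nibble selects the active LBA format
  desc=${nvme2n1[lbaf$fmt]}
  lbads=${desc#*lbads:}; lbads=${lbads%% *}   # lbads is a power of two
  ms=${desc#ms:};        ms=${ms%% *}
  echo "nvme2n1: $(( 1 << lbads ))-byte blocks, $ms bytes metadata"   # 4096-byte blocks, 64 bytes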
00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:03.391 
13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.391 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:03.391 13:15:17 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:03.391 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.392 13:15:17 -- 
nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:03.392 13:15:17 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:03.392 13:15:17 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:11:03.392 13:15:17 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:11:03.392 13:15:17 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@18 -- # shift 00:11:03.392 13:15:17 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ 
-n 0x140000 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:03.392 
13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.392 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[npwg]="0"' 00:11:03.392 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:03.392 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.393 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.393 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.393 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.393 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.393 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.393 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.393 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.393 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.393 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.393 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.393 
13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.393 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.393 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.393 13:15:17 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.393 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.393 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.393 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.393 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.393 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.393 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.393 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.393 13:15:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:03.393 13:15:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.393 13:15:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.393 13:15:17 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:03.393 13:15:17 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:03.393 13:15:17 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:03.393 13:15:17 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:03.393 13:15:17 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:03.393 13:15:17 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:03.393 13:15:17 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:03.393 13:15:17 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:11:03.393 13:15:17 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:03.393 13:15:17 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:11:03.393 13:15:17 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:03.393 13:15:17 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:11:03.393 13:15:17 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:11:03.393 13:15:17 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:03.393 13:15:17 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:11:03.393 13:15:17 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:11:03.393 13:15:17 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:11:03.393 13:15:17 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:11:03.393 13:15:17 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:03.393 13:15:17 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:03.393 13:15:17 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:03.393 13:15:17 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:03.393 13:15:17 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:03.393 13:15:17 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:03.393 13:15:17 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:03.393 13:15:17 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:11:03.393 13:15:17 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:11:03.393 13:15:17 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:11:03.393 13:15:17 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:11:03.393 13:15:17 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:03.393 13:15:17 -- 
nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:03.393 13:15:17 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:03.393 13:15:17 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@76 -- # echo 0x88010 00:11:03.393 13:15:17 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:11:03.393 13:15:17 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:03.393 13:15:17 -- nvme/functions.sh@197 -- # echo nvme0 00:11:03.393 13:15:17 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:03.393 13:15:17 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:11:03.393 13:15:17 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:11:03.393 13:15:17 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:11:03.393 13:15:17 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:11:03.393 13:15:17 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:03.393 13:15:17 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:03.393 13:15:17 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:03.393 13:15:17 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:03.393 13:15:17 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:03.393 13:15:17 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:03.393 13:15:17 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:03.393 13:15:17 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:03.394 13:15:17 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:11:03.394 13:15:17 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:11:03.394 13:15:17 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:11:03.394 13:15:17 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:11:03.394 13:15:17 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:03.394 13:15:17 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:03.394 13:15:17 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:03.394 13:15:17 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:03.394 13:15:17 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:03.394 13:15:17 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:03.394 13:15:17 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:03.394 13:15:17 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:03.394 13:15:17 -- nvme/functions.sh@204 -- # trap - ERR 00:11:03.394 13:15:17 -- nvme/functions.sh@204 -- # print_backtrace 00:11:03.394 13:15:17 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:03.394 13:15:17 -- common/autotest_common.sh@1142 -- # return 0 00:11:03.655 13:15:17 -- nvme/functions.sh@204 -- # trap - ERR 00:11:03.655 13:15:17 -- nvme/functions.sh@204 -- # print_backtrace 00:11:03.655 13:15:17 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:03.655 13:15:17 -- common/autotest_common.sh@1142 -- # return 0 00:11:03.655 13:15:17 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:11:03.655 13:15:17 -- nvme/functions.sh@206 -- # echo nvme0 00:11:03.655 13:15:17 -- nvme/functions.sh@207 -- # return 0 00:11:03.656 13:15:17 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0 00:11:03.656 13:15:17 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0 00:11:03.656 13:15:17 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:04.227 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:04.488 0000:00:06.0 (1b36 0010): nvme -> 
uio_pci_generic 00:11:04.488 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:11:04.488 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:04.488 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:04.488 13:15:19 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:04.488 13:15:19 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:04.488 13:15:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:04.488 13:15:19 -- common/autotest_common.sh@10 -- # set +x 00:11:04.749 ************************************ 00:11:04.749 START TEST nvme_flexible_data_placement 00:11:04.749 ************************************ 00:11:04.749 13:15:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:04.749 Initializing NVMe Controllers 00:11:04.749 Attaching to 0000:00:09.0 00:11:04.749 Controller supports FDP Attached to 0000:00:09.0 00:11:04.749 Namespace ID: 1 Endurance Group ID: 1 00:11:04.749 Initialization complete. 00:11:04.749 00:11:04.749 ================================== 00:11:04.749 == FDP tests for Namespace: #01 == 00:11:04.749 ================================== 00:11:04.749 00:11:04.749 Get Feature: FDP: 00:11:04.749 ================= 00:11:04.749 Enabled: Yes 00:11:04.749 FDP configuration Index: 0 00:11:04.749 00:11:04.749 FDP configurations log page 00:11:04.749 =========================== 00:11:04.749 Number of FDP configurations: 1 00:11:04.749 Version: 0 00:11:04.749 Size: 112 00:11:04.749 FDP Configuration Descriptor: 0 00:11:04.749 Descriptor Size: 96 00:11:04.749 Reclaim Group Identifier format: 2 00:11:04.749 FDP Volatile Write Cache: Not Present 00:11:04.749 FDP Configuration: Valid 00:11:04.749 Vendor Specific Size: 0 00:11:04.749 Number of Reclaim Groups: 2 00:11:04.749 Number of Reclaim Unit Handles: 8 00:11:04.749 Max Placement Identifiers: 128 00:11:04.749 Number of Namespaces Supported: 256 00:11:04.749 Reclaim Unit Nominal Size: 6000000 bytes 00:11:04.749 Estimated Reclaim Unit Time Limit: Not Reported 00:11:04.749 RUH Desc #000: RUH Type: Initially Isolated 00:11:04.749 RUH Desc #001: RUH Type: Initially Isolated 00:11:04.749 RUH Desc #002: RUH Type: Initially Isolated 00:11:04.749 RUH Desc #003: RUH Type: Initially Isolated 00:11:04.749 RUH Desc #004: RUH Type: Initially Isolated 00:11:04.749 RUH Desc #005: RUH Type: Initially Isolated 00:11:04.749 RUH Desc #006: RUH Type: Initially Isolated 00:11:04.749 RUH Desc #007: RUH Type: Initially Isolated 00:11:04.749 00:11:04.749 FDP reclaim unit handle usage log page 00:11:04.749 ====================================== 00:11:04.749 Number of Reclaim Unit Handles: 8 00:11:04.749 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:04.749 RUH Usage Desc #001: RUH Attributes: Unused 00:11:04.749 RUH Usage Desc #002: RUH Attributes: Unused 00:11:04.749 RUH Usage Desc #003: RUH Attributes: Unused 00:11:04.749 RUH Usage Desc #004: RUH Attributes: Unused 00:11:04.749 RUH Usage Desc #005: RUH Attributes: Unused 00:11:04.749 RUH Usage Desc #006: RUH Attributes: Unused 00:11:04.749 RUH Usage Desc #007: RUH Attributes: Unused 00:11:04.749 00:11:04.749 FDP statistics log page 00:11:04.749 ======================= 00:11:04.749 Host bytes with metadata written: 979468288 00:11:04.749 Media bytes with metadata written: 979861504 00:11:04.749 Media bytes erased: 0 00:11:04.749 00:11:04.749 FDP Reclaim unit handle status
00:11:04.749 ============================== 00:11:04.749 Number of RUHS descriptors: 2 00:11:04.749 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000019e8 00:11:04.749 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:04.749 00:11:04.749 FDP write on placement id: 0 success 00:11:04.749 00:11:04.749 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:04.749 00:11:04.749 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:04.749 00:11:04.749 Get Feature: FDP Events for Placement handle: #0 00:11:04.749 ======================== 00:11:04.749 Number of FDP Events: 6 00:11:04.749 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:04.749 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:04.749 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:11:04.749 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:04.749 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:04.749 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:04.749 00:11:04.749 FDP events log page 00:11:04.749 =================== 00:11:04.749 Number of FDP events: 1 00:11:04.749 FDP Event #0: 00:11:04.750 Event Type: RU Not Written to Capacity 00:11:04.750 Placement Identifier: Valid 00:11:04.750 NSID: Valid 00:11:04.750 Location: Valid 00:11:04.750 Placement Identifier: 0 00:11:04.750 Event Timestamp: 9 00:11:04.750 Namespace Identifier: 1 00:11:04.750 Reclaim Group Identifier: 0 00:11:04.750 Reclaim Unit Handle Identifier: 0 00:11:04.750 00:11:04.750 FDP test passed 00:11:04.750 00:11:04.750 real 0m0.231s 00:11:04.750 user 0m0.067s 00:11:04.750 sys 0m0.061s 00:11:04.750 13:15:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:04.750 13:15:19 -- common/autotest_common.sh@10 -- # set +x 00:11:04.750 ************************************ 00:11:04.750 END TEST nvme_flexible_data_placement 00:11:04.750 ************************************ 00:11:05.010 00:11:05.010 real 0m7.740s 00:11:05.010 user 0m1.073s 00:11:05.010 sys 0m1.434s 00:11:05.010 13:15:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:05.010 ************************************ 00:11:05.010 END TEST nvme_fdp 00:11:05.010 ************************************ 00:11:05.010 13:15:19 -- common/autotest_common.sh@10 -- # set +x 00:11:05.010 13:15:19 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:11:05.010 13:15:19 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:05.010 13:15:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:05.010 13:15:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:05.010 13:15:19 -- common/autotest_common.sh@10 -- # set +x 00:11:05.010 ************************************ 00:11:05.010 START TEST nvme_rpc 00:11:05.010 ************************************ 00:11:05.010 13:15:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:05.010 * Looking for test storage...
00:11:05.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:05.010 13:15:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:05.010 13:15:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:05.010 13:15:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:05.010 13:15:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:05.010 13:15:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:05.010 13:15:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:05.010 13:15:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:05.010 13:15:19 -- scripts/common.sh@335 -- # IFS=.-: 00:11:05.010 13:15:19 -- scripts/common.sh@335 -- # read -ra ver1 00:11:05.010 13:15:19 -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.010 13:15:19 -- scripts/common.sh@336 -- # read -ra ver2 00:11:05.010 13:15:19 -- scripts/common.sh@337 -- # local 'op=<' 00:11:05.010 13:15:19 -- scripts/common.sh@339 -- # ver1_l=2 00:11:05.010 13:15:19 -- scripts/common.sh@340 -- # ver2_l=1 00:11:05.010 13:15:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:05.010 13:15:19 -- scripts/common.sh@343 -- # case "$op" in 00:11:05.010 13:15:19 -- scripts/common.sh@344 -- # : 1 00:11:05.010 13:15:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:05.010 13:15:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:05.010 13:15:19 -- scripts/common.sh@364 -- # decimal 1 00:11:05.010 13:15:19 -- scripts/common.sh@352 -- # local d=1 00:11:05.010 13:15:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.010 13:15:19 -- scripts/common.sh@354 -- # echo 1 00:11:05.011 13:15:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:05.011 13:15:19 -- scripts/common.sh@365 -- # decimal 2 00:11:05.011 13:15:19 -- scripts/common.sh@352 -- # local d=2 00:11:05.011 13:15:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.011 13:15:19 -- scripts/common.sh@354 -- # echo 2 00:11:05.011 13:15:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:05.011 13:15:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:05.011 13:15:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:05.011 13:15:19 -- scripts/common.sh@367 -- # return 0 00:11:05.011 13:15:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.011 13:15:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:05.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.011 --rc genhtml_branch_coverage=1 00:11:05.011 --rc genhtml_function_coverage=1 00:11:05.011 --rc genhtml_legend=1 00:11:05.011 --rc geninfo_all_blocks=1 00:11:05.011 --rc geninfo_unexecuted_blocks=1 00:11:05.011 00:11:05.011 ' 00:11:05.011 13:15:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:05.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.011 --rc genhtml_branch_coverage=1 00:11:05.011 --rc genhtml_function_coverage=1 00:11:05.011 --rc genhtml_legend=1 00:11:05.011 --rc geninfo_all_blocks=1 00:11:05.011 --rc geninfo_unexecuted_blocks=1 00:11:05.011 00:11:05.011 ' 00:11:05.011 13:15:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:05.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.011 --rc genhtml_branch_coverage=1 00:11:05.011 --rc genhtml_function_coverage=1 00:11:05.011 --rc genhtml_legend=1 00:11:05.011 --rc geninfo_all_blocks=1 00:11:05.011 --rc geninfo_unexecuted_blocks=1 00:11:05.011 00:11:05.011 ' 00:11:05.011 13:15:19 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:05.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.011 --rc genhtml_branch_coverage=1 00:11:05.011 --rc genhtml_function_coverage=1 00:11:05.011 --rc genhtml_legend=1 00:11:05.011 --rc geninfo_all_blocks=1 00:11:05.011 --rc geninfo_unexecuted_blocks=1 00:11:05.011 00:11:05.011 ' 00:11:05.011 13:15:19 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:05.011 13:15:19 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:05.011 13:15:19 -- common/autotest_common.sh@1519 -- # bdfs=() 00:11:05.011 13:15:19 -- common/autotest_common.sh@1519 -- # local bdfs 00:11:05.011 13:15:19 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:11:05.011 13:15:19 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:11:05.011 13:15:19 -- common/autotest_common.sh@1508 -- # bdfs=() 00:11:05.011 13:15:19 -- common/autotest_common.sh@1508 -- # local bdfs 00:11:05.011 13:15:19 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:05.011 13:15:19 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:05.011 13:15:19 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:11:05.272 13:15:19 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:11:05.272 13:15:19 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:05.272 13:15:19 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:11:05.272 13:15:19 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:11:05.272 13:15:19 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66396 00:11:05.272 13:15:19 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:05.272 13:15:19 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66396 00:11:05.272 13:15:19 -- common/autotest_common.sh@829 -- # '[' -z 66396 ']' 00:11:05.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.272 13:15:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.272 13:15:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:05.272 13:15:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.272 13:15:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:05.272 13:15:19 -- common/autotest_common.sh@10 -- # set +x 00:11:05.272 13:15:19 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:05.272 [2024-12-16 13:15:19.696536] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:05.272 [2024-12-16 13:15:19.696697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66396 ] 00:11:05.533 [2024-12-16 13:15:19.848384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:05.533 [2024-12-16 13:15:20.083325] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:05.533 [2024-12-16 13:15:20.084116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.533 [2024-12-16 13:15:20.084286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.920 13:15:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.920 13:15:21 -- common/autotest_common.sh@862 -- # return 0 00:11:06.920 13:15:21 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:11:06.920 Nvme0n1 00:11:06.920 13:15:21 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:06.920 13:15:21 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:07.181 request: 00:11:07.181 { 00:11:07.181 "filename": "non_existing_file", 00:11:07.181 "bdev_name": "Nvme0n1", 00:11:07.181 "method": "bdev_nvme_apply_firmware", 00:11:07.181 "req_id": 1 00:11:07.181 } 00:11:07.181 Got JSON-RPC error response 00:11:07.181 response: 00:11:07.181 { 00:11:07.181 "code": -32603, 00:11:07.181 "message": "open file failed." 00:11:07.181 } 00:11:07.181 13:15:21 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:07.181 13:15:21 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:07.181 13:15:21 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:07.440 13:15:21 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:07.440 13:15:21 -- nvme/nvme_rpc.sh@40 -- # killprocess 66396 00:11:07.440 13:15:21 -- common/autotest_common.sh@936 -- # '[' -z 66396 ']' 00:11:07.440 13:15:21 -- common/autotest_common.sh@940 -- # kill -0 66396 00:11:07.440 13:15:21 -- common/autotest_common.sh@941 -- # uname 00:11:07.440 13:15:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:07.440 13:15:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66396 00:11:07.440 killing process with pid 66396 00:11:07.440 13:15:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:07.440 13:15:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:07.440 13:15:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66396' 00:11:07.440 13:15:21 -- common/autotest_common.sh@955 -- # kill 66396 00:11:07.440 13:15:21 -- common/autotest_common.sh@960 -- # wait 66396 00:11:08.812 ************************************ 00:11:08.812 END TEST nvme_rpc 00:11:08.812 ************************************ 00:11:08.812 00:11:08.812 real 0m3.855s 00:11:08.812 user 0m7.260s 00:11:08.812 sys 0m0.623s 00:11:08.812 13:15:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:08.812 13:15:23 -- common/autotest_common.sh@10 -- # set +x 00:11:08.812 13:15:23 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:08.812 13:15:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:08.812 13:15:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:11:08.812 13:15:23 -- common/autotest_common.sh@10 -- # set +x 00:11:08.812 ************************************ 00:11:08.812 START TEST nvme_rpc_timeouts 00:11:08.812 ************************************ 00:11:08.812 13:15:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:08.812 * Looking for test storage... 00:11:08.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:08.812 13:15:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:08.812 13:15:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:08.812 13:15:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:09.071 13:15:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:09.071 13:15:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:09.071 13:15:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:09.071 13:15:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:09.071 13:15:23 -- scripts/common.sh@335 -- # IFS=.-: 00:11:09.071 13:15:23 -- scripts/common.sh@335 -- # read -ra ver1 00:11:09.071 13:15:23 -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.071 13:15:23 -- scripts/common.sh@336 -- # read -ra ver2 00:11:09.071 13:15:23 -- scripts/common.sh@337 -- # local 'op=<' 00:11:09.071 13:15:23 -- scripts/common.sh@339 -- # ver1_l=2 00:11:09.071 13:15:23 -- scripts/common.sh@340 -- # ver2_l=1 00:11:09.071 13:15:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:09.071 13:15:23 -- scripts/common.sh@343 -- # case "$op" in 00:11:09.071 13:15:23 -- scripts/common.sh@344 -- # : 1 00:11:09.071 13:15:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:09.071 13:15:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:09.071 13:15:23 -- scripts/common.sh@364 -- # decimal 1 00:11:09.071 13:15:23 -- scripts/common.sh@352 -- # local d=1 00:11:09.071 13:15:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.071 13:15:23 -- scripts/common.sh@354 -- # echo 1 00:11:09.071 13:15:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:09.071 13:15:23 -- scripts/common.sh@365 -- # decimal 2 00:11:09.071 13:15:23 -- scripts/common.sh@352 -- # local d=2 00:11:09.071 13:15:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.071 13:15:23 -- scripts/common.sh@354 -- # echo 2 00:11:09.071 13:15:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:09.071 13:15:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:09.071 13:15:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:09.071 13:15:23 -- scripts/common.sh@367 -- # return 0 00:11:09.071 13:15:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.071 13:15:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:09.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.071 --rc genhtml_branch_coverage=1 00:11:09.071 --rc genhtml_function_coverage=1 00:11:09.071 --rc genhtml_legend=1 00:11:09.071 --rc geninfo_all_blocks=1 00:11:09.071 --rc geninfo_unexecuted_blocks=1 00:11:09.071 00:11:09.071 ' 00:11:09.071 13:15:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:09.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.071 --rc genhtml_branch_coverage=1 00:11:09.071 --rc genhtml_function_coverage=1 00:11:09.071 --rc genhtml_legend=1 00:11:09.071 --rc geninfo_all_blocks=1 00:11:09.071 --rc geninfo_unexecuted_blocks=1 00:11:09.071 00:11:09.071 ' 00:11:09.071 13:15:23 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:09.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.071 --rc genhtml_branch_coverage=1 00:11:09.071 --rc genhtml_function_coverage=1 00:11:09.071 --rc genhtml_legend=1 00:11:09.071 --rc geninfo_all_blocks=1 00:11:09.071 --rc geninfo_unexecuted_blocks=1 00:11:09.071 00:11:09.071 ' 00:11:09.071 13:15:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:09.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.071 --rc genhtml_branch_coverage=1 00:11:09.071 --rc genhtml_function_coverage=1 00:11:09.071 --rc genhtml_legend=1 00:11:09.071 --rc geninfo_all_blocks=1 00:11:09.071 --rc geninfo_unexecuted_blocks=1 00:11:09.071 00:11:09.071 ' 00:11:09.071 13:15:23 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.071 13:15:23 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66463 00:11:09.071 13:15:23 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66463 00:11:09.071 13:15:23 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66494 00:11:09.071 13:15:23 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:09.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.071 13:15:23 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66494 00:11:09.071 13:15:23 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:09.071 13:15:23 -- common/autotest_common.sh@829 -- # '[' -z 66494 ']' 00:11:09.071 13:15:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.071 13:15:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:09.072 13:15:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.072 13:15:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:09.072 13:15:23 -- common/autotest_common.sh@10 -- # set +x 00:11:09.072 [2024-12-16 13:15:23.526562] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:09.072 [2024-12-16 13:15:23.526685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66494 ] 00:11:09.330 [2024-12-16 13:15:23.673688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:09.331 [2024-12-16 13:15:23.812201] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:09.331 [2024-12-16 13:15:23.812813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.331 [2024-12-16 13:15:23.812990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.896 13:15:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:09.896 13:15:24 -- common/autotest_common.sh@862 -- # return 0 00:11:09.896 13:15:24 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:09.896 Checking default timeout settings: 00:11:09.896 13:15:24 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:10.154 13:15:24 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:10.154 Making settings changes with rpc: 00:11:10.154 13:15:24 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:10.413 13:15:24 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:10.413 Check default vs. modified settings: 00:11:10.413 13:15:24 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66463 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66463 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:10.685 Setting action_on_timeout is changed as expected. 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
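The three RPC calls traced above are the whole configuration half of this test: snapshot the default bdev_nvme options, push new timeouts, snapshot again. A standalone sketch of that sequence (assuming a running spdk_tgt and scripts/rpc.py on PATH; the values are the ones this run applies):

  rpc.py save_config > /tmp/settings_default      # snapshot defaults
  rpc.py bdev_nvme_set_options \
      --timeout-us=12000000 \                     # 12 s I/O timeout
      --timeout-admin-us=24000000 \               # 24 s admin-command timeout
      --action-on-timeout=abort                   # abort instead of the default none
  rpc.py save_config > /tmp/settings_modified     # snapshot the modified state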
00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66463 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66463 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.685 Setting timeout_us is changed as expected. 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66463 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66463 00:11:10.685 Setting timeout_admin_us is changed as expected. 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66463 /tmp/settings_modified_66463 00:11:10.685 13:15:25 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66494 00:11:10.685 13:15:25 -- common/autotest_common.sh@936 -- # '[' -z 66494 ']' 00:11:10.685 13:15:25 -- common/autotest_common.sh@940 -- # kill -0 66494 00:11:10.685 13:15:25 -- common/autotest_common.sh@941 -- # uname 00:11:10.685 13:15:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:10.685 13:15:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66494 00:11:10.685 killing process with pid 66494 00:11:10.685 13:15:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:10.685 13:15:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:10.685 13:15:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66494' 00:11:10.685 13:15:25 -- common/autotest_common.sh@955 -- # kill 66494 00:11:10.685 13:15:25 -- common/autotest_common.sh@960 -- # wait 66494 00:11:12.081 RPC TIMEOUT SETTING TEST PASSED. 00:11:12.081 13:15:26 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
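The verification that just passed pulls each setting out of both snapshots with the grep | awk | sed pipeline seen in the trace and fails if the value did not change. A condensed sketch of that loop (file names shortened from the per-PID /tmp paths above):

  for setting in action_on_timeout timeout_us timeout_admin_us; do
      before=$(grep "\"$setting\"" /tmp/settings_default  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "\"$setting\"" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      [[ "$before" == "$after" ]] && { echo "ERROR: $setting unchanged"; exit 1; }
      echo "Setting $setting is changed as expected."
  done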
00:11:12.081 ************************************ 00:11:12.081 END TEST nvme_rpc_timeouts 00:11:12.081 ************************************ 00:11:12.081 00:11:12.081 real 0m3.028s 00:11:12.081 user 0m5.739s 00:11:12.081 sys 0m0.464s 00:11:12.081 13:15:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:12.081 13:15:26 -- common/autotest_common.sh@10 -- # set +x 00:11:12.081 13:15:26 -- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']' 00:11:12.081 13:15:26 -- spdk/autotest.sh@242 -- # [[ 1 -eq 1 ]] 00:11:12.081 13:15:26 -- spdk/autotest.sh@243 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:12.081 13:15:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:12.081 13:15:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:12.081 13:15:26 -- common/autotest_common.sh@10 -- # set +x 00:11:12.081 ************************************ 00:11:12.081 START TEST nvme_xnvme 00:11:12.081 ************************************ 00:11:12.081 13:15:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:12.081 * Looking for test storage... 00:11:12.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:12.081 13:15:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:12.081 13:15:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:12.081 13:15:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:12.081 13:15:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:12.081 13:15:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:12.081 13:15:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:12.081 13:15:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:12.081 13:15:26 -- scripts/common.sh@335 -- # IFS=.-: 00:11:12.081 13:15:26 -- scripts/common.sh@335 -- # read -ra ver1 00:11:12.081 13:15:26 -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.081 13:15:26 -- scripts/common.sh@336 -- # read -ra ver2 00:11:12.081 13:15:26 -- scripts/common.sh@337 -- # local 'op=<' 00:11:12.081 13:15:26 -- scripts/common.sh@339 -- # ver1_l=2 00:11:12.081 13:15:26 -- scripts/common.sh@340 -- # ver2_l=1 00:11:12.081 13:15:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:12.081 13:15:26 -- scripts/common.sh@343 -- # case "$op" in 00:11:12.081 13:15:26 -- scripts/common.sh@344 -- # : 1 00:11:12.081 13:15:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:12.081 13:15:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:12.081 13:15:26 -- scripts/common.sh@364 -- # decimal 1 00:11:12.081 13:15:26 -- scripts/common.sh@352 -- # local d=1 00:11:12.081 13:15:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.081 13:15:26 -- scripts/common.sh@354 -- # echo 1 00:11:12.081 13:15:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:12.081 13:15:26 -- scripts/common.sh@365 -- # decimal 2 00:11:12.081 13:15:26 -- scripts/common.sh@352 -- # local d=2 00:11:12.081 13:15:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.081 13:15:26 -- scripts/common.sh@354 -- # echo 2 00:11:12.081 13:15:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:12.081 13:15:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:12.081 13:15:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:12.081 13:15:26 -- scripts/common.sh@367 -- # return 0 00:11:12.081 13:15:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.081 13:15:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:12.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.081 --rc genhtml_branch_coverage=1 00:11:12.081 --rc genhtml_function_coverage=1 00:11:12.081 --rc genhtml_legend=1 00:11:12.081 --rc geninfo_all_blocks=1 00:11:12.081 --rc geninfo_unexecuted_blocks=1 00:11:12.081 00:11:12.081 ' 00:11:12.081 13:15:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:12.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.081 --rc genhtml_branch_coverage=1 00:11:12.081 --rc genhtml_function_coverage=1 00:11:12.081 --rc genhtml_legend=1 00:11:12.081 --rc geninfo_all_blocks=1 00:11:12.081 --rc geninfo_unexecuted_blocks=1 00:11:12.081 00:11:12.081 ' 00:11:12.081 13:15:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:12.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.082 --rc genhtml_branch_coverage=1 00:11:12.082 --rc genhtml_function_coverage=1 00:11:12.082 --rc genhtml_legend=1 00:11:12.082 --rc geninfo_all_blocks=1 00:11:12.082 --rc geninfo_unexecuted_blocks=1 00:11:12.082 00:11:12.082 ' 00:11:12.082 13:15:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:12.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.082 --rc genhtml_branch_coverage=1 00:11:12.082 --rc genhtml_function_coverage=1 00:11:12.082 --rc genhtml_legend=1 00:11:12.082 --rc geninfo_all_blocks=1 00:11:12.082 --rc geninfo_unexecuted_blocks=1 00:11:12.082 00:11:12.082 ' 00:11:12.082 13:15:26 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:12.082 13:15:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.082 13:15:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.082 13:15:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.082 13:15:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.082 13:15:26 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.082 13:15:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.082 13:15:26 -- paths/export.sh@5 -- # export PATH 00:11:12.082 13:15:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.082 13:15:26 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:12.082 13:15:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:12.082 13:15:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:12.082 13:15:26 -- common/autotest_common.sh@10 -- # set +x 00:11:12.082 ************************************ 00:11:12.082 START TEST xnvme_to_malloc_dd_copy 00:11:12.082 ************************************ 00:11:12.082 13:15:26 -- common/autotest_common.sh@1114 -- # malloc_to_xnvme_copy 00:11:12.082 13:15:26 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:12.082 13:15:26 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:12.082 13:15:26 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:12.082 13:15:26 -- dd/common.sh@191 -- # return 00:11:12.082 13:15:26 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:12.082 13:15:26 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:12.082 13:15:26 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:12.082 13:15:26 -- xnvme/xnvme.sh@18 -- # local io 00:11:12.082 13:15:26 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:12.082 13:15:26 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:12.082 13:15:26 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:12.082 13:15:26 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:12.082 13:15:26 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:12.082 13:15:26 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:12.082 13:15:26 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:12.082 13:15:26 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:12.082 13:15:26 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:12.082 13:15:26 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:12.082 13:15:26 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:12.082 13:15:26 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:12.082 13:15:26 -- xnvme/xnvme.sh@42 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:12.082 13:15:26 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:12.082 13:15:26 -- dd/common.sh@31 -- # xtrace_disable 00:11:12.082 13:15:26 -- common/autotest_common.sh@10 -- # set +x 00:11:12.082 { 00:11:12.082 "subsystems": [ 00:11:12.082 { 00:11:12.082 "subsystem": "bdev", 00:11:12.082 "config": [ 00:11:12.082 { 00:11:12.082 "params": { 00:11:12.082 "block_size": 512, 00:11:12.082 "num_blocks": 2097152, 00:11:12.082 "name": "malloc0" 00:11:12.082 }, 00:11:12.082 "method": "bdev_malloc_create" 00:11:12.082 }, 00:11:12.082 { 00:11:12.082 "params": { 00:11:12.082 "io_mechanism": "libaio", 00:11:12.082 "filename": "/dev/nullb0", 00:11:12.082 "name": "null0" 00:11:12.082 }, 00:11:12.082 "method": "bdev_xnvme_create" 00:11:12.082 }, 00:11:12.082 { 00:11:12.082 "method": "bdev_wait_for_examine" 00:11:12.082 } 00:11:12.082 ] 00:11:12.082 } 00:11:12.082 ] 00:11:12.082 } 00:11:12.082 [2024-12-16 13:15:26.642386] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:12.082 [2024-12-16 13:15:26.642599] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66623 ] 00:11:12.342 [2024-12-16 13:15:26.790673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.600 [2024-12-16 13:15:26.929916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.512  [2024-12-16T13:15:30.022Z] Copying: 235/1024 [MB] (235 MBps) [2024-12-16T13:15:30.956Z] Copying: 495/1024 [MB] (259 MBps) [2024-12-16T13:15:31.523Z] Copying: 805/1024 [MB] (310 MBps) [2024-12-16T13:15:33.425Z] Copying: 1024/1024 [MB] (average 276 MBps) 00:11:18.851 00:11:18.851 13:15:33 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:18.851 13:15:33 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:18.851 13:15:33 -- dd/common.sh@31 -- # xtrace_disable 00:11:18.851 13:15:33 -- common/autotest_common.sh@10 -- # set +x 00:11:19.110 { 00:11:19.110 "subsystems": [ 00:11:19.110 { 00:11:19.110 "subsystem": "bdev", 00:11:19.110 "config": [ 00:11:19.110 { 00:11:19.110 "params": { 00:11:19.110 "block_size": 512, 00:11:19.110 "num_blocks": 2097152, 00:11:19.110 "name": "malloc0" 00:11:19.110 }, 00:11:19.110 "method": "bdev_malloc_create" 00:11:19.110 }, 00:11:19.110 { 00:11:19.110 "params": { 00:11:19.110 "io_mechanism": "libaio", 00:11:19.110 "filename": "/dev/nullb0", 00:11:19.110 "name": "null0" 00:11:19.110 }, 00:11:19.110 "method": "bdev_xnvme_create" 00:11:19.110 }, 00:11:19.110 { 00:11:19.110 "method": "bdev_wait_for_examine" 00:11:19.110 } 00:11:19.110 ] 00:11:19.110 } 00:11:19.110 ] 00:11:19.110 } 00:11:19.110 [2024-12-16 13:15:33.440268] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
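Each spdk_dd pass above is configured entirely through the JSON fed in on --json: a 1 GiB malloc bdev (2097152 blocks x 512 bytes) on one side and an xnvme bdev wrapping the null_blk device on the other. A condensed sketch of the libaio setup, using a plain file instead of the /dev/fd/62 descriptor the test passes:

  cat > /tmp/xnvme_copy.json <<'EOF'
  {"subsystems": [{"subsystem": "bdev", "config": [
    {"method": "bdev_malloc_create",
     "params": {"name": "malloc0", "block_size": 512, "num_blocks": 2097152}},
    {"method": "bdev_xnvme_create",
     "params": {"name": "null0", "filename": "/dev/nullb0", "io_mechanism": "libaio"}},
    {"method": "bdev_wait_for_examine"}]}]}
  EOF
  spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_copy.json   # malloc0 -> null0 (276 MBps in this run)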
00:11:19.110 [2024-12-16 13:15:33.440456] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66716 ] 00:11:19.110 [2024-12-16 13:15:33.582551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.368 [2024-12-16 13:15:33.722502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.273  [2024-12-16T13:15:36.786Z] Copying: 291/1024 [MB] (291 MBps) [2024-12-16T13:15:37.721Z] Copying: 552/1024 [MB] (261 MBps) [2024-12-16T13:15:38.316Z] Copying: 867/1024 [MB] (315 MBps) [2024-12-16T13:15:40.218Z] Copying: 1024/1024 [MB] (average 292 MBps) 00:11:25.644 00:11:25.644 13:15:39 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:25.644 13:15:39 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:25.644 13:15:39 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:25.644 13:15:39 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:25.644 13:15:39 -- dd/common.sh@31 -- # xtrace_disable 00:11:25.644 13:15:39 -- common/autotest_common.sh@10 -- # set +x 00:11:25.644 { 00:11:25.644 "subsystems": [ 00:11:25.644 { 00:11:25.644 "subsystem": "bdev", 00:11:25.644 "config": [ 00:11:25.644 { 00:11:25.644 "params": { 00:11:25.644 "block_size": 512, 00:11:25.644 "num_blocks": 2097152, 00:11:25.644 "name": "malloc0" 00:11:25.644 }, 00:11:25.644 "method": "bdev_malloc_create" 00:11:25.644 }, 00:11:25.644 { 00:11:25.644 "params": { 00:11:25.644 "io_mechanism": "io_uring", 00:11:25.644 "filename": "/dev/nullb0", 00:11:25.644 "name": "null0" 00:11:25.644 }, 00:11:25.644 "method": "bdev_xnvme_create" 00:11:25.644 }, 00:11:25.644 { 00:11:25.644 "method": "bdev_wait_for_examine" 00:11:25.644 } 00:11:25.644 ] 00:11:25.644 } 00:11:25.644 ] 00:11:25.644 } 00:11:25.644 [2024-12-16 13:15:40.006029] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
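The pass above swaps --ib and --ob to copy null0 back into malloc0 (292 MBps against 276 MBps for the write direction), and the loop then flips the single field that selects the backend, so the next two passes repeat with io_uring. Against the sketch config from the libaio pass, that amounts to:

  spdk_dd --ib=null0 --ob=malloc0 --json /tmp/xnvme_copy.json                    # read direction
  sed -i 's/"io_mechanism": "libaio"/"io_mechanism": "io_uring"/' /tmp/xnvme_copy.json

(In the test itself the JSON is regenerated per pass from method_bdev_xnvme_create_0 rather than edited in place.)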
00:11:25.644 [2024-12-16 13:15:40.006130] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66800 ] 00:11:25.644 [2024-12-16 13:15:40.154036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.904 [2024-12-16 13:15:40.292962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.807  [2024-12-16T13:15:43.353Z] Copying: 322/1024 [MB] (322 MBps) [2024-12-16T13:15:44.287Z] Copying: 644/1024 [MB] (321 MBps) [2024-12-16T13:15:44.287Z] Copying: 966/1024 [MB] (322 MBps) [2024-12-16T13:15:46.202Z] Copying: 1024/1024 [MB] (average 322 MBps) 00:11:31.628 00:11:31.628 13:15:46 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:31.628 13:15:46 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:31.628 13:15:46 -- dd/common.sh@31 -- # xtrace_disable 00:11:31.628 13:15:46 -- common/autotest_common.sh@10 -- # set +x 00:11:31.887 { 00:11:31.887 "subsystems": [ 00:11:31.887 { 00:11:31.887 "subsystem": "bdev", 00:11:31.887 "config": [ 00:11:31.887 { 00:11:31.887 "params": { 00:11:31.887 "block_size": 512, 00:11:31.887 "num_blocks": 2097152, 00:11:31.887 "name": "malloc0" 00:11:31.887 }, 00:11:31.887 "method": "bdev_malloc_create" 00:11:31.887 }, 00:11:31.887 { 00:11:31.887 "params": { 00:11:31.887 "io_mechanism": "io_uring", 00:11:31.887 "filename": "/dev/nullb0", 00:11:31.887 "name": "null0" 00:11:31.887 }, 00:11:31.887 "method": "bdev_xnvme_create" 00:11:31.887 }, 00:11:31.887 { 00:11:31.887 "method": "bdev_wait_for_examine" 00:11:31.887 } 00:11:31.887 ] 00:11:31.887 } 00:11:31.887 ] 00:11:31.887 } 00:11:31.887 [2024-12-16 13:15:46.245334] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
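Running totals at this point: 276 and 292 MBps with libaio, 322 MBps for the io_uring write pass, with the io_uring read pass starting above. The per-pass figure is simply the final progress line, so it can be scraped from a captured run, e.g. (dd.log is a hypothetical capture of spdk_dd's output):

  grep -o 'average [0-9]* MBps' dd.log | tail -n1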
00:11:31.887 [2024-12-16 13:15:46.245443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66877 ] 00:11:31.887 [2024-12-16 13:15:46.391111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.146 [2024-12-16 13:15:46.528545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.048  [2024-12-16T13:15:49.560Z] Copying: 328/1024 [MB] (328 MBps) [2024-12-16T13:15:50.496Z] Copying: 658/1024 [MB] (329 MBps) [2024-12-16T13:15:50.496Z] Copying: 986/1024 [MB] (328 MBps) [2024-12-16T13:15:52.399Z] Copying: 1024/1024 [MB] (average 328 MBps) 00:11:37.825 00:11:37.825 13:15:52 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:11:37.825 13:15:52 -- dd/common.sh@195 -- # modprobe -r null_blk 00:11:37.825 00:11:37.825 real 0m25.813s 00:11:37.825 user 0m22.788s 00:11:37.825 sys 0m2.499s 00:11:37.825 13:15:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:37.825 13:15:52 -- common/autotest_common.sh@10 -- # set +x 00:11:37.825 ************************************ 00:11:37.825 END TEST xnvme_to_malloc_dd_copy 00:11:37.825 ************************************ 00:11:38.086 13:15:52 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:11:38.086 13:15:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:38.086 13:15:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:38.086 13:15:52 -- common/autotest_common.sh@10 -- # set +x 00:11:38.086 ************************************ 00:11:38.086 START TEST xnvme_bdevperf 00:11:38.086 ************************************ 00:11:38.086 13:15:52 -- common/autotest_common.sh@1114 -- # xnvme_bdevperf 00:11:38.086 13:15:52 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:11:38.086 13:15:52 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:38.086 13:15:52 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:38.086 13:15:52 -- dd/common.sh@191 -- # return 00:11:38.086 13:15:52 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:11:38.086 13:15:52 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:38.086 13:15:52 -- xnvme/xnvme.sh@60 -- # local io 00:11:38.086 13:15:52 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:11:38.086 13:15:52 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:11:38.086 13:15:52 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:11:38.086 13:15:52 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:11:38.086 13:15:52 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:11:38.087 13:15:52 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:38.087 13:15:52 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:38.087 13:15:52 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:11:38.087 13:15:52 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:38.087 13:15:52 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:11:38.087 13:15:52 -- xnvme/xnvme.sh@74 -- # gen_conf 00:11:38.087 13:15:52 -- dd/common.sh@31 -- # xtrace_disable 00:11:38.087 13:15:52 -- common/autotest_common.sh@10 -- # set +x 00:11:38.087 { 00:11:38.087 "subsystems": [ 00:11:38.087 { 00:11:38.087 "subsystem": "bdev", 00:11:38.087 "config": [ 00:11:38.087 { 00:11:38.087 "params": { 00:11:38.087 "io_mechanism": "libaio", 
00:11:38.087 "filename": "/dev/nullb0",
00:11:38.087 "name": "null0"
00:11:38.087 },
00:11:38.087 "method": "bdev_xnvme_create"
00:11:38.087 },
00:11:38.087 {
00:11:38.087 "method": "bdev_wait_for_examine"
00:11:38.087 }
00:11:38.087 ]
00:11:38.087 }
00:11:38.087 ]
00:11:38.087 }
00:11:38.087 [2024-12-16 13:15:52.517604] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:38.087 [2024-12-16 13:15:52.517729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66971 ]
00:11:38.347 [2024-12-16 13:15:52.671647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:38.347 [2024-12-16 13:15:52.892495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:38.607 Running I/O for 5 seconds...
00:11:43.883
00:11:43.883 Latency(us)
00:11:43.883 [2024-12-16T13:15:58.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:43.883 [2024-12-16T13:15:58.457Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:11:43.883 null0 : 5.00 192030.92 750.12 0.00 0.00 330.90 111.85 667.96
00:11:43.883 [2024-12-16T13:15:58.457Z] ===================================================================================================================
00:11:43.883 [2024-12-16T13:15:58.457Z] Total : 192030.92 750.12 0.00 0.00 330.90 111.85 667.96
00:11:44.450 13:15:58 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}"
00:11:44.450 13:15:58 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring
00:11:44.450 13:15:58 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096
00:11:44.450 13:15:58 -- xnvme/xnvme.sh@74 -- # gen_conf
00:11:44.450 13:15:58 -- dd/common.sh@31 -- # xtrace_disable
00:11:44.450 13:15:58 -- common/autotest_common.sh@10 -- # set +x
00:11:44.450 {
00:11:44.450 "subsystems": [
00:11:44.450 {
00:11:44.450 "subsystem": "bdev",
00:11:44.450 "config": [
00:11:44.450 {
00:11:44.450 "params": {
00:11:44.450 "io_mechanism": "io_uring",
00:11:44.450 "filename": "/dev/nullb0",
00:11:44.450 "name": "null0"
00:11:44.450 },
00:11:44.450 "method": "bdev_xnvme_create"
00:11:44.450 },
00:11:44.450 {
00:11:44.450 "method": "bdev_wait_for_examine"
00:11:44.450 }
00:11:44.450 ]
00:11:44.450 }
00:11:44.450 ]
00:11:44.450 }
00:11:44.450 [2024-12-16 13:15:58.875308] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:44.450 [2024-12-16 13:15:58.875417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67045 ]
00:11:44.450 [2024-12-16 13:15:59.021573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:44.709 [2024-12-16 13:15:59.169850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:44.969 Running I/O for 5 seconds...
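bdevperf here issues 4 KiB random reads (-o 4096 -w randread) at queue depth 64 (-q 64) for 5 seconds (-t 5) against the xnvme bdev named by -T: first libaio (the table above), then io_uring (the run just started; its table follows). The libaio result is self-consistent under Little's law: average latency ~ queue_depth / IOPS = 64 / 192030.92 s ~ 333 us, close to the reported 330.90 us mean. Standalone invocation sketch, reusing a config file in place of /dev/fd/62:

  build/examples/bdevperf --json /tmp/xnvme_copy.json -q 64 -w randread -t 5 -T null0 -o 4096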
00:11:50.233
00:11:50.233 Latency(us)
00:11:50.233 [2024-12-16T13:16:04.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:50.233 [2024-12-16T13:16:04.807Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:11:50.233 null0 : 5.00 238283.43 930.79 0.00 0.00 266.46 155.96 1064.96
00:11:50.233 [2024-12-16T13:16:04.807Z] ===================================================================================================================
00:11:50.233 [2024-12-16T13:16:04.807Z] Total : 238283.43 930.79 0.00 0.00 266.46 155.96 1064.96
00:11:50.493 13:16:04 -- xnvme/xnvme.sh@82 -- # remove_null_blk
00:11:50.493 13:16:04 -- dd/common.sh@195 -- # modprobe -r null_blk
00:11:50.493
00:11:50.493 real 0m12.578s
00:11:50.493 user 0m10.097s
00:11:50.493 sys 0m2.237s
00:11:50.493 13:16:05 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:50.493 ************************************
00:11:50.493 END TEST xnvme_bdevperf
00:11:50.493 ************************************
00:11:50.493 13:16:05 -- common/autotest_common.sh@10 -- # set +x
00:11:50.494 ************************************
00:11:50.494 END TEST nvme_xnvme
00:11:50.494 ************************************
00:11:50.494
00:11:50.494 real 0m38.650s
00:11:50.494 user 0m32.999s
00:11:50.494 sys 0m4.846s
00:11:50.494 13:16:05 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:50.494 13:16:05 -- common/autotest_common.sh@10 -- # set +x
00:11:50.755 13:16:05 -- spdk/autotest.sh@244 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme
00:11:50.755 13:16:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:11:50.755 13:16:05 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:50.755 13:16:05 -- common/autotest_common.sh@10 -- # set +x
00:11:50.755 ************************************
00:11:50.755 START TEST blockdev_xnvme
00:11:50.755 ************************************
00:11:50.755 13:16:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme
00:11:50.755 * Looking for test storage...
00:11:50.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:11:50.755 13:16:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:11:50.755 13:16:05 -- common/autotest_common.sh@1690 -- # lcov --version
00:11:50.755 13:16:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:11:50.755 13:16:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:11:50.755 13:16:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:11:50.755 13:16:05 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:11:50.755 13:16:05 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:11:50.755 13:16:05 -- scripts/common.sh@335 -- # IFS=.-:
00:11:50.755 13:16:05 -- scripts/common.sh@335 -- # read -ra ver1
00:11:50.755 13:16:05 -- scripts/common.sh@336 -- # IFS=.-:
00:11:50.755 13:16:05 -- scripts/common.sh@336 -- # read -ra ver2
00:11:50.755 13:16:05 -- scripts/common.sh@337 -- # local 'op=<'
00:11:50.755 13:16:05 -- scripts/common.sh@339 -- # ver1_l=2
00:11:50.755 13:16:05 -- scripts/common.sh@340 -- # ver2_l=1
00:11:50.755 13:16:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:11:50.755 13:16:05 -- scripts/common.sh@343 -- # case "$op" in
00:11:50.755 13:16:05 -- scripts/common.sh@344 -- # : 1
00:11:50.755 13:16:05 -- scripts/common.sh@363 -- # (( v = 0 ))
00:11:50.755 13:16:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:11:50.755 13:16:05 -- scripts/common.sh@364 -- # decimal 1 00:11:50.755 13:16:05 -- scripts/common.sh@352 -- # local d=1 00:11:50.755 13:16:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:50.755 13:16:05 -- scripts/common.sh@354 -- # echo 1 00:11:50.755 13:16:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:50.755 13:16:05 -- scripts/common.sh@365 -- # decimal 2 00:11:50.755 13:16:05 -- scripts/common.sh@352 -- # local d=2 00:11:50.755 13:16:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:50.755 13:16:05 -- scripts/common.sh@354 -- # echo 2 00:11:50.755 13:16:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:50.755 13:16:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:50.755 13:16:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:50.755 13:16:05 -- scripts/common.sh@367 -- # return 0 00:11:50.755 13:16:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:50.755 13:16:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:50.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.755 --rc genhtml_branch_coverage=1 00:11:50.755 --rc genhtml_function_coverage=1 00:11:50.755 --rc genhtml_legend=1 00:11:50.755 --rc geninfo_all_blocks=1 00:11:50.755 --rc geninfo_unexecuted_blocks=1 00:11:50.755 00:11:50.755 ' 00:11:50.755 13:16:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:50.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.755 --rc genhtml_branch_coverage=1 00:11:50.755 --rc genhtml_function_coverage=1 00:11:50.755 --rc genhtml_legend=1 00:11:50.755 --rc geninfo_all_blocks=1 00:11:50.755 --rc geninfo_unexecuted_blocks=1 00:11:50.755 00:11:50.755 ' 00:11:50.755 13:16:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:50.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.755 --rc genhtml_branch_coverage=1 00:11:50.755 --rc genhtml_function_coverage=1 00:11:50.755 --rc genhtml_legend=1 00:11:50.755 --rc geninfo_all_blocks=1 00:11:50.755 --rc geninfo_unexecuted_blocks=1 00:11:50.755 00:11:50.755 ' 00:11:50.755 13:16:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:50.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.755 --rc genhtml_branch_coverage=1 00:11:50.755 --rc genhtml_function_coverage=1 00:11:50.755 --rc genhtml_legend=1 00:11:50.755 --rc geninfo_all_blocks=1 00:11:50.755 --rc geninfo_unexecuted_blocks=1 00:11:50.755 00:11:50.755 ' 00:11:50.755 13:16:05 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:50.755 13:16:05 -- bdev/nbd_common.sh@6 -- # set -e 00:11:50.755 13:16:05 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:50.755 13:16:05 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:50.755 13:16:05 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:50.755 13:16:05 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:50.755 13:16:05 -- bdev/blockdev.sh@18 -- # : 00:11:50.755 13:16:05 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:11:50.755 13:16:05 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:11:50.755 13:16:05 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:11:50.755 13:16:05 -- bdev/blockdev.sh@672 -- # uname -s 00:11:50.755 13:16:05 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:11:50.755 13:16:05 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:11:50.755 13:16:05 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:11:50.755 13:16:05 -- bdev/blockdev.sh@681 -- # crypto_device= 00:11:50.755 13:16:05 -- bdev/blockdev.sh@682 -- # dek= 00:11:50.755 13:16:05 -- bdev/blockdev.sh@683 -- # env_ctx= 00:11:50.755 13:16:05 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:11:50.755 13:16:05 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:11:50.755 13:16:05 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:11:50.755 13:16:05 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:11:50.755 13:16:05 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:11:50.755 13:16:05 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=67186 00:11:50.755 13:16:05 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:50.755 13:16:05 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:50.755 13:16:05 -- bdev/blockdev.sh@47 -- # waitforlisten 67186 00:11:50.755 13:16:05 -- common/autotest_common.sh@829 -- # '[' -z 67186 ']' 00:11:50.755 13:16:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.755 13:16:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:50.755 13:16:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.755 13:16:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:50.755 13:16:05 -- common/autotest_common.sh@10 -- # set +x 00:11:51.016 [2024-12-16 13:16:05.334711] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:51.016 [2024-12-16 13:16:05.335004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67186 ] 00:11:51.016 [2024-12-16 13:16:05.488896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.278 [2024-12-16 13:16:05.707477] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:51.278 [2024-12-16 13:16:05.707969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.666 13:16:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:52.666 13:16:06 -- common/autotest_common.sh@862 -- # return 0 00:11:52.666 13:16:06 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:11:52.666 13:16:06 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:11:52.666 13:16:06 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:11:52.666 13:16:06 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:11:52.666 13:16:06 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:52.926 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:52.927 Waiting for block devices as requested 00:11:52.927 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:53.188 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:53.188 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:53.188 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:58.461 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:58.461 13:16:12 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:11:58.461 13:16:12 -- 
common/autotest_common.sh@1664 -- # zoned_devs=() 00:11:58.461 13:16:12 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:11:58.461 13:16:12 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:11:58.461 13:16:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:58.461 13:16:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:11:58.461 13:16:12 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:11:58.461 13:16:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:11:58.461 13:16:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:58.461 13:16:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:58.461 13:16:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:11:58.461 13:16:12 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:11:58.461 13:16:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:58.461 13:16:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:58.461 13:16:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:58.462 13:16:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:11:58.462 13:16:12 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:11:58.462 13:16:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:58.462 13:16:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:58.462 13:16:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:58.462 13:16:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:11:58.462 13:16:12 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:11:58.462 13:16:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:11:58.462 13:16:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:58.462 13:16:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:58.462 13:16:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:11:58.462 13:16:12 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:11:58.462 13:16:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:11:58.462 13:16:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:58.462 13:16:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:58.462 13:16:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:11:58.462 13:16:12 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:11:58.462 13:16:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:11:58.462 13:16:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:58.462 13:16:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:58.462 13:16:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:11:58.462 13:16:12 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:11:58.462 13:16:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:11:58.462 13:16:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:58.462 13:16:12 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:58.462 13:16:12 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:11:58.462 13:16:12 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:58.462 13:16:12 -- bdev/blockdev.sh@94 -- # 
nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:58.462 13:16:12 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:58.462 13:16:12 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:11:58.462 13:16:12 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:58.462 13:16:12 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:58.462 13:16:12 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:58.462 13:16:12 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:11:58.462 13:16:12 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:58.462 13:16:12 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:58.462 13:16:12 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:58.462 13:16:12 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:11:58.462 13:16:12 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:58.462 13:16:12 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:58.462 13:16:12 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:58.462 13:16:12 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:11:58.462 13:16:12 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:58.462 13:16:12 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:58.462 13:16:12 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:58.462 13:16:12 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:11:58.462 13:16:12 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:58.462 13:16:12 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:58.462 13:16:12 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:11:58.462 13:16:12 -- bdev/blockdev.sh@98 -- # rpc_cmd 00:11:58.462 13:16:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.462 13:16:12 -- common/autotest_common.sh@10 -- # set +x 00:11:58.462 13:16:12 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:11:58.462 nvme0n1 00:11:58.462 nvme1n1 00:11:58.462 nvme1n2 00:11:58.462 nvme1n3 00:11:58.462 nvme2n1 00:11:58.462 nvme3n1 00:11:58.462 13:16:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.462 13:16:12 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:11:58.462 13:16:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.462 13:16:12 -- common/autotest_common.sh@10 -- # set +x 00:11:58.462 13:16:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.462 13:16:12 -- bdev/blockdev.sh@738 -- # cat 00:11:58.462 13:16:12 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:11:58.462 13:16:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.462 13:16:12 -- common/autotest_common.sh@10 -- # set +x 00:11:58.462 13:16:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.462 13:16:12 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:11:58.462 13:16:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.462 13:16:12 -- common/autotest_common.sh@10 -- # set +x 00:11:58.462 13:16:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.462 13:16:12 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:58.462 13:16:12 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.462 13:16:12 -- common/autotest_common.sh@10 -- # set +x 00:11:58.462 13:16:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.462 13:16:12 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:11:58.462 13:16:12 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:11:58.462 13:16:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.462 13:16:12 -- common/autotest_common.sh@10 -- # set +x 00:11:58.462 13:16:12 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:11:58.462 13:16:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.462 13:16:12 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:11:58.462 13:16:12 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "4b2cd5c3-e402-42b1-ba2f-6cb89f09614f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "4b2cd5c3-e402-42b1-ba2f-6cb89f09614f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "aa6adfe5-4a94-49b2-91f2-1e3c5e050003"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "aa6adfe5-4a94-49b2-91f2-1e3c5e050003",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "e9551fb2-da2f-455e-851f-7267814389b8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e9551fb2-da2f-455e-851f-7267814389b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "8c925fce-31c2-4efc-9b60-7295d34ebab7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8c925fce-31c2-4efc-9b60-7295d34ebab7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 
'{' ' "name": "nvme2n1",' ' "aliases": [' ' "13fbe4dc-5bf5-4546-8517-38a43e678bb0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "13fbe4dc-5bf5-4546-8517-38a43e678bb0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "0f352e12-9c98-4a8d-860d-f9fa0d9e6a17"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0f352e12-9c98-4a8d-860d-f9fa0d9e6a17",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:11:58.462 13:16:12 -- bdev/blockdev.sh@747 -- # jq -r .name 00:11:58.462 13:16:12 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:11:58.462 13:16:12 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:11:58.462 13:16:12 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:11:58.462 13:16:12 -- bdev/blockdev.sh@752 -- # killprocess 67186 00:11:58.462 13:16:12 -- common/autotest_common.sh@936 -- # '[' -z 67186 ']' 00:11:58.462 13:16:12 -- common/autotest_common.sh@940 -- # kill -0 67186 00:11:58.462 13:16:12 -- common/autotest_common.sh@941 -- # uname 00:11:58.463 13:16:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:58.463 13:16:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67186 00:11:58.463 killing process with pid 67186 00:11:58.463 13:16:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:58.463 13:16:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:58.463 13:16:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67186' 00:11:58.463 13:16:13 -- common/autotest_common.sh@955 -- # kill 67186 00:11:58.463 13:16:13 -- common/autotest_common.sh@960 -- # wait 67186 00:11:59.871 13:16:14 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:59.871 13:16:14 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:11:59.871 13:16:14 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:11:59.871 13:16:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:59.871 13:16:14 -- common/autotest_common.sh@10 -- # set +x 00:11:59.871 ************************************ 00:11:59.871 START TEST bdev_hello_world 00:11:59.871 ************************************ 00:11:59.871 13:16:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:11:59.871 [2024-12-16 13:16:14.255230] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
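hello_bdev is the minimal bdev client: it opens the bdev named by -b out of the same JSON config, writes "Hello World!" to it, reads it back, and exits, which is exactly the NOTICE sequence logged below. The invocation, as run here from the repo root:

  build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1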
00:11:59.871 [2024-12-16 13:16:14.255346] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67580 ] 00:11:59.871 [2024-12-16 13:16:14.405280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.133 [2024-12-16 13:16:14.629891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.705 [2024-12-16 13:16:15.001987] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:00.705 [2024-12-16 13:16:15.002054] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:00.705 [2024-12-16 13:16:15.002073] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:00.705 [2024-12-16 13:16:15.004147] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:00.705 [2024-12-16 13:16:15.004774] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:00.705 [2024-12-16 13:16:15.004801] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:00.705 [2024-12-16 13:16:15.005413] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:12:00.705 00:12:00.705 [2024-12-16 13:16:15.005598] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:01.275 00:12:01.275 real 0m1.533s 00:12:01.275 user 0m1.202s 00:12:01.275 sys 0m0.205s 00:12:01.275 13:16:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:01.275 ************************************ 00:12:01.275 END TEST bdev_hello_world 00:12:01.275 ************************************ 00:12:01.275 13:16:15 -- common/autotest_common.sh@10 -- # set +x 00:12:01.275 13:16:15 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:12:01.275 13:16:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:01.275 13:16:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:01.275 13:16:15 -- common/autotest_common.sh@10 -- # set +x 00:12:01.275 ************************************ 00:12:01.275 START TEST bdev_bounds 00:12:01.275 ************************************ 00:12:01.275 Process bdevio pid: 67617 00:12:01.275 13:16:15 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:12:01.275 13:16:15 -- bdev/blockdev.sh@288 -- # bdevio_pid=67617 00:12:01.275 13:16:15 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:01.275 13:16:15 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 67617' 00:12:01.275 13:16:15 -- bdev/blockdev.sh@291 -- # waitforlisten 67617 00:12:01.275 13:16:15 -- common/autotest_common.sh@829 -- # '[' -z 67617 ']' 00:12:01.275 13:16:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.275 13:16:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:01.275 13:16:15 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:01.275 13:16:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
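bdevio loads the same bdev.json; with -w it starts up and waits on the RPC socket (-s 0 passes the PRE_RESERVED_MEM=0 set earlier), and tests.py perform_tests then fires the write/read/reset/passthru matrix, one CUnit suite per registered bdev, as in the output below. The two-step flow, sketched:

  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &   # start and wait for the RPC trigger
  test/bdev/bdevio/tests.py perform_tests                        # run every suite, then the target exits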
00:12:01.275 13:16:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:01.275 13:16:15 -- common/autotest_common.sh@10 -- # set +x 00:12:01.534 [2024-12-16 13:16:15.853731] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:01.534 [2024-12-16 13:16:15.853854] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67617 ] 00:12:01.534 [2024-12-16 13:16:16.002987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:01.792 [2024-12-16 13:16:16.157227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.792 [2024-12-16 13:16:16.157266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.792 [2024-12-16 13:16:16.157287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.360 13:16:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:02.360 13:16:16 -- common/autotest_common.sh@862 -- # return 0 00:12:02.360 13:16:16 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:02.360 I/O targets: 00:12:02.360 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:02.360 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:02.360 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:02.360 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:02.360 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:02.360 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:02.360 00:12:02.360 00:12:02.360 CUnit - A unit testing framework for C - Version 2.1-3 00:12:02.360 http://cunit.sourceforge.net/ 00:12:02.360 00:12:02.360 00:12:02.360 Suite: bdevio tests on: nvme3n1 00:12:02.360 Test: blockdev write read block ...passed 00:12:02.360 Test: blockdev write zeroes read block ...passed 00:12:02.360 Test: blockdev write zeroes read no split ...passed 00:12:02.360 Test: blockdev write zeroes read split ...passed 00:12:02.360 Test: blockdev write zeroes read split partial ...passed 00:12:02.360 Test: blockdev reset ...passed 00:12:02.360 Test: blockdev write read 8 blocks ...passed 00:12:02.360 Test: blockdev write read size > 128k ...passed 00:12:02.360 Test: blockdev write read invalid size ...passed 00:12:02.360 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:02.360 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:02.360 Test: blockdev write read max offset ...passed 00:12:02.360 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:02.360 Test: blockdev writev readv 8 blocks ...passed 00:12:02.360 Test: blockdev writev readv 30 x 1block ...passed 00:12:02.360 Test: blockdev writev readv block ...passed 00:12:02.360 Test: blockdev writev readv size > 128k ...passed 00:12:02.360 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:02.360 Test: blockdev comparev and writev ...passed 00:12:02.360 Test: blockdev nvme passthru rw ...passed 00:12:02.360 Test: blockdev nvme passthru vendor specific ...passed 00:12:02.360 Test: blockdev nvme admin passthru ...passed 00:12:02.360 Test: blockdev copy ...passed 00:12:02.360 Suite: bdevio tests on: nvme2n1 00:12:02.360 Test: blockdev write read block ...passed 00:12:02.360 Test: blockdev write zeroes read block ...passed 00:12:02.360 Test: blockdev write zeroes read no split ...passed 00:12:02.360 Test: blockdev 
write zeroes read split ...passed 00:12:02.360 Test: blockdev write zeroes read split partial ...passed 00:12:02.360 Test: blockdev reset ...passed 00:12:02.360 Test: blockdev write read 8 blocks ...passed 00:12:02.360 Test: blockdev write read size > 128k ...passed 00:12:02.360 Test: blockdev write read invalid size ...passed 00:12:02.360 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:02.360 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:02.360 Test: blockdev write read max offset ...passed 00:12:02.360 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:02.360 Test: blockdev writev readv 8 blocks ...passed 00:12:02.360 Test: blockdev writev readv 30 x 1block ...passed 00:12:02.360 Test: blockdev writev readv block ...passed 00:12:02.360 Test: blockdev writev readv size > 128k ...passed 00:12:02.360 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:02.360 Test: blockdev comparev and writev ...passed 00:12:02.360 Test: blockdev nvme passthru rw ...passed 00:12:02.360 Test: blockdev nvme passthru vendor specific ...passed 00:12:02.360 Test: blockdev nvme admin passthru ...passed 00:12:02.360 Test: blockdev copy ...passed 00:12:02.360 Suite: bdevio tests on: nvme1n3 00:12:02.360 Test: blockdev write read block ...passed 00:12:02.360 Test: blockdev write zeroes read block ...passed 00:12:02.360 Test: blockdev write zeroes read no split ...passed 00:12:02.360 Test: blockdev write zeroes read split ...passed 00:12:02.360 Test: blockdev write zeroes read split partial ...passed 00:12:02.360 Test: blockdev reset ...passed 00:12:02.360 Test: blockdev write read 8 blocks ...passed 00:12:02.360 Test: blockdev write read size > 128k ...passed 00:12:02.360 Test: blockdev write read invalid size ...passed 00:12:02.360 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:02.360 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:02.360 Test: blockdev write read max offset ...passed 00:12:02.360 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:02.360 Test: blockdev writev readv 8 blocks ...passed 00:12:02.360 Test: blockdev writev readv 30 x 1block ...passed 00:12:02.360 Test: blockdev writev readv block ...passed 00:12:02.360 Test: blockdev writev readv size > 128k ...passed 00:12:02.360 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:02.360 Test: blockdev comparev and writev ...passed 00:12:02.360 Test: blockdev nvme passthru rw ...passed 00:12:02.360 Test: blockdev nvme passthru vendor specific ...passed 00:12:02.361 Test: blockdev nvme admin passthru ...passed 00:12:02.361 Test: blockdev copy ...passed 00:12:02.361 Suite: bdevio tests on: nvme1n2 00:12:02.361 Test: blockdev write read block ...passed 00:12:02.361 Test: blockdev write zeroes read block ...passed 00:12:02.361 Test: blockdev write zeroes read no split ...passed 00:12:02.628 Test: blockdev write zeroes read split ...passed 00:12:02.628 Test: blockdev write zeroes read split partial ...passed 00:12:02.628 Test: blockdev reset ...passed 00:12:02.628 Test: blockdev write read 8 blocks ...passed 00:12:02.628 Test: blockdev write read size > 128k ...passed 00:12:02.628 Test: blockdev write read invalid size ...passed 00:12:02.628 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:02.628 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:02.628 Test: blockdev write read max offset 
...passed 00:12:02.628 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:02.628 Test: blockdev writev readv 8 blocks ...passed 00:12:02.628 Test: blockdev writev readv 30 x 1block ...passed 00:12:02.628 Test: blockdev writev readv block ...passed 00:12:02.628 Test: blockdev writev readv size > 128k ...passed 00:12:02.628 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:02.628 Test: blockdev comparev and writev ...passed 00:12:02.628 Test: blockdev nvme passthru rw ...passed 00:12:02.628 Test: blockdev nvme passthru vendor specific ...passed 00:12:02.628 Test: blockdev nvme admin passthru ...passed 00:12:02.628 Test: blockdev copy ...passed 00:12:02.628 Suite: bdevio tests on: nvme1n1 00:12:02.628 Test: blockdev write read block ...passed 00:12:02.628 Test: blockdev write zeroes read block ...passed 00:12:02.628 Test: blockdev write zeroes read no split ...passed 00:12:02.628 Test: blockdev write zeroes read split ...passed 00:12:02.628 Test: blockdev write zeroes read split partial ...passed 00:12:02.628 Test: blockdev reset ...passed 00:12:02.628 Test: blockdev write read 8 blocks ...passed 00:12:02.628 Test: blockdev write read size > 128k ...passed 00:12:02.628 Test: blockdev write read invalid size ...passed 00:12:02.628 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:02.628 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:02.628 Test: blockdev write read max offset ...passed 00:12:02.628 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:02.628 Test: blockdev writev readv 8 blocks ...passed 00:12:02.628 Test: blockdev writev readv 30 x 1block ...passed 00:12:02.628 Test: blockdev writev readv block ...passed 00:12:02.628 Test: blockdev writev readv size > 128k ...passed 00:12:02.628 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:02.628 Test: blockdev comparev and writev ...passed 00:12:02.628 Test: blockdev nvme passthru rw ...passed 00:12:02.628 Test: blockdev nvme passthru vendor specific ...passed 00:12:02.628 Test: blockdev nvme admin passthru ...passed 00:12:02.628 Test: blockdev copy ...passed 00:12:02.628 Suite: bdevio tests on: nvme0n1 00:12:02.628 Test: blockdev write read block ...passed 00:12:02.628 Test: blockdev write zeroes read block ...passed 00:12:02.628 Test: blockdev write zeroes read no split ...passed 00:12:02.628 Test: blockdev write zeroes read split ...passed 00:12:02.628 Test: blockdev write zeroes read split partial ...passed 00:12:02.628 Test: blockdev reset ...passed 00:12:02.628 Test: blockdev write read 8 blocks ...passed 00:12:02.628 Test: blockdev write read size > 128k ...passed 00:12:02.628 Test: blockdev write read invalid size ...passed 00:12:02.628 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:02.628 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:02.628 Test: blockdev write read max offset ...passed 00:12:02.628 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:02.628 Test: blockdev writev readv 8 blocks ...passed 00:12:02.628 Test: blockdev writev readv 30 x 1block ...passed 00:12:02.628 Test: blockdev writev readv block ...passed 00:12:02.628 Test: blockdev writev readv size > 128k ...passed 00:12:02.628 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:02.628 Test: blockdev comparev and writev ...passed 00:12:02.628 Test: blockdev nvme passthru rw ...passed 00:12:02.628 Test: 
blockdev nvme passthru vendor specific ...passed
00:12:02.628 Test: blockdev nvme admin passthru ...passed
00:12:02.628 Test: blockdev copy ...passed
00:12:02.628
00:12:02.628 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:12:02.628               suites      6      6    n/a      0        0
00:12:02.628                tests    138    138    138      0        0
00:12:02.628              asserts    780    780    780      0      n/a
00:12:02.628
00:12:02.628 Elapsed time =    0.854 seconds
00:12:02.628 0
00:12:02.628 13:16:17 -- bdev/blockdev.sh@293 -- # killprocess 67617
00:12:02.628 13:16:17 -- common/autotest_common.sh@936 -- # '[' -z 67617 ']'
00:12:02.628 13:16:17 -- common/autotest_common.sh@940 -- # kill -0 67617
00:12:02.628 13:16:17 -- common/autotest_common.sh@941 -- # uname
00:12:02.629 13:16:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:12:02.629 13:16:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67617
00:12:02.629 13:16:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:12:02.629 13:16:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:12:02.629 13:16:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67617'
killing process with pid 67617
00:12:02.629 13:16:17 -- common/autotest_common.sh@955 -- # kill 67617
00:12:02.629 13:16:17 -- common/autotest_common.sh@960 -- # wait 67617
00:12:03.199 13:16:17 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT
00:12:03.199
00:12:03.199 real 0m1.938s
00:12:03.199 user 0m4.608s
00:12:03.199 sys 0m0.281s
00:12:03.199 13:16:17 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:03.199 ************************************
00:12:03.199 END TEST bdev_bounds ************************************
00:12:03.199 13:16:17 -- common/autotest_common.sh@10 -- # set +x
00:12:03.459 13:16:17 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' ''
00:12:03.459 13:16:17 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:12:03.459 13:16:17 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:03.459 13:16:17 -- common/autotest_common.sh@10 -- # set +x
00:12:03.459 ************************************
00:12:03.459 START TEST bdev_nbd ************************************
00:12:03.459 13:16:17 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' ''
00:12:03.459 13:16:17 -- bdev/blockdev.sh@298 -- # uname -s
00:12:03.459 13:16:17 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]]
00:12:03.459 13:16:17 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:03.459 13:16:17 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:12:03.459 13:16:17 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1')
00:12:03.459 13:16:17 -- bdev/blockdev.sh@302 -- # local bdev_all
00:12:03.459 13:16:17 -- bdev/blockdev.sh@303 -- # local bdev_num=6
00:12:03.459 13:16:17 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]]
00:12:03.459 13:16:17 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:12:03.459 13:16:17 -- bdev/blockdev.sh@309 -- # local nbd_all
00:12:03.459 13:16:17 -- bdev/blockdev.sh@310 -- # bdev_num=6
00:12:03.459
13:16:17 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:03.459 13:16:17 -- bdev/blockdev.sh@312 -- # local nbd_list 00:12:03.459 13:16:17 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:03.459 13:16:17 -- bdev/blockdev.sh@313 -- # local bdev_list 00:12:03.459 13:16:17 -- bdev/blockdev.sh@316 -- # nbd_pid=67671 00:12:03.459 13:16:17 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:03.459 13:16:17 -- bdev/blockdev.sh@318 -- # waitforlisten 67671 /var/tmp/spdk-nbd.sock 00:12:03.459 13:16:17 -- common/autotest_common.sh@829 -- # '[' -z 67671 ']' 00:12:03.459 13:16:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:03.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:03.459 13:16:17 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:03.459 13:16:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:03.459 13:16:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:03.459 13:16:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:03.459 13:16:17 -- common/autotest_common.sh@10 -- # set +x 00:12:03.459 [2024-12-16 13:16:17.867530] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:03.459 [2024-12-16 13:16:17.867656] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.459 [2024-12-16 13:16:18.018657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.718 [2024-12-16 13:16:18.168431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.285 13:16:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:04.285 13:16:18 -- common/autotest_common.sh@862 -- # return 0 00:12:04.285 13:16:18 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:12:04.285 13:16:18 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:04.285 13:16:18 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:04.285 13:16:18 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:04.285 13:16:18 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:12:04.285 13:16:18 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:04.285 13:16:18 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:04.285 13:16:18 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:04.285 13:16:18 -- bdev/nbd_common.sh@24 -- # local i 00:12:04.285 13:16:18 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:04.285 13:16:18 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:04.285 13:16:18 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:04.285 13:16:18 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:04.544 13:16:18 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:04.544 13:16:18 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:04.544 13:16:18 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:04.544 13:16:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:04.544 13:16:18 -- common/autotest_common.sh@867 -- # local i 00:12:04.544 13:16:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:04.544 13:16:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:04.544 13:16:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:04.544 13:16:18 -- common/autotest_common.sh@871 -- # break 00:12:04.544 13:16:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:04.544 13:16:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:04.544 13:16:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.544 1+0 records in 00:12:04.544 1+0 records out 00:12:04.544 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548787 s, 7.5 MB/s 00:12:04.544 13:16:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.544 13:16:18 -- common/autotest_common.sh@884 -- # size=4096 00:12:04.544 13:16:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.544 13:16:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:04.544 13:16:18 -- common/autotest_common.sh@887 -- # return 0 00:12:04.544 13:16:18 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:04.544 13:16:18 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:04.544 13:16:18 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:04.805 13:16:19 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:04.805 13:16:19 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:04.805 13:16:19 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:04.805 13:16:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:04.805 13:16:19 -- common/autotest_common.sh@867 -- # local i 00:12:04.805 13:16:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:04.805 13:16:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:04.805 13:16:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:04.805 13:16:19 -- common/autotest_common.sh@871 -- # break 00:12:04.805 13:16:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:04.805 13:16:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:04.805 13:16:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.805 1+0 records in 00:12:04.805 1+0 records out 00:12:04.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469813 s, 8.7 MB/s 00:12:04.805 13:16:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.805 13:16:19 -- common/autotest_common.sh@884 -- # size=4096 00:12:04.805 13:16:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.805 13:16:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:04.805 13:16:19 -- common/autotest_common.sh@887 -- # return 0 00:12:04.805 13:16:19 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:04.805 13:16:19 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:04.805 13:16:19 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:12:04.805 13:16:19 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:04.805 13:16:19 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:04.805 13:16:19 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:04.805 13:16:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:04.805 13:16:19 -- common/autotest_common.sh@867 -- # local i 00:12:04.805 13:16:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:04.805 13:16:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:04.805 13:16:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:04.805 13:16:19 -- common/autotest_common.sh@871 -- # break 00:12:04.805 13:16:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:04.805 13:16:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:04.805 13:16:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.805 1+0 records in 00:12:04.805 1+0 records out 00:12:04.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108207 s, 3.8 MB/s 00:12:04.805 13:16:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.805 13:16:19 -- common/autotest_common.sh@884 -- # size=4096 00:12:04.805 13:16:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.805 13:16:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:04.805 13:16:19 -- common/autotest_common.sh@887 -- # return 0 00:12:04.805 13:16:19 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:04.805 13:16:19 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:04.805 13:16:19 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:12:05.067 13:16:19 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:05.067 13:16:19 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:05.067 13:16:19 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:05.067 13:16:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:05.067 13:16:19 -- common/autotest_common.sh@867 -- # local i 00:12:05.067 13:16:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:05.067 13:16:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:05.067 13:16:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:12:05.067 13:16:19 -- common/autotest_common.sh@871 -- # break 00:12:05.067 13:16:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:05.067 13:16:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:05.067 13:16:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.067 1+0 records in 00:12:05.067 1+0 records out 00:12:05.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107277 s, 3.8 MB/s 00:12:05.067 13:16:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.067 13:16:19 -- common/autotest_common.sh@884 -- # size=4096 00:12:05.067 13:16:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.067 13:16:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:05.067 13:16:19 -- common/autotest_common.sh@887 -- # return 0 00:12:05.067 13:16:19 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:05.067 13:16:19 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:05.067 13:16:19 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:05.330 13:16:19 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:05.330 13:16:19 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:05.330 13:16:19 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:05.330 13:16:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:05.330 13:16:19 -- common/autotest_common.sh@867 -- # local i 00:12:05.330 13:16:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:05.330 13:16:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:05.330 13:16:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:12:05.330 13:16:19 -- common/autotest_common.sh@871 -- # break 00:12:05.330 13:16:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:05.330 13:16:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:05.330 13:16:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.330 1+0 records in 00:12:05.330 1+0 records out 00:12:05.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116147 s, 3.5 MB/s 00:12:05.330 13:16:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.330 13:16:19 -- common/autotest_common.sh@884 -- # size=4096 00:12:05.330 13:16:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.330 13:16:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:05.330 13:16:19 -- common/autotest_common.sh@887 -- # return 0 00:12:05.330 13:16:19 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:05.330 13:16:19 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:05.330 13:16:19 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:05.591 13:16:20 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:05.591 13:16:20 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:05.591 13:16:20 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:05.591 13:16:20 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:05.591 13:16:20 -- common/autotest_common.sh@867 -- # local i 00:12:05.591 13:16:20 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:05.591 13:16:20 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:05.591 13:16:20 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:05.592 13:16:20 -- common/autotest_common.sh@871 -- # break 00:12:05.592 13:16:20 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:05.592 13:16:20 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:05.592 13:16:20 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.592 1+0 records in 00:12:05.592 1+0 records out 00:12:05.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00169713 s, 2.4 MB/s 00:12:05.592 13:16:20 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.592 13:16:20 -- common/autotest_common.sh@884 -- # size=4096 00:12:05.592 13:16:20 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.592 13:16:20 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:05.592 13:16:20 -- common/autotest_common.sh@887 -- # return 0 
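
Each bdev in this loop is handed to the kernel through the nbd_start_disk RPC, after which waitfornbd polls /proc/partitions (up to 20 tries, as the loop above shows) and proves the device with one direct 4 KiB read. A condensed sketch of that bring-up; the sleep between retries and the /tmp scratch path are assumptions for illustration, not what nbd_common.sh literally uses:

    start_and_check_nbd() {
        local bdev=$1 dev=$2 name=${2#/dev/}
        ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk "$bdev" "$dev"
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions && break   # kernel sees the disk
            sleep 0.1                                      # assumed pacing
        done
        dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        [ "$(stat -c %s /tmp/nbdtest)" != 0 ]              # a 4096-byte read proves I/O works
        rm -f /tmp/nbdtest
    }
    start_and_check_nbd nvme3n1 /dev/nbd5
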
00:12:05.592 13:16:20 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:05.592 13:16:20 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:05.592 13:16:20 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:05.854 13:16:20 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:05.854 { 00:12:05.854 "nbd_device": "/dev/nbd0", 00:12:05.854 "bdev_name": "nvme0n1" 00:12:05.854 }, 00:12:05.854 { 00:12:05.854 "nbd_device": "/dev/nbd1", 00:12:05.854 "bdev_name": "nvme1n1" 00:12:05.854 }, 00:12:05.854 { 00:12:05.854 "nbd_device": "/dev/nbd2", 00:12:05.854 "bdev_name": "nvme1n2" 00:12:05.854 }, 00:12:05.854 { 00:12:05.854 "nbd_device": "/dev/nbd3", 00:12:05.854 "bdev_name": "nvme1n3" 00:12:05.854 }, 00:12:05.854 { 00:12:05.854 "nbd_device": "/dev/nbd4", 00:12:05.854 "bdev_name": "nvme2n1" 00:12:05.854 }, 00:12:05.854 { 00:12:05.854 "nbd_device": "/dev/nbd5", 00:12:05.854 "bdev_name": "nvme3n1" 00:12:05.854 } 00:12:05.854 ]' 00:12:05.854 13:16:20 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:05.854 13:16:20 -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:05.854 { 00:12:05.854 "nbd_device": "/dev/nbd0", 00:12:05.854 "bdev_name": "nvme0n1" 00:12:05.854 }, 00:12:05.854 { 00:12:05.854 "nbd_device": "/dev/nbd1", 00:12:05.854 "bdev_name": "nvme1n1" 00:12:05.854 }, 00:12:05.854 { 00:12:05.854 "nbd_device": "/dev/nbd2", 00:12:05.854 "bdev_name": "nvme1n2" 00:12:05.854 }, 00:12:05.854 { 00:12:05.854 "nbd_device": "/dev/nbd3", 00:12:05.854 "bdev_name": "nvme1n3" 00:12:05.854 }, 00:12:05.854 { 00:12:05.854 "nbd_device": "/dev/nbd4", 00:12:05.854 "bdev_name": "nvme2n1" 00:12:05.854 }, 00:12:05.854 { 00:12:05.854 "nbd_device": "/dev/nbd5", 00:12:05.854 "bdev_name": "nvme3n1" 00:12:05.854 } 00:12:05.854 ]' 00:12:05.854 13:16:20 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:05.854 13:16:20 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:05.854 13:16:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:05.854 13:16:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:05.854 13:16:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:05.854 13:16:20 -- bdev/nbd_common.sh@51 -- # local i 00:12:05.854 13:16:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:05.854 13:16:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:06.116 13:16:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:06.116 13:16:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:06.116 13:16:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:06.116 13:16:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.116 13:16:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.116 13:16:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:06.116 13:16:20 -- bdev/nbd_common.sh@41 -- # break 00:12:06.116 13:16:20 -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.116 13:16:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.116 13:16:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:06.376 13:16:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:06.376 13:16:20 -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:12:06.376 13:16:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:06.376 13:16:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.376 13:16:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.376 13:16:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:06.376 13:16:20 -- bdev/nbd_common.sh@41 -- # break 00:12:06.376 13:16:20 -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.376 13:16:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.376 13:16:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:06.376 13:16:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:06.376 13:16:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:06.376 13:16:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:06.376 13:16:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.376 13:16:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.376 13:16:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:06.376 13:16:20 -- bdev/nbd_common.sh@41 -- # break 00:12:06.376 13:16:20 -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.376 13:16:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.376 13:16:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:06.634 13:16:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:06.634 13:16:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:06.634 13:16:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:06.634 13:16:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.635 13:16:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.635 13:16:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:06.635 13:16:21 -- bdev/nbd_common.sh@41 -- # break 00:12:06.635 13:16:21 -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.635 13:16:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.635 13:16:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:06.893 13:16:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:06.893 13:16:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:06.893 13:16:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:06.893 13:16:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.893 13:16:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.893 13:16:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:06.893 13:16:21 -- bdev/nbd_common.sh@41 -- # break 00:12:06.893 13:16:21 -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.893 13:16:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.893 13:16:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:07.152 13:16:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:07.152 13:16:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:07.152 13:16:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:07.152 13:16:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:07.152 13:16:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:07.152 13:16:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:07.152 13:16:21 -- bdev/nbd_common.sh@41 -- # break 00:12:07.152 13:16:21 -- bdev/nbd_common.sh@45 -- # return 0 00:12:07.152 13:16:21 -- 
bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:07.152 13:16:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:07.152 13:16:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:07.152 13:16:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:07.152 13:16:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:07.152 13:16:21 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@65 -- # true 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@65 -- # count=0 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@122 -- # count=0 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@127 -- # return 0 00:12:07.413 13:16:21 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@12 -- # local i 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:07.413 /dev/nbd0 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:07.413 13:16:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:07.413 13:16:21 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:07.413 13:16:21 -- common/autotest_common.sh@867 -- # local i 00:12:07.414 13:16:21 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:07.414 13:16:21 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:07.414 13:16:21 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:07.414 13:16:21 -- common/autotest_common.sh@871 -- # break 00:12:07.414 13:16:21 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:07.414 13:16:21 -- common/autotest_common.sh@882 -- 
# (( i <= 20 )) 00:12:07.414 13:16:21 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.414 1+0 records in 00:12:07.414 1+0 records out 00:12:07.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631481 s, 6.5 MB/s 00:12:07.414 13:16:21 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.414 13:16:21 -- common/autotest_common.sh@884 -- # size=4096 00:12:07.414 13:16:21 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.414 13:16:21 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:07.414 13:16:21 -- common/autotest_common.sh@887 -- # return 0 00:12:07.414 13:16:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:07.414 13:16:21 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:07.414 13:16:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:07.675 /dev/nbd1 00:12:07.675 13:16:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:07.675 13:16:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:07.675 13:16:22 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:07.675 13:16:22 -- common/autotest_common.sh@867 -- # local i 00:12:07.675 13:16:22 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:07.675 13:16:22 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:07.675 13:16:22 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:07.675 13:16:22 -- common/autotest_common.sh@871 -- # break 00:12:07.675 13:16:22 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:07.675 13:16:22 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:07.675 13:16:22 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.675 1+0 records in 00:12:07.675 1+0 records out 00:12:07.675 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.001121 s, 3.7 MB/s 00:12:07.675 13:16:22 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.675 13:16:22 -- common/autotest_common.sh@884 -- # size=4096 00:12:07.675 13:16:22 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.675 13:16:22 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:07.675 13:16:22 -- common/autotest_common.sh@887 -- # return 0 00:12:07.675 13:16:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:07.675 13:16:22 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:07.675 13:16:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:12:07.935 /dev/nbd10 00:12:07.935 13:16:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:07.935 13:16:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:07.935 13:16:22 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:07.935 13:16:22 -- common/autotest_common.sh@867 -- # local i 00:12:07.935 13:16:22 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:07.935 13:16:22 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:07.935 13:16:22 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:07.935 13:16:22 -- common/autotest_common.sh@871 -- # break 00:12:07.935 13:16:22 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:07.935 13:16:22 -- 
common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:07.935 13:16:22 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.935 1+0 records in 00:12:07.935 1+0 records out 00:12:07.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000999229 s, 4.1 MB/s 00:12:07.935 13:16:22 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.935 13:16:22 -- common/autotest_common.sh@884 -- # size=4096 00:12:07.935 13:16:22 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.935 13:16:22 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:07.935 13:16:22 -- common/autotest_common.sh@887 -- # return 0 00:12:07.935 13:16:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:07.935 13:16:22 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:07.935 13:16:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:12:08.196 /dev/nbd11 00:12:08.196 13:16:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:08.196 13:16:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:08.196 13:16:22 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:12:08.196 13:16:22 -- common/autotest_common.sh@867 -- # local i 00:12:08.196 13:16:22 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:08.196 13:16:22 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:08.196 13:16:22 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:08.196 13:16:22 -- common/autotest_common.sh@871 -- # break 00:12:08.196 13:16:22 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:08.196 13:16:22 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:08.196 13:16:22 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.196 1+0 records in 00:12:08.196 1+0 records out 00:12:08.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491227 s, 8.3 MB/s 00:12:08.196 13:16:22 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.196 13:16:22 -- common/autotest_common.sh@884 -- # size=4096 00:12:08.196 13:16:22 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.196 13:16:22 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:08.196 13:16:22 -- common/autotest_common.sh@887 -- # return 0 00:12:08.197 13:16:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.197 13:16:22 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:08.197 13:16:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:12:08.458 /dev/nbd12 00:12:08.458 13:16:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:08.458 13:16:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:08.458 13:16:22 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:08.458 13:16:22 -- common/autotest_common.sh@867 -- # local i 00:12:08.458 13:16:22 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:08.458 13:16:22 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:08.458 13:16:22 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:12:08.458 13:16:22 -- common/autotest_common.sh@871 -- # break 00:12:08.458 13:16:22 -- common/autotest_common.sh@882 -- # (( i = 1 )) 
00:12:08.458 13:16:22 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:08.458 13:16:22 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.458 1+0 records in 00:12:08.458 1+0 records out 00:12:08.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00142788 s, 2.9 MB/s 00:12:08.458 13:16:22 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.458 13:16:22 -- common/autotest_common.sh@884 -- # size=4096 00:12:08.458 13:16:22 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.458 13:16:22 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:08.458 13:16:22 -- common/autotest_common.sh@887 -- # return 0 00:12:08.458 13:16:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.458 13:16:22 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:08.458 13:16:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:08.718 /dev/nbd13 00:12:08.718 13:16:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:08.718 13:16:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:08.718 13:16:23 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:08.718 13:16:23 -- common/autotest_common.sh@867 -- # local i 00:12:08.718 13:16:23 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:08.718 13:16:23 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:08.718 13:16:23 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:12:08.718 13:16:23 -- common/autotest_common.sh@871 -- # break 00:12:08.718 13:16:23 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:08.718 13:16:23 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:08.718 13:16:23 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.718 1+0 records in 00:12:08.718 1+0 records out 00:12:08.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000741969 s, 5.5 MB/s 00:12:08.718 13:16:23 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.718 13:16:23 -- common/autotest_common.sh@884 -- # size=4096 00:12:08.718 13:16:23 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.718 13:16:23 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:08.718 13:16:23 -- common/autotest_common.sh@887 -- # return 0 00:12:08.718 13:16:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.718 13:16:23 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:08.718 13:16:23 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:08.718 13:16:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:08.718 13:16:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:08.718 13:16:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:08.718 { 00:12:08.718 "nbd_device": "/dev/nbd0", 00:12:08.718 "bdev_name": "nvme0n1" 00:12:08.718 }, 00:12:08.718 { 00:12:08.718 "nbd_device": "/dev/nbd1", 00:12:08.718 "bdev_name": "nvme1n1" 00:12:08.718 }, 00:12:08.718 { 00:12:08.718 "nbd_device": "/dev/nbd10", 00:12:08.718 "bdev_name": "nvme1n2" 00:12:08.718 }, 00:12:08.718 { 00:12:08.718 "nbd_device": "/dev/nbd11", 00:12:08.718 "bdev_name": "nvme1n3" 00:12:08.718 }, 00:12:08.718 { 
00:12:08.718 "nbd_device": "/dev/nbd12", 00:12:08.718 "bdev_name": "nvme2n1" 00:12:08.718 }, 00:12:08.718 { 00:12:08.718 "nbd_device": "/dev/nbd13", 00:12:08.718 "bdev_name": "nvme3n1" 00:12:08.718 } 00:12:08.718 ]' 00:12:08.979 13:16:23 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:08.979 { 00:12:08.979 "nbd_device": "/dev/nbd0", 00:12:08.979 "bdev_name": "nvme0n1" 00:12:08.979 }, 00:12:08.979 { 00:12:08.979 "nbd_device": "/dev/nbd1", 00:12:08.979 "bdev_name": "nvme1n1" 00:12:08.979 }, 00:12:08.979 { 00:12:08.979 "nbd_device": "/dev/nbd10", 00:12:08.979 "bdev_name": "nvme1n2" 00:12:08.979 }, 00:12:08.979 { 00:12:08.979 "nbd_device": "/dev/nbd11", 00:12:08.979 "bdev_name": "nvme1n3" 00:12:08.979 }, 00:12:08.979 { 00:12:08.979 "nbd_device": "/dev/nbd12", 00:12:08.979 "bdev_name": "nvme2n1" 00:12:08.979 }, 00:12:08.979 { 00:12:08.979 "nbd_device": "/dev/nbd13", 00:12:08.979 "bdev_name": "nvme3n1" 00:12:08.979 } 00:12:08.979 ]' 00:12:08.979 13:16:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:08.979 13:16:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:08.979 /dev/nbd1 00:12:08.979 /dev/nbd10 00:12:08.979 /dev/nbd11 00:12:08.979 /dev/nbd12 00:12:08.979 /dev/nbd13' 00:12:08.979 13:16:23 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:08.979 /dev/nbd1 00:12:08.979 /dev/nbd10 00:12:08.979 /dev/nbd11 00:12:08.979 /dev/nbd12 00:12:08.979 /dev/nbd13' 00:12:08.979 13:16:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:08.979 13:16:23 -- bdev/nbd_common.sh@65 -- # count=6 00:12:08.979 13:16:23 -- bdev/nbd_common.sh@66 -- # echo 6 00:12:08.979 13:16:23 -- bdev/nbd_common.sh@95 -- # count=6 00:12:08.979 13:16:23 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:08.979 13:16:23 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:08.979 13:16:23 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:08.979 13:16:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:08.979 13:16:23 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:08.979 13:16:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:08.979 13:16:23 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:08.979 13:16:23 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:08.979 256+0 records in 00:12:08.979 256+0 records out 00:12:08.979 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00932838 s, 112 MB/s 00:12:08.979 13:16:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:08.979 13:16:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:08.979 256+0 records in 00:12:08.979 256+0 records out 00:12:08.979 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0965909 s, 10.9 MB/s 00:12:08.979 13:16:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:08.979 13:16:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:09.240 256+0 records in 00:12:09.240 256+0 records out 00:12:09.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157547 s, 6.7 MB/s 00:12:09.240 13:16:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.240 13:16:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 
bs=4096 count=256 oflag=direct 00:12:09.240 256+0 records in 00:12:09.240 256+0 records out 00:12:09.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.180233 s, 5.8 MB/s 00:12:09.240 13:16:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.240 13:16:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:09.505 256+0 records in 00:12:09.505 256+0 records out 00:12:09.505 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.208902 s, 5.0 MB/s 00:12:09.505 13:16:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.505 13:16:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:09.764 256+0 records in 00:12:09.764 256+0 records out 00:12:09.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.27137 s, 3.9 MB/s 00:12:09.764 13:16:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.764 13:16:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:10.025 256+0 records in 00:12:10.025 256+0 records out 00:12:10.025 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.235595 s, 4.5 MB/s 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
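
The write pass above pushes the same 1 MiB of random data through every exported device with O_DIRECT, and the verify pass re-reads each one with a byte-wise cmp against the source file before teardown begins. The same round trip, condensed, with the device list and transfer sizes exactly as used here:

    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256        # 1 MiB of test data
    for dev in "${nbd_list[@]}"; do
        dd if=nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M nbdrandtest "$dev"                        # fails loudly on any mismatch
    done
    rm nbdrandtest
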
00:12:10.025 13:16:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@51 -- # local i 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.025 13:16:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:10.287 13:16:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:10.287 13:16:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:10.287 13:16:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:10.287 13:16:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:10.287 13:16:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:10.287 13:16:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:10.287 13:16:24 -- bdev/nbd_common.sh@41 -- # break 00:12:10.287 13:16:24 -- bdev/nbd_common.sh@45 -- # return 0 00:12:10.287 13:16:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.287 13:16:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:10.547 13:16:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:10.547 13:16:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:10.547 13:16:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:10.547 13:16:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:10.547 13:16:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:10.547 13:16:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:10.547 13:16:24 -- bdev/nbd_common.sh@41 -- # break 00:12:10.547 13:16:24 -- bdev/nbd_common.sh@45 -- # return 0 00:12:10.547 13:16:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.547 13:16:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:10.804 13:16:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:10.804 13:16:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:10.804 13:16:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:10.804 13:16:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:10.804 13:16:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:10.804 13:16:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:10.804 13:16:25 -- bdev/nbd_common.sh@41 -- # break 00:12:10.804 13:16:25 -- bdev/nbd_common.sh@45 -- # return 0 00:12:10.804 13:16:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.804 13:16:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:11.062 13:16:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:11.062 13:16:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:11.062 13:16:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:11.062 13:16:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.062 13:16:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.062 13:16:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:11.062 13:16:25 -- bdev/nbd_common.sh@41 -- # break 00:12:11.062 13:16:25 -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.062 13:16:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.062 13:16:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:11.062 13:16:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:11.062 13:16:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:11.062 13:16:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:11.062 13:16:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.062 13:16:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.062 13:16:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:11.062 13:16:25 -- bdev/nbd_common.sh@41 -- # break 00:12:11.062 13:16:25 -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.062 13:16:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.062 13:16:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:11.321 13:16:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:11.321 13:16:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:11.321 13:16:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:11.321 13:16:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.321 13:16:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.321 13:16:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:11.321 13:16:25 -- bdev/nbd_common.sh@41 -- # break 00:12:11.321 13:16:25 -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.321 13:16:25 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:11.321 13:16:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:11.321 13:16:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:11.580 13:16:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:11.580 13:16:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:11.580 13:16:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:11.580 13:16:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:11.580 13:16:26 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:11.580 13:16:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:11.580 13:16:26 -- bdev/nbd_common.sh@65 -- # true 00:12:11.580 13:16:26 -- bdev/nbd_common.sh@65 -- # count=0 00:12:11.580 13:16:26 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:11.580 13:16:26 -- bdev/nbd_common.sh@104 -- # count=0 00:12:11.580 13:16:26 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:11.580 13:16:26 -- bdev/nbd_common.sh@109 -- # return 0 00:12:11.580 13:16:26 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:11.580 13:16:26 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:11.580 13:16:26 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:11.580 13:16:26 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:11.580 13:16:26 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:11.580 13:16:26 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:11.838 malloc_lvol_verify 00:12:11.838 13:16:26 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:11.838 0f1e58bf-9a58-4f3b-8829-c48acdfe7a18 00:12:11.838 13:16:26 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
bdev_lvol_create lvol 4 -l lvs 00:12:12.096 87a71ae0-3eba-4886-8cd3-0feb61362cba 00:12:12.096 13:16:26 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:12.355 /dev/nbd0 00:12:12.355 13:16:26 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:12.355 mke2fs 1.47.0 (5-Feb-2023) 00:12:12.355 Discarding device blocks: 0/4096 done 00:12:12.355 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:12.355 00:12:12.355 Allocating group tables: 0/1 done 00:12:12.355 Writing inode tables: 0/1 done 00:12:12.355 Creating journal (1024 blocks): done 00:12:12.355 Writing superblocks and filesystem accounting information: 0/1 done 00:12:12.355 00:12:12.355 13:16:26 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:12.355 13:16:26 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:12.355 13:16:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:12.355 13:16:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:12.355 13:16:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:12.355 13:16:26 -- bdev/nbd_common.sh@51 -- # local i 00:12:12.355 13:16:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.355 13:16:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:12.613 13:16:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:12.613 13:16:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:12.613 13:16:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:12.613 13:16:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.613 13:16:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.613 13:16:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:12.613 13:16:27 -- bdev/nbd_common.sh@41 -- # break 00:12:12.613 13:16:27 -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.613 13:16:27 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:12.613 13:16:27 -- bdev/nbd_common.sh@147 -- # return 0 00:12:12.613 13:16:27 -- bdev/blockdev.sh@324 -- # killprocess 67671 00:12:12.613 13:16:27 -- common/autotest_common.sh@936 -- # '[' -z 67671 ']' 00:12:12.613 13:16:27 -- common/autotest_common.sh@940 -- # kill -0 67671 00:12:12.613 13:16:27 -- common/autotest_common.sh@941 -- # uname 00:12:12.613 13:16:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:12.613 13:16:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67671 00:12:12.613 13:16:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:12.613 killing process with pid 67671 00:12:12.614 13:16:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:12.614 13:16:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67671' 00:12:12.614 13:16:27 -- common/autotest_common.sh@955 -- # kill 67671 00:12:12.614 13:16:27 -- common/autotest_common.sh@960 -- # wait 67671 00:12:13.182 13:16:27 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:12:13.182 00:12:13.182 real 0m9.887s 00:12:13.182 user 0m13.363s 00:12:13.182 sys 0m3.410s 00:12:13.182 13:16:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:13.182 ************************************ 00:12:13.182 13:16:27 -- common/autotest_common.sh@10 -- # set +x 00:12:13.182 END TEST bdev_nbd 00:12:13.182 ************************************ 00:12:13.182 13:16:27 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:12:13.182 13:16:27 -- 
bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:12:13.182 13:16:27 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:12:13.182 13:16:27 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:12:13.182 13:16:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:13.182 13:16:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:13.182 13:16:27 -- common/autotest_common.sh@10 -- # set +x 00:12:13.182 ************************************ 00:12:13.182 START TEST bdev_fio 00:12:13.182 ************************************ 00:12:13.182 13:16:27 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:12:13.182 13:16:27 -- bdev/blockdev.sh@329 -- # local env_context 00:12:13.182 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:13.182 13:16:27 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:13.182 13:16:27 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:13.182 13:16:27 -- bdev/blockdev.sh@337 -- # echo '' 00:12:13.182 13:16:27 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:12:13.443 13:16:27 -- bdev/blockdev.sh@337 -- # env_context= 00:12:13.444 13:16:27 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:13.444 13:16:27 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:13.444 13:16:27 -- common/autotest_common.sh@1270 -- # local workload=verify 00:12:13.444 13:16:27 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:12:13.444 13:16:27 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:13.444 13:16:27 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:13.444 13:16:27 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:13.444 13:16:27 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:12:13.444 13:16:27 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:13.444 13:16:27 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:13.444 13:16:27 -- common/autotest_common.sh@1290 -- # cat 00:12:13.444 13:16:27 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:12:13.444 13:16:27 -- common/autotest_common.sh@1303 -- # cat 00:12:13.444 13:16:27 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:12:13.444 13:16:27 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:12:13.444 13:16:27 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:13.444 13:16:27 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:12:13.444 13:16:27 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:13.444 13:16:27 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:12:13.444 13:16:27 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:12:13.444 13:16:27 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:13.444 13:16:27 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:12:13.444 13:16:27 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:12:13.444 13:16:27 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:13.444 13:16:27 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:12:13.444 13:16:27 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:12:13.444 13:16:27 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:13.444 13:16:27 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 
00:12:13.444 13:16:27 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:12:13.444 13:16:27 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:13.444 13:16:27 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:12:13.444 13:16:27 -- bdev/blockdev.sh@341 -- # echo filename=nvme2n1 00:12:13.444 13:16:27 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:13.444 13:16:27 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:12:13.444 13:16:27 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:12:13.444 13:16:27 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:13.444 13:16:27 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:13.444 13:16:27 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:13.444 13:16:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:13.444 13:16:27 -- common/autotest_common.sh@10 -- # set +x 00:12:13.444 ************************************ 00:12:13.444 START TEST bdev_fio_rw_verify 00:12:13.444 ************************************ 00:12:13.444 13:16:27 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:13.444 13:16:27 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:13.444 13:16:27 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:12:13.444 13:16:27 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:13.444 13:16:27 -- common/autotest_common.sh@1328 -- # local sanitizers 00:12:13.444 13:16:27 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:13.444 13:16:27 -- common/autotest_common.sh@1330 -- # shift 00:12:13.444 13:16:27 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:12:13.444 13:16:27 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:12:13.444 13:16:27 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:13.444 13:16:27 -- common/autotest_common.sh@1334 -- # grep libasan 00:12:13.444 13:16:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:12:13.444 13:16:27 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:13.444 13:16:27 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:13.444 13:16:27 -- common/autotest_common.sh@1336 -- # break 00:12:13.444 13:16:27 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:13.444 13:16:27 -- common/autotest_common.sh@1341 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:12:13.444 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:13.444 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:13.444 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:13.444 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:13.444 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:13.444 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:13.444 fio-3.35
00:12:13.444 Starting 6 threads
00:12:25.683
00:12:25.683 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=68064: Mon Dec 16 13:16:38 2024
00:12:25.683 read: IOPS=13.7k, BW=53.5MiB/s (56.1MB/s)(535MiB/10002msec)
00:12:25.683 slat (usec): min=2, max=2675, avg= 6.78, stdev=19.66
00:12:25.683 clat (usec): min=104, max=791158, avg=1497.27, stdev=6085.69
00:12:25.683 lat (usec): min=107, max=791169, avg=1504.05, stdev=6085.81
00:12:25.683 clat percentiles (usec):
00:12:25.683 | 50.000th=[ 1319], 99.000th=[ 4047], 99.900th=[ 5735],
00:12:25.683 | 99.990th=[ 7898], 99.999th=[792724]
00:12:25.683 write: IOPS=13.8k, BW=54.0MiB/s (56.7MB/s)(541MiB/10002msec); 0 zone resets
00:12:25.683 slat (usec): min=10, max=3861, avg=40.66, stdev=142.66
00:12:25.683 clat (usec): min=76, max=8682, avg=1658.69, stdev=870.80
00:12:25.683 lat (usec): min=93, max=8738, avg=1699.36, stdev=884.17
00:12:25.683 clat percentiles (usec):
00:12:25.683 | 50.000th=[ 1516], 99.000th=[ 4359], 99.900th=[ 6128], 99.990th=[ 7570],
00:12:25.683 | 99.999th=[ 8586]
00:12:25.683 bw ( KiB/s): min=44779, max=82719, per=100.00%, avg=56126.06, stdev=1545.94, samples=113
00:12:25.683 iops : min=11191, max=20677, avg=14029.60, stdev=386.51, samples=113
00:12:25.683 lat (usec) : 100=0.01%, 250=0.97%, 500=5.71%, 750=9.18%, 1000=11.23%
00:12:25.683 lat (msec) : 2=48.02%, 4=23.51%, 10=1.37%, 20=0.01%, 1000=0.01%
00:12:25.683 cpu : usr=46.65%, sys=31.37%, ctx=5205, majf=0, minf=15826
00:12:25.683 IO depths : 1=11.5%, 2=24.0%, 4=51.0%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:25.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:25.683 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:25.683 issued rwts: total=136904,138372,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:25.683 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:25.683
00:12:25.683 Run status group 0 (all jobs):
00:12:25.683 READ: bw=53.5MiB/s (56.1MB/s), 53.5MiB/s-53.5MiB/s (56.1MB/s-56.1MB/s), io=535MiB (561MB), run=10002-10002msec
00:12:25.683 WRITE: bw=54.0MiB/s (56.7MB/s), 54.0MiB/s-54.0MiB/s (56.7MB/s-56.7MB/s), io=541MiB (567MB), run=10002-10002msec
00:12:25.683 -----------------------------------------------------
00:12:25.683 Suppressions used:
00:12:25.683 count bytes template
00:12:25.683 6 48 /usr/src/fio/parse.c
00:12:25.683 1390 133440 /usr/src/fio/iolog.c
00:12:25.683 1 8 libtcmalloc_minimal.so
00:12:25.683 1 904 libcrypto.so
00:12:25.683
----------------------------------------------------- 00:12:25.683 00:12:25.683 00:12:25.683 real 0m11.820s 00:12:25.683 user 0m29.508s 00:12:25.683 sys 0m19.165s 00:12:25.683 ************************************ 00:12:25.683 END TEST bdev_fio_rw_verify 00:12:25.683 ************************************ 00:12:25.683 13:16:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:25.683 13:16:39 -- common/autotest_common.sh@10 -- # set +x 00:12:25.684 13:16:39 -- bdev/blockdev.sh@348 -- # rm -f 00:12:25.684 13:16:39 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:25.684 13:16:39 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:25.684 13:16:39 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:25.684 13:16:39 -- common/autotest_common.sh@1270 -- # local workload=trim 00:12:25.684 13:16:39 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:12:25.684 13:16:39 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:25.684 13:16:39 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:25.684 13:16:39 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:25.684 13:16:39 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:12:25.684 13:16:39 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:25.684 13:16:39 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:25.684 13:16:39 -- common/autotest_common.sh@1290 -- # cat 00:12:25.684 13:16:39 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:12:25.684 13:16:39 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:12:25.684 13:16:39 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:12:25.684 13:16:39 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "4b2cd5c3-e402-42b1-ba2f-6cb89f09614f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "4b2cd5c3-e402-42b1-ba2f-6cb89f09614f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "aa6adfe5-4a94-49b2-91f2-1e3c5e050003"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "aa6adfe5-4a94-49b2-91f2-1e3c5e050003",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "e9551fb2-da2f-455e-851f-7267814389b8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": 
"e9551fb2-da2f-455e-851f-7267814389b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "8c925fce-31c2-4efc-9b60-7295d34ebab7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8c925fce-31c2-4efc-9b60-7295d34ebab7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "13fbe4dc-5bf5-4546-8517-38a43e678bb0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "13fbe4dc-5bf5-4546-8517-38a43e678bb0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "0f352e12-9c98-4a8d-860d-f9fa0d9e6a17"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0f352e12-9c98-4a8d-860d-f9fa0d9e6a17",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:12:25.684 13:16:39 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:25.684 13:16:39 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:12:25.684 13:16:39 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:25.684 /home/vagrant/spdk_repo/spdk 00:12:25.684 13:16:39 -- bdev/blockdev.sh@360 -- # popd 00:12:25.684 13:16:39 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:12:25.684 13:16:39 -- bdev/blockdev.sh@362 -- # return 0 00:12:25.684 00:12:25.684 real 0m11.993s 00:12:25.684 user 0m29.585s 00:12:25.684 sys 0m19.241s 00:12:25.684 ************************************ 00:12:25.684 END TEST bdev_fio 00:12:25.684 ************************************ 00:12:25.684 13:16:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:25.684 13:16:39 -- common/autotest_common.sh@10 -- # set +x 00:12:25.684 13:16:39 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM 
EXIT
00:12:25.684 13:16:39 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:12:25.684 13:16:39 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:12:25.684 13:16:39 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:25.684 13:16:39 -- common/autotest_common.sh@10 -- # set +x
00:12:25.684 ************************************
00:12:25.684 START TEST bdev_verify
00:12:25.684 ************************************
00:12:25.684 13:16:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:12:25.684 [2024-12-16 13:16:39.880530] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:12:25.684 [2024-12-16 13:16:39.880683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68235 ]
00:12:25.684 [2024-12-16 13:16:40.029724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:25.946 [2024-12-16 13:16:40.269709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:12:25.946 [2024-12-16 13:16:40.269752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:26.207 Running I/O for 5 seconds...
00:12:31.499
00:12:31.499 Latency(us)
00:12:31.499 [2024-12-16T13:16:46.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:31.499 [2024-12-16T13:16:46.073Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:31.499 Verification LBA range: start 0x0 length 0x20000
00:12:31.499 nvme0n1 : 5.09 1955.70 7.64 0.00 0.00 65141.67 13006.38 92355.35
[2024-12-16T13:16:46.073Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:31.499 Verification LBA range: start 0x20000 length 0x20000
00:12:31.499 nvme0n1 : 5.09 2220.26 8.67 0.00 0.00 57463.08 14821.22 86709.17
[2024-12-16T13:16:46.073Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:31.499 Verification LBA range: start 0x0 length 0x80000
00:12:31.499 nvme1n1 : 5.09 1802.94 7.04 0.00 0.00 70404.89 4360.66 87919.06
[2024-12-16T13:16:46.073Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:31.499 Verification LBA range: start 0x80000 length 0x80000
00:12:31.499 nvme1n1 : 5.11 2094.33 8.18 0.00 0.00 60624.72 5116.85 78643.20
[2024-12-16T13:16:46.073Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:31.499 Verification LBA range: start 0x0 length 0x80000
00:12:31.499 nvme1n2 : 5.10 1806.51 7.06 0.00 0.00 70144.62 11494.01 90742.15
[2024-12-16T13:16:46.073Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:31.499 Verification LBA range: start 0x80000 length 0x80000
00:12:31.499 nvme1n2 : 5.10 1932.78 7.55 0.00 0.00 65670.35 15627.82 83079.48
[2024-12-16T13:16:46.073Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:31.499 Verification LBA range: start 0x0 length 0x80000
00:12:31.499 nvme1n3 : 5.11 1827.23 7.14 0.00 0.00 69240.54 7108.14 93968.54
[2024-12-16T13:16:46.073Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:31.499 Verification LBA range: start 0x80000 length 0x80000
00:12:31.499 nvme1n3 : 5.10 1982.86 7.75 0.00 0.00 63934.27 14014.62 79449.80
[2024-12-16T13:16:46.073Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:31.499 Verification LBA range: start 0x0 length 0xbd0bd
00:12:31.499 nvme2n1 : 5.11 1855.53 7.25 0.00 0.00 68070.26 16131.94 88725.66
[2024-12-16T13:16:46.073Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:31.499 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:12:31.499 nvme2n1 : 5.10 2044.24 7.99 0.00 0.00 61978.48 7763.50 77836.60
[2024-12-16T13:16:46.073Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:31.499 Verification LBA range: start 0x0 length 0xa0000
00:12:31.499 nvme3n1 : 5.11 1942.09 7.59 0.00 0.00 65193.03 3125.56 83886.08
[2024-12-16T13:16:46.073Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:31.499 Verification LBA range: start 0xa0000 length 0xa0000
00:12:31.499 nvme3n1 : 5.10 2100.82 8.21 0.00 0.00 60223.94 5268.09 77030.01
[2024-12-16T13:16:46.073Z] ===================================================================================================================
00:12:31.499 [2024-12-16T13:16:46.073Z] Total : 23565.31 92.05 0.00 0.00 64585.74 3125.56 93968.54
00:12:32.443
00:12:32.443 real 0m6.946s
00:12:32.443 user 0m8.849s
00:12:32.443 sys 0m3.128s
00:12:32.443 13:16:46 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:32.443 ************************************
00:12:32.443 END TEST bdev_verify
00:12:32.443 13:16:46 -- common/autotest_common.sh@10 -- # set +x
00:12:32.443 ************************************
00:12:32.444 13:16:46 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:12:32.444 13:16:46 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:12:32.444 13:16:46 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:32.444 13:16:46 -- common/autotest_common.sh@10 -- # set +x
00:12:32.444 ************************************
00:12:32.444 START TEST bdev_verify_big_io
00:12:32.444 ************************************
00:12:32.444 13:16:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:12:32.444 [2024-12-16 13:16:46.904436] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:12:32.444 [2024-12-16 13:16:46.904596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68339 ]
00:12:32.712 [2024-12-16 13:16:47.061870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:32.977 [2024-12-16 13:16:47.288433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:12:32.977 [2024-12-16 13:16:47.288561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:33.238 Running I/O for 5 seconds...
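Both verify passes drive the same bdevperf binary against the generated bdev.json; only the per-I/O size changes between them. A sketch of the two invocations as traced above (reading -q/-o/-w/-t as queue depth, I/O size, workload, and runtime follows bdevperf's usual conventions and is my interpretation, not stated in the log):

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # bdev_verify: 4 KiB I/Os, queue depth 128, 5 s run, reactors pinned by -m 0x3
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w verify -t 5 -C -m 0x3

    # bdev_verify_big_io: identical except for 64 KiB I/Os
    "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3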
00:12:39.827
00:12:39.827 Latency(us)
00:12:39.827 [2024-12-16T13:16:54.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:39.827 [2024-12-16T13:16:54.401Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:39.827 Verification LBA range: start 0x0 length 0x2000
00:12:39.827 nvme0n1 : 5.70 197.93 12.37 0.00 0.00 629258.12 74610.22 909841.33
00:12:39.827 [2024-12-16T13:16:54.401Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:39.827 Verification LBA range: start 0x2000 length 0x2000
00:12:39.827 nvme0n1 : 5.67 292.82 18.30 0.00 0.00 428888.28 45572.73 645277.54
00:12:39.827 [2024-12-16T13:16:54.401Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:39.827 Verification LBA range: start 0x0 length 0x8000
00:12:39.827 nvme1n1 : 5.71 227.30 14.21 0.00 0.00 529637.22 132281.90 696899.74
00:12:39.827 [2024-12-16T13:16:54.401Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:39.827 Verification LBA range: start 0x8000 length 0x8000
00:12:39.827 nvme1n1 : 5.65 277.26 17.33 0.00 0.00 430369.07 48799.11 587202.56
00:12:39.827 [2024-12-16T13:16:54.401Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:39.827 Verification LBA range: start 0x0 length 0x8000
00:12:39.827 nvme1n2 : 5.71 212.59 13.29 0.00 0.00 555463.94 25206.15 716258.07
00:12:39.827 [2024-12-16T13:16:54.401Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:39.827 Verification LBA range: start 0x8000 length 0x8000
00:12:39.827 nvme1n2 : 5.67 308.13 19.26 0.00 0.00 395792.54 18148.43 477505.38
00:12:39.827 [2024-12-16T13:16:54.401Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:39.827 Verification LBA range: start 0x0 length 0x8000
00:12:39.828 nvme1n3 : 5.71 212.64 13.29 0.00 0.00 552852.88 17543.48 754974.72
00:12:39.828 [2024-12-16T13:16:54.402Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:39.828 Verification LBA range: start 0x8000 length 0x8000
00:12:39.828 nvme1n3 : 5.67 308.73 19.30 0.00 0.00 387167.85 48395.82 461373.44
00:12:39.828 [2024-12-16T13:16:54.402Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:39.828 Verification LBA range: start 0x0 length 0xbd0b
00:12:39.828 nvme2n1 : 5.71 231.15 14.45 0.00 0.00 502594.68 15829.46 845313.58
00:12:39.828 [2024-12-16T13:16:54.402Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:39.828 Verification LBA range: start 0xbd0b length 0xbd0b
00:12:39.828 nvme2n1 : 5.66 303.06 18.94 0.00 0.00 386884.68 65334.35 519448.42
00:12:39.828 [2024-12-16T13:16:54.402Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:39.828 Verification LBA range: start 0x0 length 0xa000
00:12:39.828 nvme3n1 : 5.72 278.66 17.42 0.00 0.00 410080.01 13712.15 432335.95
00:12:39.828 [2024-12-16T13:16:54.402Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:39.828 Verification LBA range: start 0xa000 length 0xa000
00:12:39.828 nvme3n1 : 5.68 321.87 20.12 0.00 0.00 358044.01 5847.83 512995.64
00:12:39.828 [2024-12-16T13:16:54.402Z] ===================================================================================================================
00:12:39.828 [2024-12-16T13:16:54.402Z] Total : 3172.13 198.26 0.00 0.00 450982.05 5847.83 909841.33
00:12:40.400
00:12:40.400 real 0m7.886s
00:12:40.400 user 0m13.888s
00:12:40.400 sys 0m0.645s
00:12:40.400 13:16:54 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:40.400 13:16:54 -- common/autotest_common.sh@10 -- # set +x
00:12:40.400 ************************************
00:12:40.400 END TEST bdev_verify_big_io
00:12:40.400 ************************************
00:12:40.400 13:16:54 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:40.400 13:16:54 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:12:40.400 13:16:54 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:40.400 13:16:54 -- common/autotest_common.sh@10 -- # set +x
00:12:40.400 ************************************
00:12:40.400 START TEST bdev_write_zeroes
00:12:40.400 ************************************
00:12:40.400 13:16:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:40.400 [2024-12-16 13:16:54.864658] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:12:40.400 [2024-12-16 13:16:54.864796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68450 ]
00:12:40.671 [2024-12-16 13:16:55.017893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:40.932 [2024-12-16 13:16:55.281566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:41.193 Running I/O for 1 seconds...
00:12:42.579
00:12:42.579 Latency(us)
00:12:42.579 [2024-12-16T13:16:57.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:42.579 [2024-12-16T13:16:57.153Z] Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:42.579 nvme0n1 : 1.01 11877.93 46.40 0.00 0.00 10764.73 9074.22 19660.80
00:12:42.579 [2024-12-16T13:16:57.153Z] Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:42.579 nvme1n1 : 1.01 11863.22 46.34 0.00 0.00 10763.45 9124.63 18249.26
00:12:42.579 [2024-12-16T13:16:57.153Z] Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:42.579 nvme1n2 : 1.02 11849.32 46.29 0.00 0.00 10750.55 9124.63 17946.78
00:12:42.579 [2024-12-16T13:16:57.153Z] Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:42.579 nvme1n3 : 1.02 11835.45 46.23 0.00 0.00 10739.33 9124.63 18350.08
00:12:42.579 [2024-12-16T13:16:57.153Z] Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:42.579 nvme2n1 : 1.03 12081.37 47.19 0.00 0.00 10500.69 5242.88 19257.50
00:12:42.579 [2024-12-16T13:16:57.153Z] Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:42.579 nvme3n1 : 1.03 11849.28 46.29 0.00 0.00 10615.77 4965.61 21374.82
00:12:42.579 [2024-12-16T13:16:57.153Z] ===================================================================================================================
00:12:42.579 [2024-12-16T13:16:57.153Z] Total : 71356.57 278.74 0.00 0.00 10688.05 4965.61 21374.82
00:12:43.151
00:12:43.151 real 0m2.855s
00:12:43.151 user 0m2.138s
00:12:43.151 sys 0m0.537s
00:12:43.151 ************************************
00:12:43.151 END TEST
bdev_write_zeroes 00:12:43.151 ************************************ 00:12:43.151 13:16:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:43.151 13:16:57 -- common/autotest_common.sh@10 -- # set +x 00:12:43.151 13:16:57 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:43.151 13:16:57 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:43.151 13:16:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:43.151 13:16:57 -- common/autotest_common.sh@10 -- # set +x 00:12:43.413 ************************************ 00:12:43.413 START TEST bdev_json_nonenclosed 00:12:43.413 ************************************ 00:12:43.413 13:16:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:43.413 [2024-12-16 13:16:57.792928] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:43.413 [2024-12-16 13:16:57.793069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68503 ] 00:12:43.413 [2024-12-16 13:16:57.942775] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.673 [2024-12-16 13:16:58.160892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.673 [2024-12-16 13:16:58.161079] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:43.673 [2024-12-16 13:16:58.161108] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:43.935 ************************************ 00:12:43.935 00:12:43.935 real 0m0.743s 00:12:43.935 user 0m0.518s 00:12:43.935 sys 0m0.117s 00:12:43.935 13:16:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:43.935 13:16:58 -- common/autotest_common.sh@10 -- # set +x 00:12:43.935 END TEST bdev_json_nonenclosed 00:12:43.935 ************************************ 00:12:44.197 13:16:58 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:44.197 13:16:58 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:44.197 13:16:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:44.197 13:16:58 -- common/autotest_common.sh@10 -- # set +x 00:12:44.197 ************************************ 00:12:44.197 START TEST bdev_json_nonarray 00:12:44.197 ************************************ 00:12:44.197 13:16:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:44.197 [2024-12-16 13:16:58.608052] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
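The two JSON tests here are negative tests: bdevperf is handed a deliberately malformed configuration and must fail cleanly through spdk_app_stop rather than crash. A sketch of that pattern, with illustrative payloads inferred from the two error messages in this log (the repository's nonenclosed.json and nonarray.json are not reproduced here):

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

    # Hypothetical stand-ins for the two bad configs:
    printf '"subsystems": []\n'   > /tmp/nonenclosed.json  # valid JSON fragment, but not enclosed in {}
    printf '{"subsystems": {}}\n' > /tmp/nonarray.json     # "subsystems" present, but not an array

    for cfg in /tmp/nonenclosed.json /tmp/nonarray.json; do
        if "$BDEVPERF" --json "$cfg" -q 128 -o 4096 -w write_zeroes -t 1 ''; then
            echo "FAIL: $cfg should have been rejected" >&2
            exit 1
        fi
    done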
00:12:44.197 [2024-12-16 13:16:58.608193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68534 ] 00:12:44.197 [2024-12-16 13:16:58.763306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.458 [2024-12-16 13:16:58.982120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.458 [2024-12-16 13:16:58.982586] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:12:44.458 [2024-12-16 13:16:58.982616] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:45.030 ************************************ 00:12:45.030 END TEST bdev_json_nonarray 00:12:45.030 ************************************ 00:12:45.030 00:12:45.030 real 0m0.753s 00:12:45.030 user 0m0.514s 00:12:45.030 sys 0m0.132s 00:12:45.030 13:16:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:45.030 13:16:59 -- common/autotest_common.sh@10 -- # set +x 00:12:45.030 13:16:59 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:12:45.030 13:16:59 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:12:45.030 13:16:59 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:12:45.030 13:16:59 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:12:45.030 13:16:59 -- bdev/blockdev.sh@809 -- # cleanup 00:12:45.030 13:16:59 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:45.030 13:16:59 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:45.030 13:16:59 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:12:45.030 13:16:59 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:12:45.030 13:16:59 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:12:45.030 13:16:59 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:12:45.030 13:16:59 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:45.974 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:47.360 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:12:47.360 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:12:47.360 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:12:47.932 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:12:48.195 00:12:48.195 real 0m57.421s 00:12:48.195 user 1m24.618s 00:12:48.195 sys 0m35.519s 00:12:48.195 ************************************ 00:12:48.195 END TEST blockdev_xnvme 00:12:48.195 ************************************ 00:12:48.195 13:17:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:48.195 13:17:02 -- common/autotest_common.sh@10 -- # set +x 00:12:48.195 13:17:02 -- spdk/autotest.sh@246 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:12:48.195 13:17:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:48.195 13:17:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:48.195 13:17:02 -- common/autotest_common.sh@10 -- # set +x 00:12:48.195 ************************************ 00:12:48.195 START TEST ublk 00:12:48.195 ************************************ 00:12:48.195 13:17:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:12:48.195 * Looking for test storage... 
00:12:48.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:12:48.195 13:17:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:48.195 13:17:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:48.195 13:17:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:48.195 13:17:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:48.195 13:17:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:48.195 13:17:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:48.195 13:17:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:48.195 13:17:02 -- scripts/common.sh@335 -- # IFS=.-: 00:12:48.195 13:17:02 -- scripts/common.sh@335 -- # read -ra ver1 00:12:48.195 13:17:02 -- scripts/common.sh@336 -- # IFS=.-: 00:12:48.195 13:17:02 -- scripts/common.sh@336 -- # read -ra ver2 00:12:48.195 13:17:02 -- scripts/common.sh@337 -- # local 'op=<' 00:12:48.195 13:17:02 -- scripts/common.sh@339 -- # ver1_l=2 00:12:48.195 13:17:02 -- scripts/common.sh@340 -- # ver2_l=1 00:12:48.195 13:17:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:48.195 13:17:02 -- scripts/common.sh@343 -- # case "$op" in 00:12:48.195 13:17:02 -- scripts/common.sh@344 -- # : 1 00:12:48.195 13:17:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:48.195 13:17:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:48.195 13:17:02 -- scripts/common.sh@364 -- # decimal 1 00:12:48.195 13:17:02 -- scripts/common.sh@352 -- # local d=1 00:12:48.195 13:17:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:48.195 13:17:02 -- scripts/common.sh@354 -- # echo 1 00:12:48.195 13:17:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:48.195 13:17:02 -- scripts/common.sh@365 -- # decimal 2 00:12:48.195 13:17:02 -- scripts/common.sh@352 -- # local d=2 00:12:48.195 13:17:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:48.195 13:17:02 -- scripts/common.sh@354 -- # echo 2 00:12:48.195 13:17:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:48.195 13:17:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:48.195 13:17:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:48.195 13:17:02 -- scripts/common.sh@367 -- # return 0 00:12:48.195 13:17:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:48.195 13:17:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:48.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.195 --rc genhtml_branch_coverage=1 00:12:48.195 --rc genhtml_function_coverage=1 00:12:48.195 --rc genhtml_legend=1 00:12:48.195 --rc geninfo_all_blocks=1 00:12:48.195 --rc geninfo_unexecuted_blocks=1 00:12:48.195 00:12:48.195 ' 00:12:48.195 13:17:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:48.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.195 --rc genhtml_branch_coverage=1 00:12:48.195 --rc genhtml_function_coverage=1 00:12:48.195 --rc genhtml_legend=1 00:12:48.195 --rc geninfo_all_blocks=1 00:12:48.195 --rc geninfo_unexecuted_blocks=1 00:12:48.195 00:12:48.195 ' 00:12:48.195 13:17:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:48.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.195 --rc genhtml_branch_coverage=1 00:12:48.195 --rc genhtml_function_coverage=1 00:12:48.195 --rc genhtml_legend=1 00:12:48.195 --rc geninfo_all_blocks=1 00:12:48.195 --rc geninfo_unexecuted_blocks=1 00:12:48.195 00:12:48.195 ' 00:12:48.195 13:17:02 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:48.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.195 --rc genhtml_branch_coverage=1 00:12:48.195 --rc genhtml_function_coverage=1 00:12:48.195 --rc genhtml_legend=1 00:12:48.195 --rc geninfo_all_blocks=1 00:12:48.195 --rc geninfo_unexecuted_blocks=1 00:12:48.195 00:12:48.195 ' 00:12:48.195 13:17:02 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:12:48.195 13:17:02 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:12:48.195 13:17:02 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:12:48.195 13:17:02 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:12:48.195 13:17:02 -- lvol/common.sh@9 -- # AIO_BS=4096 00:12:48.195 13:17:02 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:12:48.195 13:17:02 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:12:48.195 13:17:02 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:12:48.195 13:17:02 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:12:48.195 13:17:02 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:12:48.195 13:17:02 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:12:48.195 13:17:02 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:12:48.195 13:17:02 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:12:48.195 13:17:02 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:12:48.195 13:17:02 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:12:48.195 13:17:02 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:12:48.195 13:17:02 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:12:48.195 13:17:02 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:12:48.195 13:17:02 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:12:48.456 13:17:02 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:12:48.456 13:17:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:48.456 13:17:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:48.456 13:17:02 -- common/autotest_common.sh@10 -- # set +x 00:12:48.456 ************************************ 00:12:48.456 START TEST test_save_ublk_config 00:12:48.456 ************************************ 00:12:48.456 13:17:02 -- common/autotest_common.sh@1114 -- # test_save_config 00:12:48.456 13:17:02 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:12:48.456 13:17:02 -- ublk/ublk.sh@103 -- # tgtpid=68837 00:12:48.456 13:17:02 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:12:48.456 13:17:02 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:12:48.456 13:17:02 -- ublk/ublk.sh@106 -- # waitforlisten 68837 00:12:48.456 13:17:02 -- common/autotest_common.sh@829 -- # '[' -z 68837 ']' 00:12:48.456 13:17:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.456 13:17:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:48.456 13:17:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.456 13:17:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:48.456 13:17:02 -- common/autotest_common.sh@10 -- # set +x 00:12:48.457 [2024-12-16 13:17:02.867181] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:48.457 [2024-12-16 13:17:02.867537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68837 ] 00:12:48.457 [2024-12-16 13:17:03.013320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.718 [2024-12-16 13:17:03.246659] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:48.718 [2024-12-16 13:17:03.247115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.103 13:17:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:50.103 13:17:04 -- common/autotest_common.sh@862 -- # return 0 00:12:50.103 13:17:04 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:12:50.103 13:17:04 -- ublk/ublk.sh@108 -- # rpc_cmd 00:12:50.103 13:17:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.103 13:17:04 -- common/autotest_common.sh@10 -- # set +x 00:12:50.103 [2024-12-16 13:17:04.398477] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:50.103 malloc0 00:12:50.103 [2024-12-16 13:17:04.469777] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:12:50.103 [2024-12-16 13:17:04.469879] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:12:50.103 [2024-12-16 13:17:04.469888] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:50.103 [2024-12-16 13:17:04.469898] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:50.103 [2024-12-16 13:17:04.478749] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:50.103 [2024-12-16 13:17:04.478784] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:50.103 [2024-12-16 13:17:04.485664] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:50.103 [2024-12-16 13:17:04.485786] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:50.103 [2024-12-16 13:17:04.502658] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:12:50.103 0 00:12:50.103 13:17:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.103 13:17:04 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:12:50.103 13:17:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.103 13:17:04 -- common/autotest_common.sh@10 -- # set +x 00:12:50.365 13:17:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.365 13:17:04 -- ublk/ublk.sh@115 -- # config='{ 00:12:50.365 "subsystems": [ 00:12:50.365 { 00:12:50.365 "subsystem": "iobuf", 00:12:50.365 "config": [ 00:12:50.365 { 00:12:50.365 "method": "iobuf_set_options", 00:12:50.365 "params": { 00:12:50.365 "small_pool_count": 8192, 00:12:50.365 "large_pool_count": 1024, 00:12:50.365 "small_bufsize": 8192, 00:12:50.365 "large_bufsize": 135168 00:12:50.365 } 00:12:50.365 } 00:12:50.365 ] 00:12:50.365 }, 00:12:50.365 { 00:12:50.365 "subsystem": "sock", 00:12:50.365 "config": [ 00:12:50.365 { 00:12:50.365 "method": "sock_impl_set_options", 00:12:50.365 "params": { 00:12:50.365 "impl_name": "posix", 00:12:50.365 "recv_buf_size": 2097152, 00:12:50.365 "send_buf_size": 2097152, 00:12:50.365 "enable_recv_pipe": true, 00:12:50.365 "enable_quickack": false, 00:12:50.365 "enable_placement_id": 0, 00:12:50.365 
"enable_zerocopy_send_server": true, 00:12:50.365 "enable_zerocopy_send_client": false, 00:12:50.365 "zerocopy_threshold": 0, 00:12:50.365 "tls_version": 0, 00:12:50.365 "enable_ktls": false 00:12:50.365 } 00:12:50.365 }, 00:12:50.365 { 00:12:50.365 "method": "sock_impl_set_options", 00:12:50.365 "params": { 00:12:50.365 "impl_name": "ssl", 00:12:50.365 "recv_buf_size": 4096, 00:12:50.365 "send_buf_size": 4096, 00:12:50.365 "enable_recv_pipe": true, 00:12:50.365 "enable_quickack": false, 00:12:50.365 "enable_placement_id": 0, 00:12:50.365 "enable_zerocopy_send_server": true, 00:12:50.365 "enable_zerocopy_send_client": false, 00:12:50.365 "zerocopy_threshold": 0, 00:12:50.365 "tls_version": 0, 00:12:50.365 "enable_ktls": false 00:12:50.365 } 00:12:50.365 } 00:12:50.365 ] 00:12:50.365 }, 00:12:50.365 { 00:12:50.365 "subsystem": "vmd", 00:12:50.365 "config": [] 00:12:50.365 }, 00:12:50.365 { 00:12:50.365 "subsystem": "accel", 00:12:50.365 "config": [ 00:12:50.365 { 00:12:50.365 "method": "accel_set_options", 00:12:50.365 "params": { 00:12:50.365 "small_cache_size": 128, 00:12:50.365 "large_cache_size": 16, 00:12:50.365 "task_count": 2048, 00:12:50.365 "sequence_count": 2048, 00:12:50.365 "buf_count": 2048 00:12:50.365 } 00:12:50.365 } 00:12:50.365 ] 00:12:50.365 }, 00:12:50.365 { 00:12:50.365 "subsystem": "bdev", 00:12:50.365 "config": [ 00:12:50.365 { 00:12:50.365 "method": "bdev_set_options", 00:12:50.365 "params": { 00:12:50.365 "bdev_io_pool_size": 65535, 00:12:50.365 "bdev_io_cache_size": 256, 00:12:50.365 "bdev_auto_examine": true, 00:12:50.365 "iobuf_small_cache_size": 128, 00:12:50.365 "iobuf_large_cache_size": 16 00:12:50.365 } 00:12:50.365 }, 00:12:50.365 { 00:12:50.365 "method": "bdev_raid_set_options", 00:12:50.366 "params": { 00:12:50.366 "process_window_size_kb": 1024 00:12:50.366 } 00:12:50.366 }, 00:12:50.366 { 00:12:50.366 "method": "bdev_iscsi_set_options", 00:12:50.366 "params": { 00:12:50.366 "timeout_sec": 30 00:12:50.366 } 00:12:50.366 }, 00:12:50.366 { 00:12:50.366 "method": "bdev_nvme_set_options", 00:12:50.366 "params": { 00:12:50.366 "action_on_timeout": "none", 00:12:50.366 "timeout_us": 0, 00:12:50.366 "timeout_admin_us": 0, 00:12:50.366 "keep_alive_timeout_ms": 10000, 00:12:50.366 "transport_retry_count": 4, 00:12:50.366 "arbitration_burst": 0, 00:12:50.366 "low_priority_weight": 0, 00:12:50.366 "medium_priority_weight": 0, 00:12:50.366 "high_priority_weight": 0, 00:12:50.366 "nvme_adminq_poll_period_us": 10000, 00:12:50.366 "nvme_ioq_poll_period_us": 0, 00:12:50.366 "io_queue_requests": 0, 00:12:50.366 "delay_cmd_submit": true, 00:12:50.366 "bdev_retry_count": 3, 00:12:50.366 "transport_ack_timeout": 0, 00:12:50.366 "ctrlr_loss_timeout_sec": 0, 00:12:50.366 "reconnect_delay_sec": 0, 00:12:50.366 "fast_io_fail_timeout_sec": 0, 00:12:50.366 "generate_uuids": false, 00:12:50.366 "transport_tos": 0, 00:12:50.366 "io_path_stat": false, 00:12:50.366 "allow_accel_sequence": false 00:12:50.366 } 00:12:50.366 }, 00:12:50.366 { 00:12:50.366 "method": "bdev_nvme_set_hotplug", 00:12:50.366 "params": { 00:12:50.366 "period_us": 100000, 00:12:50.366 "enable": false 00:12:50.366 } 00:12:50.366 }, 00:12:50.366 { 00:12:50.366 "method": "bdev_malloc_create", 00:12:50.366 "params": { 00:12:50.366 "name": "malloc0", 00:12:50.366 "num_blocks": 8192, 00:12:50.366 "block_size": 4096, 00:12:50.366 "physical_block_size": 4096, 00:12:50.366 "uuid": "a39b336c-f3ea-464a-b571-03fc3c48d60d", 00:12:50.366 "optimal_io_boundary": 0 00:12:50.366 } 00:12:50.366 }, 00:12:50.366 { 00:12:50.366 
"method": "bdev_wait_for_examine" 00:12:50.366 } 00:12:50.366 ] 00:12:50.366 }, 00:12:50.366 { 00:12:50.366 "subsystem": "scsi", 00:12:50.366 "config": null 00:12:50.366 }, 00:12:50.366 { 00:12:50.366 "subsystem": "scheduler", 00:12:50.366 "config": [ 00:12:50.366 { 00:12:50.366 "method": "framework_set_scheduler", 00:12:50.366 "params": { 00:12:50.366 "name": "static" 00:12:50.366 } 00:12:50.366 } 00:12:50.366 ] 00:12:50.366 }, 00:12:50.366 { 00:12:50.366 "subsystem": "vhost_scsi", 00:12:50.366 "config": [] 00:12:50.366 }, 00:12:50.366 { 00:12:50.366 "subsystem": "vhost_blk", 00:12:50.366 "config": [] 00:12:50.366 }, 00:12:50.366 { 00:12:50.366 "subsystem": "ublk", 00:12:50.366 "config": [ 00:12:50.366 { 00:12:50.366 "method": "ublk_create_target", 00:12:50.366 "params": { 00:12:50.366 "cpumask": "1" 00:12:50.366 } 00:12:50.366 }, 00:12:50.366 { 00:12:50.366 "method": "ublk_start_disk", 00:12:50.366 "params": { 00:12:50.366 "bdev_name": "malloc0", 00:12:50.366 "ublk_id": 0, 00:12:50.366 "num_queues": 1, 00:12:50.366 "queue_depth": 128 00:12:50.366 } 00:12:50.366 } 00:12:50.366 ] 00:12:50.366 }, 00:12:50.366 { 00:12:50.366 "subsystem": "nbd", 00:12:50.366 "config": [] 00:12:50.366 }, 00:12:50.366 { 00:12:50.366 "subsystem": "nvmf", 00:12:50.366 "config": [ 00:12:50.366 { 00:12:50.366 "method": "nvmf_set_config", 00:12:50.366 "params": { 00:12:50.366 "discovery_filter": "match_any", 00:12:50.366 "admin_cmd_passthru": { 00:12:50.366 "identify_ctrlr": false 00:12:50.366 } 00:12:50.366 } 00:12:50.366 }, 00:12:50.366 { 00:12:50.366 "method": "nvmf_set_max_subsystems", 00:12:50.366 "params": { 00:12:50.366 "max_subsystems": 1024 00:12:50.366 } 00:12:50.366 }, 00:12:50.366 { 00:12:50.366 "method": "nvmf_set_crdt", 00:12:50.366 "params": { 00:12:50.366 "crdt1": 0, 00:12:50.366 "crdt2": 0, 00:12:50.366 "crdt3": 0 00:12:50.366 } 00:12:50.366 } 00:12:50.366 ] 00:12:50.366 }, 00:12:50.366 { 00:12:50.366 "subsystem": "iscsi", 00:12:50.366 "config": [ 00:12:50.366 { 00:12:50.366 "method": "iscsi_set_options", 00:12:50.366 "params": { 00:12:50.366 "node_base": "iqn.2016-06.io.spdk", 00:12:50.366 "max_sessions": 128, 00:12:50.366 "max_connections_per_session": 2, 00:12:50.366 "max_queue_depth": 64, 00:12:50.366 "default_time2wait": 2, 00:12:50.366 "default_time2retain": 20, 00:12:50.366 "first_burst_length": 8192, 00:12:50.366 "immediate_data": true, 00:12:50.366 "allow_duplicated_isid": false, 00:12:50.366 "error_recovery_level": 0, 00:12:50.366 "nop_timeout": 60, 00:12:50.366 "nop_in_interval": 30, 00:12:50.366 "disable_chap": false, 00:12:50.366 "require_chap": false, 00:12:50.366 "mutual_chap": false, 00:12:50.366 "chap_group": 0, 00:12:50.366 "max_large_datain_per_connection": 64, 00:12:50.366 "max_r2t_per_connection": 4, 00:12:50.366 "pdu_pool_size": 36864, 00:12:50.366 "immediate_data_pool_size": 16384, 00:12:50.366 "data_out_pool_size": 2048 00:12:50.366 } 00:12:50.366 } 00:12:50.366 ] 00:12:50.366 } 00:12:50.366 ] 00:12:50.366 }' 00:12:50.366 13:17:04 -- ublk/ublk.sh@116 -- # killprocess 68837 00:12:50.366 13:17:04 -- common/autotest_common.sh@936 -- # '[' -z 68837 ']' 00:12:50.366 13:17:04 -- common/autotest_common.sh@940 -- # kill -0 68837 00:12:50.366 13:17:04 -- common/autotest_common.sh@941 -- # uname 00:12:50.366 13:17:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:50.366 13:17:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68837 00:12:50.366 killing process with pid 68837 00:12:50.366 13:17:04 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:12:50.366 13:17:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:50.366 13:17:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68837' 00:12:50.366 13:17:04 -- common/autotest_common.sh@955 -- # kill 68837 00:12:50.366 13:17:04 -- common/autotest_common.sh@960 -- # wait 68837 00:12:51.756 [2024-12-16 13:17:05.894806] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:12:51.756 [2024-12-16 13:17:05.934644] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:51.756 [2024-12-16 13:17:05.934837] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:12:51.756 [2024-12-16 13:17:05.943670] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:51.756 [2024-12-16 13:17:05.943734] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:12:51.756 [2024-12-16 13:17:05.943750] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:12:51.756 [2024-12-16 13:17:05.943783] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:12:51.756 [2024-12-16 13:17:05.943931] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:12:53.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.132 13:17:07 -- ublk/ublk.sh@119 -- # tgtpid=68899 00:12:53.132 13:17:07 -- ublk/ublk.sh@121 -- # waitforlisten 68899 00:12:53.132 13:17:07 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:12:53.132 13:17:07 -- common/autotest_common.sh@829 -- # '[' -z 68899 ']' 00:12:53.132 13:17:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.132 13:17:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:53.132 13:17:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
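The sequence above is the heart of test_save_ublk_config: the first target's live state (the JSON blob dumped above) was captured with save_config, that process was killed, and a second target is now being started with the same JSON fed back in over /dev/fd/63. Outside the harness the round trip can be sketched as follows — a minimal sketch assuming an SPDK checkout, the default /var/tmp/spdk.sock RPC socket, and a hypothetical ublk.json path:

    ./build/bin/spdk_tgt -L ublk &
    tgt_pid=$!
    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096    # 8192 blocks of 4096 B, as in the dump above
    ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128
    ./scripts/rpc.py save_config > ublk.json                  # serialize every subsystem to JSON
    kill "$tgt_pid"; wait "$tgt_pid"
    ./build/bin/spdk_tgt -L ublk -c ublk.json                 # replays the config; /dev/ublkb0 comes back

Because the saved config includes the ublk subsystem's ublk_create_target and ublk_start_disk calls, the new process recreates the disk on boot; the test then only has to confirm that /dev/ublkb0 reappears, which is what the ublk_get_disks/jq probes below verify.
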
00:12:53.132 13:17:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:53.132 13:17:07 -- common/autotest_common.sh@10 -- # set +x 00:12:53.132 13:17:07 -- ublk/ublk.sh@118 -- # echo '{ 00:12:53.132 "subsystems": [ 00:12:53.132 { 00:12:53.132 "subsystem": "iobuf", 00:12:53.132 "config": [ 00:12:53.132 { 00:12:53.132 "method": "iobuf_set_options", 00:12:53.132 "params": { 00:12:53.132 "small_pool_count": 8192, 00:12:53.132 "large_pool_count": 1024, 00:12:53.132 "small_bufsize": 8192, 00:12:53.132 "large_bufsize": 135168 00:12:53.132 } 00:12:53.132 } 00:12:53.132 ] 00:12:53.132 }, 00:12:53.132 { 00:12:53.132 "subsystem": "sock", 00:12:53.132 "config": [ 00:12:53.132 { 00:12:53.132 "method": "sock_impl_set_options", 00:12:53.132 "params": { 00:12:53.132 "impl_name": "posix", 00:12:53.132 "recv_buf_size": 2097152, 00:12:53.132 "send_buf_size": 2097152, 00:12:53.132 "enable_recv_pipe": true, 00:12:53.132 "enable_quickack": false, 00:12:53.132 "enable_placement_id": 0, 00:12:53.132 "enable_zerocopy_send_server": true, 00:12:53.132 "enable_zerocopy_send_client": false, 00:12:53.132 "zerocopy_threshold": 0, 00:12:53.132 "tls_version": 0, 00:12:53.132 "enable_ktls": false 00:12:53.132 } 00:12:53.132 }, 00:12:53.132 { 00:12:53.132 "method": "sock_impl_set_options", 00:12:53.132 "params": { 00:12:53.132 "impl_name": "ssl", 00:12:53.132 "recv_buf_size": 4096, 00:12:53.132 "send_buf_size": 4096, 00:12:53.132 "enable_recv_pipe": true, 00:12:53.132 "enable_quickack": false, 00:12:53.132 "enable_placement_id": 0, 00:12:53.132 "enable_zerocopy_send_server": true, 00:12:53.132 "enable_zerocopy_send_client": false, 00:12:53.132 "zerocopy_threshold": 0, 00:12:53.132 "tls_version": 0, 00:12:53.132 "enable_ktls": false 00:12:53.132 } 00:12:53.132 } 00:12:53.132 ] 00:12:53.132 }, 00:12:53.132 { 00:12:53.132 "subsystem": "vmd", 00:12:53.132 "config": [] 00:12:53.132 }, 00:12:53.132 { 00:12:53.132 "subsystem": "accel", 00:12:53.132 "config": [ 00:12:53.132 { 00:12:53.132 "method": "accel_set_options", 00:12:53.132 "params": { 00:12:53.132 "small_cache_size": 128, 00:12:53.132 "large_cache_size": 16, 00:12:53.132 "task_count": 2048, 00:12:53.132 "sequence_count": 2048, 00:12:53.132 "buf_count": 2048 00:12:53.132 } 00:12:53.132 } 00:12:53.132 ] 00:12:53.132 }, 00:12:53.132 { 00:12:53.132 "subsystem": "bdev", 00:12:53.132 "config": [ 00:12:53.132 { 00:12:53.132 "method": "bdev_set_options", 00:12:53.132 "params": { 00:12:53.132 "bdev_io_pool_size": 65535, 00:12:53.132 "bdev_io_cache_size": 256, 00:12:53.132 "bdev_auto_examine": true, 00:12:53.132 "iobuf_small_cache_size": 128, 00:12:53.132 "iobuf_large_cache_size": 16 00:12:53.132 } 00:12:53.132 }, 00:12:53.132 { 00:12:53.132 "method": "bdev_raid_set_options", 00:12:53.132 "params": { 00:12:53.132 "process_window_size_kb": 1024 00:12:53.132 } 00:12:53.132 }, 00:12:53.132 { 00:12:53.132 "method": "bdev_iscsi_set_options", 00:12:53.132 "params": { 00:12:53.132 "timeout_sec": 30 00:12:53.132 } 00:12:53.132 }, 00:12:53.132 { 00:12:53.132 "method": "bdev_nvme_set_options", 00:12:53.132 "params": { 00:12:53.132 "action_on_timeout": "none", 00:12:53.132 "timeout_us": 0, 00:12:53.132 "timeout_admin_us": 0, 00:12:53.132 "keep_alive_timeout_ms": 10000, 00:12:53.132 "transport_retry_count": 4, 00:12:53.132 "arbitration_burst": 0, 00:12:53.132 "low_priority_weight": 0, 00:12:53.132 "medium_priority_weight": 0, 00:12:53.132 "high_priority_weight": 0, 00:12:53.132 "nvme_adminq_poll_period_us": 10000, 00:12:53.132 "nvme_ioq_poll_period_us": 0, 00:12:53.132 
"io_queue_requests": 0, 00:12:53.132 "delay_cmd_submit": true, 00:12:53.132 "bdev_retry_count": 3, 00:12:53.132 "transport_ack_timeout": 0, 00:12:53.133 "ctrlr_loss_timeout_sec": 0, 00:12:53.133 "reconnect_delay_sec": 0, 00:12:53.133 "fast_io_fail_timeout_sec": 0, 00:12:53.133 "generate_uuids": false, 00:12:53.133 "transport_tos": 0, 00:12:53.133 "io_path_stat": false, 00:12:53.133 "allow_accel_sequence": false 00:12:53.133 } 00:12:53.133 }, 00:12:53.133 { 00:12:53.133 "method": "bdev_nvme_set_hotplug", 00:12:53.133 "params": { 00:12:53.133 "period_us": 100000, 00:12:53.133 "enable": false 00:12:53.133 } 00:12:53.133 }, 00:12:53.133 { 00:12:53.133 "method": "bdev_malloc_create", 00:12:53.133 "params": { 00:12:53.133 "name": "malloc0", 00:12:53.133 "num_blocks": 8192, 00:12:53.133 "block_size": 4096, 00:12:53.133 "physical_block_size": 4096, 00:12:53.133 "uuid": "a39b336c-f3ea-464a-b571-03fc3c48d60d", 00:12:53.133 "optimal_io_boundary": 0 00:12:53.133 } 00:12:53.133 }, 00:12:53.133 { 00:12:53.133 "method": "bdev_wait_for_examine" 00:12:53.133 } 00:12:53.133 ] 00:12:53.133 }, 00:12:53.133 { 00:12:53.133 "subsystem": "scsi", 00:12:53.133 "config": null 00:12:53.133 }, 00:12:53.133 { 00:12:53.133 "subsystem": "scheduler", 00:12:53.133 "config": [ 00:12:53.133 { 00:12:53.133 "method": "framework_set_scheduler", 00:12:53.133 "params": { 00:12:53.133 "name": "static" 00:12:53.133 } 00:12:53.133 } 00:12:53.133 ] 00:12:53.133 }, 00:12:53.133 { 00:12:53.133 "subsystem": "vhost_scsi", 00:12:53.133 "config": [] 00:12:53.133 }, 00:12:53.133 { 00:12:53.133 "subsystem": "vhost_blk", 00:12:53.133 "config": [] 00:12:53.133 }, 00:12:53.133 { 00:12:53.133 "subsystem": "ublk", 00:12:53.133 "config": [ 00:12:53.133 { 00:12:53.133 "method": "ublk_create_target", 00:12:53.133 "params": { 00:12:53.133 "cpumask": "1" 00:12:53.133 } 00:12:53.133 }, 00:12:53.133 { 00:12:53.133 "method": "ublk_start_disk", 00:12:53.133 "params": { 00:12:53.133 "bdev_name": "malloc0", 00:12:53.133 "ublk_id": 0, 00:12:53.133 "num_queues": 1, 00:12:53.133 "queue_depth": 128 00:12:53.133 } 00:12:53.133 } 00:12:53.133 ] 00:12:53.133 }, 00:12:53.133 { 00:12:53.133 "subsystem": "nbd", 00:12:53.133 "config": [] 00:12:53.133 }, 00:12:53.133 { 00:12:53.133 "subsystem": "nvmf", 00:12:53.133 "config": [ 00:12:53.133 { 00:12:53.133 "method": "nvmf_set_config", 00:12:53.133 "params": { 00:12:53.133 "discovery_filter": "match_any", 00:12:53.133 "admin_cmd_passthru": { 00:12:53.133 "identify_ctrlr": false 00:12:53.133 } 00:12:53.133 } 00:12:53.133 }, 00:12:53.133 { 00:12:53.133 "method": "nvmf_set_max_subsystems", 00:12:53.133 "params": { 00:12:53.133 "max_subsystems": 1024 00:12:53.133 } 00:12:53.133 }, 00:12:53.133 { 00:12:53.133 "method": "nvmf_set_crdt", 00:12:53.133 "params": { 00:12:53.133 "crdt1": 0, 00:12:53.133 "crdt2": 0, 00:12:53.133 "crdt3": 0 00:12:53.133 } 00:12:53.133 } 00:12:53.133 ] 00:12:53.133 }, 00:12:53.133 { 00:12:53.133 "subsystem": "iscsi", 00:12:53.133 "config": [ 00:12:53.133 { 00:12:53.133 "method": "iscsi_set_options", 00:12:53.133 "params": { 00:12:53.133 "node_base": "iqn.2016-06.io.spdk", 00:12:53.133 "max_sessions": 128, 00:12:53.133 "max_connections_per_session": 2, 00:12:53.133 "max_queue_depth": 64, 00:12:53.133 "default_time2wait": 2, 00:12:53.133 "default_time2retain": 20, 00:12:53.133 "first_burst_length": 8192, 00:12:53.133 "immediate_data": true, 00:12:53.133 "allow_duplicated_isid": false, 00:12:53.133 "error_recovery_level": 0, 00:12:53.133 "nop_timeout": 60, 00:12:53.133 "nop_in_interval": 30, 00:12:53.133 
"disable_chap": false, 00:12:53.133 "require_chap": false, 00:12:53.133 "mutual_chap": false, 00:12:53.133 "chap_group": 0, 00:12:53.133 "max_large_datain_per_connection": 64, 00:12:53.133 "max_r2t_per_connection": 4, 00:12:53.133 "pdu_pool_size": 36864, 00:12:53.133 "immediate_data_pool_size": 16384, 00:12:53.133 "data_out_pool_size": 2048 00:12:53.133 } 00:12:53.133 } 00:12:53.133 ] 00:12:53.133 } 00:12:53.133 ] 00:12:53.133 }' 00:12:53.133 [2024-12-16 13:17:07.517936] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:53.133 [2024-12-16 13:17:07.518054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68899 ] 00:12:53.133 [2024-12-16 13:17:07.666608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.391 [2024-12-16 13:17:07.841756] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:53.391 [2024-12-16 13:17:07.841999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.958 [2024-12-16 13:17:08.425226] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:53.958 [2024-12-16 13:17:08.432745] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:12:53.958 [2024-12-16 13:17:08.432798] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:12:53.958 [2024-12-16 13:17:08.432804] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:53.958 [2024-12-16 13:17:08.432809] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:53.958 [2024-12-16 13:17:08.441694] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:53.958 [2024-12-16 13:17:08.441712] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:53.958 [2024-12-16 13:17:08.448646] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:53.958 [2024-12-16 13:17:08.448712] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:53.958 [2024-12-16 13:17:08.465642] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:12:54.524 13:17:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:54.524 13:17:09 -- common/autotest_common.sh@862 -- # return 0 00:12:54.524 13:17:09 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:12:54.524 13:17:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.524 13:17:09 -- common/autotest_common.sh@10 -- # set +x 00:12:54.524 13:17:09 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:12:54.524 13:17:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.524 13:17:09 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:12:54.524 13:17:09 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:12:54.524 13:17:09 -- ublk/ublk.sh@125 -- # killprocess 68899 00:12:54.524 13:17:09 -- common/autotest_common.sh@936 -- # '[' -z 68899 ']' 00:12:54.524 13:17:09 -- common/autotest_common.sh@940 -- # kill -0 68899 00:12:54.524 13:17:09 -- common/autotest_common.sh@941 -- # uname 00:12:54.524 13:17:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:54.524 13:17:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68899 00:12:54.524 
13:17:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:54.524 killing process with pid 68899 00:12:54.524 13:17:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:54.524 13:17:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68899' 00:12:54.524 13:17:09 -- common/autotest_common.sh@955 -- # kill 68899 00:12:54.524 13:17:09 -- common/autotest_common.sh@960 -- # wait 68899 00:12:55.458 [2024-12-16 13:17:09.809582] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:12:55.458 [2024-12-16 13:17:09.858653] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:55.458 [2024-12-16 13:17:09.858750] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:12:55.458 [2024-12-16 13:17:09.867654] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:55.458 [2024-12-16 13:17:09.867692] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:12:55.458 [2024-12-16 13:17:09.867697] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:12:55.458 [2024-12-16 13:17:09.867718] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:12:55.458 [2024-12-16 13:17:09.867825] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:12:56.834 13:17:11 -- ublk/ublk.sh@126 -- # trap - EXIT 00:12:56.834 00:12:56.834 real 0m8.436s 00:12:56.834 user 0m6.053s 00:12:56.834 sys 0m3.326s 00:12:56.834 ************************************ 00:12:56.834 END TEST test_save_ublk_config 00:12:56.834 ************************************ 00:12:56.834 13:17:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:56.834 13:17:11 -- common/autotest_common.sh@10 -- # set +x 00:12:56.834 13:17:11 -- ublk/ublk.sh@139 -- # spdk_pid=68981 00:12:56.834 13:17:11 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:56.834 13:17:11 -- ublk/ublk.sh@141 -- # waitforlisten 68981 00:12:56.834 13:17:11 -- common/autotest_common.sh@829 -- # '[' -z 68981 ']' 00:12:56.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.835 13:17:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.835 13:17:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:56.835 13:17:11 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:12:56.835 13:17:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.835 13:17:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:56.835 13:17:11 -- common/autotest_common.sh@10 -- # set +x 00:12:56.835 [2024-12-16 13:17:11.334080] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
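Both targets in the config test above, and the 0x3-masked target for the main ublk suite being launched here, are started in the background and then polled: the harness's waitforlisten helper spins until the RPC socket answers before the first rpc_cmd is issued. A minimal stand-in for that wait, assuming the default socket path:

    ./build/bin/spdk_tgt -m 0x3 -L ublk &    # two reactors (cores 0-1), ublk debug traces enabled
    tgt_pid=$!
    # poll with a 1 s RPC timeout until the target is ready to serve requests
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
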
00:12:56.835 [2024-12-16 13:17:11.334539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68981 ] 00:12:57.093 [2024-12-16 13:17:11.483121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:57.093 [2024-12-16 13:17:11.634049] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:57.093 [2024-12-16 13:17:11.634424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.093 [2024-12-16 13:17:11.634430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.660 13:17:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:57.660 13:17:12 -- common/autotest_common.sh@862 -- # return 0 00:12:57.660 13:17:12 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:12:57.660 13:17:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:57.660 13:17:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:57.660 13:17:12 -- common/autotest_common.sh@10 -- # set +x 00:12:57.660 ************************************ 00:12:57.660 START TEST test_create_ublk 00:12:57.660 ************************************ 00:12:57.660 13:17:12 -- common/autotest_common.sh@1114 -- # test_create_ublk 00:12:57.660 13:17:12 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:12:57.660 13:17:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.660 13:17:12 -- common/autotest_common.sh@10 -- # set +x 00:12:57.660 [2024-12-16 13:17:12.162118] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:57.660 13:17:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.660 13:17:12 -- ublk/ublk.sh@33 -- # ublk_target= 00:12:57.660 13:17:12 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:12:57.660 13:17:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.660 13:17:12 -- common/autotest_common.sh@10 -- # set +x 00:12:57.918 13:17:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.918 13:17:12 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:12:57.918 13:17:12 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:12:57.918 13:17:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.918 13:17:12 -- common/autotest_common.sh@10 -- # set +x 00:12:57.918 [2024-12-16 13:17:12.312748] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:12:57.918 [2024-12-16 13:17:12.313046] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:12:57.918 [2024-12-16 13:17:12.313053] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:57.918 [2024-12-16 13:17:12.313060] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:57.918 [2024-12-16 13:17:12.321811] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:57.918 [2024-12-16 13:17:12.321830] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:57.918 [2024-12-16 13:17:12.328650] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:57.918 [2024-12-16 13:17:12.336802] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:57.918 [2024-12-16 13:17:12.352656] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:12:57.918 13:17:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.918 13:17:12 -- ublk/ublk.sh@37 -- # ublk_id=0 00:12:57.918 13:17:12 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:12:57.918 13:17:12 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:12:57.918 13:17:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.918 13:17:12 -- common/autotest_common.sh@10 -- # set +x 00:12:57.918 13:17:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.918 13:17:12 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:12:57.918 { 00:12:57.918 "ublk_device": "/dev/ublkb0", 00:12:57.919 "id": 0, 00:12:57.919 "queue_depth": 512, 00:12:57.919 "num_queues": 4, 00:12:57.919 "bdev_name": "Malloc0" 00:12:57.919 } 00:12:57.919 ]' 00:12:57.919 13:17:12 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:12:57.919 13:17:12 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:12:57.919 13:17:12 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:12:57.919 13:17:12 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:12:57.919 13:17:12 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:12:57.919 13:17:12 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:12:57.919 13:17:12 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:12:58.177 13:17:12 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:12:58.177 13:17:12 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:12:58.177 13:17:12 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:12:58.177 13:17:12 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:12:58.177 13:17:12 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:12:58.177 13:17:12 -- lvol/common.sh@41 -- # local offset=0 00:12:58.177 13:17:12 -- lvol/common.sh@42 -- # local size=134217728 00:12:58.177 13:17:12 -- lvol/common.sh@43 -- # local rw=write 00:12:58.177 13:17:12 -- lvol/common.sh@44 -- # local pattern=0xcc 00:12:58.177 13:17:12 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:12:58.177 13:17:12 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:12:58.177 13:17:12 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:12:58.177 13:17:12 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:12:58.177 13:17:12 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:12:58.177 13:17:12 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:12:58.177 fio: verification read phase will never start because write phase uses all of runtime 00:12:58.177 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:12:58.177 fio-3.35 00:12:58.177 Starting 1 process 00:13:08.222 00:13:08.222 fio_test: (groupid=0, jobs=1): err= 0: pid=69022: Mon Dec 16 13:17:22 2024 00:13:08.222 write: IOPS=15.3k, BW=59.7MiB/s (62.6MB/s)(597MiB/10001msec); 0 zone resets 00:13:08.222 clat (usec): min=36, max=3813, avg=64.71, stdev=95.35 00:13:08.222 lat (usec): min=37, max=3813, avg=65.11, stdev=95.36 00:13:08.222 clat percentiles (usec): 00:13:08.222 | 1.00th=[ 40], 5.00th=[ 51], 10.00th=[ 54], 20.00th=[ 57], 00:13:08.222 | 30.00th=[ 
59], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 63], 00:13:08.222 | 70.00th=[ 64], 80.00th=[ 66], 90.00th=[ 69], 95.00th=[ 73], 00:13:08.222 | 99.00th=[ 83], 99.50th=[ 90], 99.90th=[ 1876], 99.95th=[ 2835], 00:13:08.222 | 99.99th=[ 3490] 00:13:08.222 bw ( KiB/s): min=56152, max=72360, per=100.00%, avg=61226.11, stdev=3187.33, samples=19 00:13:08.222 iops : min=14038, max=18090, avg=15306.53, stdev=796.83, samples=19 00:13:08.222 lat (usec) : 50=4.73%, 100=94.90%, 250=0.17%, 500=0.03%, 750=0.01% 00:13:08.222 lat (usec) : 1000=0.01% 00:13:08.222 lat (msec) : 2=0.06%, 4=0.09% 00:13:08.222 cpu : usr=1.96%, sys=13.83%, ctx=152867, majf=0, minf=795 00:13:08.222 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:08.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.222 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.222 issued rwts: total=0,152856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.222 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:08.222 00:13:08.222 Run status group 0 (all jobs): 00:13:08.222 WRITE: bw=59.7MiB/s (62.6MB/s), 59.7MiB/s-59.7MiB/s (62.6MB/s-62.6MB/s), io=597MiB (626MB), run=10001-10001msec 00:13:08.222 00:13:08.222 Disk stats (read/write): 00:13:08.222 ublkb0: ios=0/151277, merge=0/0, ticks=0/8216, in_queue=8217, util=99.10% 00:13:08.222 13:17:22 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:08.222 13:17:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.222 13:17:22 -- common/autotest_common.sh@10 -- # set +x 00:13:08.222 [2024-12-16 13:17:22.772257] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:08.481 [2024-12-16 13:17:22.806685] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:08.481 [2024-12-16 13:17:22.807360] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:08.481 [2024-12-16 13:17:22.817682] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:08.481 [2024-12-16 13:17:22.817929] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:08.481 [2024-12-16 13:17:22.817938] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:08.481 13:17:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.481 13:17:22 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:13:08.481 13:17:22 -- common/autotest_common.sh@650 -- # local es=0 00:13:08.481 13:17:22 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:08.481 13:17:22 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:08.481 13:17:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:08.481 13:17:22 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:08.481 13:17:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:08.481 13:17:22 -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:13:08.481 13:17:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.481 13:17:22 -- common/autotest_common.sh@10 -- # set +x 00:13:08.481 [2024-12-16 13:17:22.832720] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:08.481 request: 00:13:08.481 { 00:13:08.481 "ublk_id": 0, 00:13:08.481 "method": "ublk_stop_disk", 00:13:08.481 "req_id": 1 00:13:08.481 } 00:13:08.481 Got JSON-RPC error response 00:13:08.481 response: 00:13:08.481 { 00:13:08.481 "code": -19, 00:13:08.481 "message": "No 
such device" 00:13:08.481 } 00:13:08.481 13:17:22 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:08.481 13:17:22 -- common/autotest_common.sh@653 -- # es=1 00:13:08.481 13:17:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:08.481 13:17:22 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:08.481 13:17:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:08.481 13:17:22 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:08.481 13:17:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.481 13:17:22 -- common/autotest_common.sh@10 -- # set +x 00:13:08.481 [2024-12-16 13:17:22.848683] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:08.481 [2024-12-16 13:17:22.856643] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:08.481 [2024-12-16 13:17:22.856669] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:08.481 13:17:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.481 13:17:22 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:08.481 13:17:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.481 13:17:22 -- common/autotest_common.sh@10 -- # set +x 00:13:08.740 13:17:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.740 13:17:23 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:08.740 13:17:23 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:08.740 13:17:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.740 13:17:23 -- common/autotest_common.sh@10 -- # set +x 00:13:08.740 13:17:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.740 13:17:23 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:08.740 13:17:23 -- lvol/common.sh@26 -- # jq length 00:13:08.740 13:17:23 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:08.740 13:17:23 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:08.740 13:17:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.740 13:17:23 -- common/autotest_common.sh@10 -- # set +x 00:13:08.740 13:17:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.740 13:17:23 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:08.740 13:17:23 -- lvol/common.sh@28 -- # jq length 00:13:08.740 13:17:23 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:08.740 00:13:08.740 real 0m11.156s 00:13:08.740 user 0m0.503s 00:13:08.740 sys 0m1.448s 00:13:08.740 ************************************ 00:13:08.740 END TEST test_create_ublk 00:13:08.740 ************************************ 00:13:08.740 13:17:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:08.740 13:17:23 -- common/autotest_common.sh@10 -- # set +x 00:13:08.998 13:17:23 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:08.998 13:17:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:08.998 13:17:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:08.998 13:17:23 -- common/autotest_common.sh@10 -- # set +x 00:13:08.998 ************************************ 00:13:08.998 START TEST test_create_multi_ublk 00:13:08.998 ************************************ 00:13:08.998 13:17:23 -- common/autotest_common.sh@1114 -- # test_create_multi_ublk 00:13:08.998 13:17:23 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:08.998 13:17:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.998 13:17:23 -- common/autotest_common.sh@10 -- # set +x 00:13:08.998 [2024-12-16 13:17:23.362100] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 
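test_create_multi_ublk, which starts here, repeats the single-disk sequence four times (Malloc0..Malloc3 on ublk IDs 0..3), then cross-checks every field of ublk_get_disks with jq before stopping the disks in order. Stripped of the harness plumbing, the loop it exercises amounts to the following sketch (sizes match the 128 MiB, 4096 B-block bdevs created below):

    for i in 0 1 2 3; do
        ./scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
        ./scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
    done
    ./scripts/rpc.py ublk_get_disks | jq -r '.[].ublk_device'    # expect /dev/ublkb0 .. /dev/ublkb3
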
00:13:08.998 13:17:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.998 13:17:23 -- ublk/ublk.sh@62 -- # ublk_target= 00:13:08.998 13:17:23 -- ublk/ublk.sh@64 -- # seq 0 3 00:13:08.998 13:17:23 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:08.998 13:17:23 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:08.998 13:17:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.998 13:17:23 -- common/autotest_common.sh@10 -- # set +x 00:13:08.998 13:17:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.998 13:17:23 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:08.998 13:17:23 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:08.998 13:17:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.998 13:17:23 -- common/autotest_common.sh@10 -- # set +x 00:13:09.257 [2024-12-16 13:17:23.576748] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:09.257 [2024-12-16 13:17:23.577043] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:09.257 [2024-12-16 13:17:23.577049] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:09.257 [2024-12-16 13:17:23.577055] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:09.257 [2024-12-16 13:17:23.581998] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:09.257 [2024-12-16 13:17:23.582020] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:09.257 [2024-12-16 13:17:23.599647] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:09.257 [2024-12-16 13:17:23.600138] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:09.257 [2024-12-16 13:17:23.614690] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:09.257 13:17:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.257 13:17:23 -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:09.257 13:17:23 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:09.257 13:17:23 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:09.257 13:17:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.257 13:17:23 -- common/autotest_common.sh@10 -- # set +x 00:13:09.516 13:17:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.516 13:17:23 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:09.516 13:17:23 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:09.516 13:17:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.516 13:17:23 -- common/autotest_common.sh@10 -- # set +x 00:13:09.516 [2024-12-16 13:17:23.853729] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:09.516 [2024-12-16 13:17:23.854024] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:09.516 [2024-12-16 13:17:23.854038] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:09.516 [2024-12-16 13:17:23.854043] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:09.516 [2024-12-16 13:17:23.865678] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:09.516 [2024-12-16 13:17:23.865694] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:09.516 [2024-12-16 
13:17:23.877646] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:09.516 [2024-12-16 13:17:23.878129] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:09.516 [2024-12-16 13:17:23.890670] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:09.516 13:17:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.516 13:17:23 -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:09.516 13:17:23 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:09.516 13:17:23 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:09.516 13:17:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.516 13:17:23 -- common/autotest_common.sh@10 -- # set +x 00:13:09.775 13:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.775 13:17:24 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:09.775 13:17:24 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:09.775 13:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.775 13:17:24 -- common/autotest_common.sh@10 -- # set +x 00:13:09.775 [2024-12-16 13:17:24.105751] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:09.775 [2024-12-16 13:17:24.106042] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:09.775 [2024-12-16 13:17:24.106052] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:09.775 [2024-12-16 13:17:24.106060] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:09.775 [2024-12-16 13:17:24.113671] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:09.775 [2024-12-16 13:17:24.113689] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:09.775 [2024-12-16 13:17:24.121655] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:09.775 [2024-12-16 13:17:24.122139] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:09.775 [2024-12-16 13:17:24.134651] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:09.775 13:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.775 13:17:24 -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:09.775 13:17:24 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:09.775 13:17:24 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:09.775 13:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.775 13:17:24 -- common/autotest_common.sh@10 -- # set +x 00:13:09.775 13:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.775 13:17:24 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:09.775 13:17:24 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:09.775 13:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.775 13:17:24 -- common/autotest_common.sh@10 -- # set +x 00:13:09.775 [2024-12-16 13:17:24.293752] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:09.775 [2024-12-16 13:17:24.294043] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:09.775 [2024-12-16 13:17:24.294051] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:09.775 [2024-12-16 13:17:24.294056] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: 
ctrl cmd UBLK_CMD_ADD_DEV 00:13:09.775 [2024-12-16 13:17:24.301659] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:09.775 [2024-12-16 13:17:24.301675] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:09.775 [2024-12-16 13:17:24.309651] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:09.775 [2024-12-16 13:17:24.310133] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:09.775 [2024-12-16 13:17:24.326652] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:09.775 13:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.775 13:17:24 -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:09.775 13:17:24 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:09.775 13:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.775 13:17:24 -- common/autotest_common.sh@10 -- # set +x 00:13:10.033 13:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.034 13:17:24 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:10.034 { 00:13:10.034 "ublk_device": "/dev/ublkb0", 00:13:10.034 "id": 0, 00:13:10.034 "queue_depth": 512, 00:13:10.034 "num_queues": 4, 00:13:10.034 "bdev_name": "Malloc0" 00:13:10.034 }, 00:13:10.034 { 00:13:10.034 "ublk_device": "/dev/ublkb1", 00:13:10.034 "id": 1, 00:13:10.034 "queue_depth": 512, 00:13:10.034 "num_queues": 4, 00:13:10.034 "bdev_name": "Malloc1" 00:13:10.034 }, 00:13:10.034 { 00:13:10.034 "ublk_device": "/dev/ublkb2", 00:13:10.034 "id": 2, 00:13:10.034 "queue_depth": 512, 00:13:10.034 "num_queues": 4, 00:13:10.034 "bdev_name": "Malloc2" 00:13:10.034 }, 00:13:10.034 { 00:13:10.034 "ublk_device": "/dev/ublkb3", 00:13:10.034 "id": 3, 00:13:10.034 "queue_depth": 512, 00:13:10.034 "num_queues": 4, 00:13:10.034 "bdev_name": "Malloc3" 00:13:10.034 } 00:13:10.034 ]' 00:13:10.034 13:17:24 -- ublk/ublk.sh@72 -- # seq 0 3 00:13:10.034 13:17:24 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:10.034 13:17:24 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:10.034 13:17:24 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:10.034 13:17:24 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:10.034 13:17:24 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:10.034 13:17:24 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:10.034 13:17:24 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:10.034 13:17:24 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:10.034 13:17:24 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:10.034 13:17:24 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:10.034 13:17:24 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:10.034 13:17:24 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:10.034 13:17:24 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:10.034 13:17:24 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:13:10.034 13:17:24 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:10.034 13:17:24 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:10.034 13:17:24 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:10.034 13:17:24 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:10.034 13:17:24 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:10.292 13:17:24 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:10.292 13:17:24 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:10.292 13:17:24 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:13:10.292 13:17:24 -- ublk/ublk.sh@72 
-- # for i in $(seq 0 $MAX_DEV_ID) 00:13:10.292 13:17:24 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:10.292 13:17:24 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:10.292 13:17:24 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:10.292 13:17:24 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:10.292 13:17:24 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:10.292 13:17:24 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:10.292 13:17:24 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:10.292 13:17:24 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:10.292 13:17:24 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:10.292 13:17:24 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:10.292 13:17:24 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:10.292 13:17:24 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:10.292 13:17:24 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:10.292 13:17:24 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:10.292 13:17:24 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:10.292 13:17:24 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:10.551 13:17:24 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:10.551 13:17:24 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:10.551 13:17:24 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:10.551 13:17:24 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:10.551 13:17:24 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:10.551 13:17:24 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:10.551 13:17:24 -- ublk/ublk.sh@85 -- # seq 0 3 00:13:10.551 13:17:24 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:10.551 13:17:24 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:10.551 13:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.551 13:17:24 -- common/autotest_common.sh@10 -- # set +x 00:13:10.551 [2024-12-16 13:17:24.957711] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:10.551 [2024-12-16 13:17:25.001190] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:10.551 [2024-12-16 13:17:25.002371] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:10.551 [2024-12-16 13:17:25.008651] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:10.551 [2024-12-16 13:17:25.008895] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:10.551 [2024-12-16 13:17:25.008904] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:10.551 13:17:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.551 13:17:25 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:10.551 13:17:25 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:10.551 13:17:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.551 13:17:25 -- common/autotest_common.sh@10 -- # set +x 00:13:10.551 [2024-12-16 13:17:25.024705] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:10.551 [2024-12-16 13:17:25.069687] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:10.551 [2024-12-16 13:17:25.070465] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:10.551 [2024-12-16 13:17:25.081678] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:10.551 [2024-12-16 13:17:25.081909] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: 
ublk1: remove from tailq 00:13:10.551 [2024-12-16 13:17:25.081917] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:10.551 13:17:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.551 13:17:25 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:10.551 13:17:25 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:10.551 13:17:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.551 13:17:25 -- common/autotest_common.sh@10 -- # set +x 00:13:10.551 [2024-12-16 13:17:25.090709] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:10.809 [2024-12-16 13:17:25.124692] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:10.809 [2024-12-16 13:17:25.125404] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:10.809 [2024-12-16 13:17:25.132654] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:10.809 [2024-12-16 13:17:25.132894] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:10.809 [2024-12-16 13:17:25.132907] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:10.809 13:17:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.809 13:17:25 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:10.809 13:17:25 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:10.809 13:17:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.809 13:17:25 -- common/autotest_common.sh@10 -- # set +x 00:13:10.809 [2024-12-16 13:17:25.148704] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:10.809 [2024-12-16 13:17:25.184133] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:10.809 [2024-12-16 13:17:25.185073] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:10.809 [2024-12-16 13:17:25.193677] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:10.809 [2024-12-16 13:17:25.193913] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:10.809 [2024-12-16 13:17:25.193920] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:10.809 13:17:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.809 13:17:25 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:10.809 [2024-12-16 13:17:25.376706] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:11.067 [2024-12-16 13:17:25.384857] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:11.067 [2024-12-16 13:17:25.384883] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:11.067 13:17:25 -- ublk/ublk.sh@93 -- # seq 0 3 00:13:11.067 13:17:25 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:11.067 13:17:25 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:11.067 13:17:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.067 13:17:25 -- common/autotest_common.sh@10 -- # set +x 00:13:11.326 13:17:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.326 13:17:25 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:11.326 13:17:25 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:11.326 13:17:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.326 13:17:25 -- common/autotest_common.sh@10 -- # set +x 00:13:11.584 13:17:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.584 
13:17:26 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:11.584 13:17:26 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:11.584 13:17:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.584 13:17:26 -- common/autotest_common.sh@10 -- # set +x 00:13:12.151 13:17:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.151 13:17:26 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:12.151 13:17:26 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:12.151 13:17:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.151 13:17:26 -- common/autotest_common.sh@10 -- # set +x 00:13:12.409 13:17:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.409 13:17:26 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:12.409 13:17:26 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:12.409 13:17:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.409 13:17:26 -- common/autotest_common.sh@10 -- # set +x 00:13:12.409 13:17:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.409 13:17:26 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:12.409 13:17:26 -- lvol/common.sh@26 -- # jq length 00:13:12.409 13:17:26 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:12.409 13:17:26 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:12.409 13:17:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.409 13:17:26 -- common/autotest_common.sh@10 -- # set +x 00:13:12.409 13:17:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.409 13:17:26 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:12.409 13:17:26 -- lvol/common.sh@28 -- # jq length 00:13:12.409 13:17:26 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:12.409 00:13:12.409 real 0m3.538s 00:13:12.409 user 0m0.778s 00:13:12.409 sys 0m0.141s 00:13:12.409 ************************************ 00:13:12.410 END TEST test_create_multi_ublk 00:13:12.410 ************************************ 00:13:12.410 13:17:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:12.410 13:17:26 -- common/autotest_common.sh@10 -- # set +x 00:13:12.410 13:17:26 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:12.410 13:17:26 -- ublk/ublk.sh@147 -- # cleanup 00:13:12.410 13:17:26 -- ublk/ublk.sh@130 -- # killprocess 68981 00:13:12.410 13:17:26 -- common/autotest_common.sh@936 -- # '[' -z 68981 ']' 00:13:12.410 13:17:26 -- common/autotest_common.sh@940 -- # kill -0 68981 00:13:12.410 13:17:26 -- common/autotest_common.sh@941 -- # uname 00:13:12.410 13:17:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:12.410 13:17:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68981 00:13:12.410 13:17:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:12.410 killing process with pid 68981 00:13:12.410 13:17:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:12.410 13:17:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68981' 00:13:12.410 13:17:26 -- common/autotest_common.sh@955 -- # kill 68981 00:13:12.410 13:17:26 -- common/autotest_common.sh@960 -- # wait 68981 00:13:12.977 [2024-12-16 13:17:27.470940] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:12.977 [2024-12-16 13:17:27.470986] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:13.544 00:13:13.544 real 0m25.506s 00:13:13.544 user 0m35.530s 00:13:13.544 sys 0m10.165s 00:13:13.544 13:17:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:13.544 
************************************ 00:13:13.544 END TEST ublk 00:13:13.544 ************************************ 00:13:13.544 13:17:28 -- common/autotest_common.sh@10 -- # set +x 00:13:13.805 13:17:28 -- spdk/autotest.sh@247 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:13.805 13:17:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:13.805 13:17:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:13.805 13:17:28 -- common/autotest_common.sh@10 -- # set +x 00:13:13.805 ************************************ 00:13:13.805 START TEST ublk_recovery 00:13:13.805 ************************************ 00:13:13.805 13:17:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:13.805 * Looking for test storage... 00:13:13.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:13.805 13:17:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:13.805 13:17:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:13.805 13:17:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:13.805 13:17:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:13.805 13:17:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:13.805 13:17:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:13.805 13:17:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:13.805 13:17:28 -- scripts/common.sh@335 -- # IFS=.-: 00:13:13.805 13:17:28 -- scripts/common.sh@335 -- # read -ra ver1 00:13:13.805 13:17:28 -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.805 13:17:28 -- scripts/common.sh@336 -- # read -ra ver2 00:13:13.805 13:17:28 -- scripts/common.sh@337 -- # local 'op=<' 00:13:13.805 13:17:28 -- scripts/common.sh@339 -- # ver1_l=2 00:13:13.805 13:17:28 -- scripts/common.sh@340 -- # ver2_l=1 00:13:13.805 13:17:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:13.805 13:17:28 -- scripts/common.sh@343 -- # case "$op" in 00:13:13.805 13:17:28 -- scripts/common.sh@344 -- # : 1 00:13:13.805 13:17:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:13.805 13:17:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:13.805 13:17:28 -- scripts/common.sh@364 -- # decimal 1 00:13:13.805 13:17:28 -- scripts/common.sh@352 -- # local d=1 00:13:13.805 13:17:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:13.805 13:17:28 -- scripts/common.sh@354 -- # echo 1 00:13:13.805 13:17:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:13.805 13:17:28 -- scripts/common.sh@365 -- # decimal 2 00:13:13.805 13:17:28 -- scripts/common.sh@352 -- # local d=2 00:13:13.805 13:17:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:13.805 13:17:28 -- scripts/common.sh@354 -- # echo 2 00:13:13.805 13:17:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:13.805 13:17:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:13.805 13:17:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:13.805 13:17:28 -- scripts/common.sh@367 -- # return 0 00:13:13.805 13:17:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:13.805 13:17:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:13.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.805 --rc genhtml_branch_coverage=1 00:13:13.805 --rc genhtml_function_coverage=1 00:13:13.805 --rc genhtml_legend=1 00:13:13.805 --rc geninfo_all_blocks=1 00:13:13.805 --rc geninfo_unexecuted_blocks=1 00:13:13.805 00:13:13.805 ' 00:13:13.805 13:17:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:13.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.805 --rc genhtml_branch_coverage=1 00:13:13.805 --rc genhtml_function_coverage=1 00:13:13.805 --rc genhtml_legend=1 00:13:13.805 --rc geninfo_all_blocks=1 00:13:13.805 --rc geninfo_unexecuted_blocks=1 00:13:13.805 00:13:13.805 ' 00:13:13.805 13:17:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:13.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.805 --rc genhtml_branch_coverage=1 00:13:13.805 --rc genhtml_function_coverage=1 00:13:13.805 --rc genhtml_legend=1 00:13:13.805 --rc geninfo_all_blocks=1 00:13:13.805 --rc geninfo_unexecuted_blocks=1 00:13:13.805 00:13:13.805 ' 00:13:13.805 13:17:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:13.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.805 --rc genhtml_branch_coverage=1 00:13:13.805 --rc genhtml_function_coverage=1 00:13:13.805 --rc genhtml_legend=1 00:13:13.805 --rc geninfo_all_blocks=1 00:13:13.805 --rc geninfo_unexecuted_blocks=1 00:13:13.805 00:13:13.805 ' 00:13:13.805 13:17:28 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:13.805 13:17:28 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:13.805 13:17:28 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:13.805 13:17:28 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:13.805 13:17:28 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:13.805 13:17:28 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:13.805 13:17:28 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:13.805 13:17:28 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:13.805 13:17:28 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:13.805 13:17:28 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:13.805 13:17:28 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=69372 00:13:13.805 13:17:28 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:13.805 13:17:28 -- ublk/ublk_recovery.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:13.805 13:17:28 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 69372 00:13:13.805 13:17:28 -- common/autotest_common.sh@829 -- # '[' -z 69372 ']' 00:13:13.805 13:17:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.805 13:17:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:13.805 13:17:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.805 13:17:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:13.805 13:17:28 -- common/autotest_common.sh@10 -- # set +x 00:13:13.805 [2024-12-16 13:17:28.370391] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:13.805 [2024-12-16 13:17:28.370909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69372 ] 00:13:14.065 [2024-12-16 13:17:28.515035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:14.324 [2024-12-16 13:17:28.655412] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:14.324 [2024-12-16 13:17:28.656000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.324 [2024-12-16 13:17:28.656098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.891 13:17:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:14.891 13:17:29 -- common/autotest_common.sh@862 -- # return 0 00:13:14.891 13:17:29 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:14.891 13:17:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.891 13:17:29 -- common/autotest_common.sh@10 -- # set +x 00:13:14.891 [2024-12-16 13:17:29.192125] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:14.891 13:17:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.891 13:17:29 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:14.891 13:17:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.891 13:17:29 -- common/autotest_common.sh@10 -- # set +x 00:13:14.891 malloc0 00:13:14.891 13:17:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.891 13:17:29 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:14.891 13:17:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.891 13:17:29 -- common/autotest_common.sh@10 -- # set +x 00:13:14.891 [2024-12-16 13:17:29.283749] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:13:14.891 [2024-12-16 13:17:29.283832] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:14.891 [2024-12-16 13:17:29.283839] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:14.891 [2024-12-16 13:17:29.283846] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:14.891 [2024-12-16 13:17:29.291659] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:14.891 [2024-12-16 13:17:29.291679] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:14.891 [2024-12-16 
13:17:29.299653] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:14.891 [2024-12-16 13:17:29.299767] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:14.891 [2024-12-16 13:17:29.309720] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:14.891 1 00:13:14.891 13:17:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.891 13:17:29 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:15.824 13:17:30 -- ublk/ublk_recovery.sh@31 -- # fio_proc=69407 00:13:15.824 13:17:30 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:15.824 13:17:30 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:16.083 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:16.083 fio-3.35 00:13:16.083 Starting 1 process 00:13:21.352 13:17:35 -- ublk/ublk_recovery.sh@36 -- # kill -9 69372 00:13:21.352 13:17:35 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:13:26.641 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 69372 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:13:26.641 13:17:40 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=69516 00:13:26.641 13:17:40 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:26.641 13:17:40 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 69516 00:13:26.641 13:17:40 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:26.641 13:17:40 -- common/autotest_common.sh@829 -- # '[' -z 69516 ']' 00:13:26.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.641 13:17:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.641 13:17:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:26.641 13:17:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.641 13:17:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:26.641 13:17:40 -- common/autotest_common.sh@10 -- # set +x 00:13:26.641 [2024-12-16 13:17:40.408392] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
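The target starting here is the second half of the recovery scenario: the first instance (pid 69372) was SIGKILLed about five seconds into the 60-second fio job, and this fresh instance (pid 69516) must re-adopt the still-open /dev/ublkb1 without the workload noticing. Condensed from the trace, the whole sequence looks roughly like the sketch below; rpc_cmd and SPDK_BIN_DIR come from the test harness (their definitions live in autotest_common.sh and are assumed here), and error handling is omitted.

    # Minimal sketch of the crash/recovery flow ublk_recovery.sh drives.
    # rpc_cmd is assumed to wrap scripts/rpc.py against /var/tmp/spdk.sock.
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &        # first target (pid 69372 above)
    spdk_pid=$!
    rpc_cmd ublk_create_target
    rpc_cmd bdev_malloc_create -b malloc0 64 4096
    rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128    # exposes /dev/ublkb1
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &
    fio_proc=$!
    sleep 5
    kill -9 "$spdk_pid"                              # crash the target mid-I/O
    sleep 5
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &        # second target (pid 69516 above)
    rpc_cmd ublk_recover_disk malloc0 1              # re-adopt ublk 1 onto malloc0
    wait "$fio_proc"                                 # the 60 s fio job must still finish

The decisive call is ublk_recover_disk, which re-attaches existing kernel ublk device 1 to the freshly created malloc0 bdev (the ublk_start_disk_recovery notice below) rather than creating a new device.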
00:13:26.641 [2024-12-16 13:17:40.408495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69516 ] 00:13:26.641 [2024-12-16 13:17:40.555179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:26.641 [2024-12-16 13:17:40.754362] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:26.641 [2024-12-16 13:17:40.754653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.641 [2024-12-16 13:17:40.754767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.586 13:17:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:27.586 13:17:41 -- common/autotest_common.sh@862 -- # return 0 00:13:27.586 13:17:41 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:13:27.586 13:17:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.586 13:17:41 -- common/autotest_common.sh@10 -- # set +x 00:13:27.586 [2024-12-16 13:17:41.899496] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:27.586 13:17:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.586 13:17:41 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:27.586 13:17:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.586 13:17:41 -- common/autotest_common.sh@10 -- # set +x 00:13:27.586 malloc0 00:13:27.586 13:17:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.586 13:17:41 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:13:27.586 13:17:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.586 13:17:41 -- common/autotest_common.sh@10 -- # set +x 00:13:27.586 [2024-12-16 13:17:42.002799] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:13:27.586 [2024-12-16 13:17:42.002914] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:27.586 [2024-12-16 13:17:42.002942] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:27.586 1 00:13:27.586 [2024-12-16 13:17:42.011691] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:27.586 [2024-12-16 13:17:42.011710] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:13:27.586 [2024-12-16 13:17:42.011788] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:13:27.586 13:17:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.586 13:17:42 -- ublk/ublk_recovery.sh@52 -- # wait 69407 00:13:54.133 [2024-12-16 13:18:05.718647] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:13:54.133 [2024-12-16 13:18:05.725760] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:13:54.133 [2024-12-16 13:18:05.733846] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:13:54.133 [2024-12-16 13:18:05.733929] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:14:16.080 00:14:16.080 fio_test: (groupid=0, jobs=1): err= 0: pid=69410: Mon Dec 16 13:18:30 2024 00:14:16.080 read: IOPS=14.8k, BW=57.8MiB/s (60.6MB/s)(3470MiB/60001msec) 00:14:16.080 slat (nsec): min=1141, max=2009.0k, 
avg=4915.56, stdev=2561.85
00:14:16.080 clat (usec): min=935, max=30424k, avg=3871.39, stdev=232757.33
00:14:16.080 lat (usec): min=940, max=30424k, avg=3876.31, stdev=232757.33
00:14:16.080 clat percentiles (usec):
00:14:16.080 | 1.00th=[ 1745], 5.00th=[ 1844], 10.00th=[ 1860], 20.00th=[ 1893],
00:14:16.080 | 30.00th=[ 1909], 40.00th=[ 1926], 50.00th=[ 1942], 60.00th=[ 1942],
00:14:16.080 | 70.00th=[ 1975], 80.00th=[ 1991], 90.00th=[ 2409], 95.00th=[ 3064],
00:14:16.080 | 99.00th=[ 5145], 99.50th=[ 5473], 99.90th=[ 7046], 99.95th=[ 7767],
00:14:16.080 | 99.99th=[12780]
00:14:16.080 bw ( KiB/s): min=38224, max=126752, per=100.00%, avg=118559.59, stdev=16091.53, samples=59
00:14:16.080 iops : min= 9556, max=31688, avg=29639.90, stdev=4022.88, samples=59
00:14:16.080 write: IOPS=14.8k, BW=57.7MiB/s (60.6MB/s)(3465MiB/60001msec); 0 zone resets
00:14:16.080 slat (nsec): min=1096, max=416605, avg=4941.56, stdev=1565.11
00:14:16.080 clat (usec): min=958, max=30424k, avg=4770.72, stdev=281581.29
00:14:16.080 lat (usec): min=964, max=30424k, avg=4775.66, stdev=281581.28
00:14:16.080 clat percentiles (usec):
00:14:16.080 | 1.00th=[ 1795], 5.00th=[ 1926], 10.00th=[ 1958], 20.00th=[ 1975],
00:14:16.080 | 30.00th=[ 1991], 40.00th=[ 2008], 50.00th=[ 2024], 60.00th=[ 2040],
00:14:16.080 | 70.00th=[ 2057], 80.00th=[ 2089], 90.00th=[ 2474], 95.00th=[ 2966],
00:14:16.080 | 99.00th=[ 5145], 99.50th=[ 5538], 99.90th=[ 7111], 99.95th=[ 7832],
00:14:16.080 | 99.99th=[12780]
00:14:16.080 bw ( KiB/s): min=38080, max=126504, per=100.00%, avg=118373.83, stdev=16225.43, samples=59
00:14:16.080 iops : min= 9520, max=31626, avg=29593.46, stdev=4056.36, samples=59
00:14:16.080 lat (usec) : 1000=0.01%
00:14:16.080 lat (msec) : 2=57.21%, 4=40.13%, 10=2.64%, 20=0.01%, >=2000=0.01%
00:14:16.080 cpu : usr=3.54%, sys=14.85%, ctx=59003, majf=0, minf=13
00:14:16.080 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:14:16.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:16.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:16.080 issued rwts: total=888248,887021,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:16.080 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:16.080
00:14:16.080 Run status group 0 (all jobs):
00:14:16.080 READ: bw=57.8MiB/s (60.6MB/s), 57.8MiB/s-57.8MiB/s (60.6MB/s-60.6MB/s), io=3470MiB (3638MB), run=60001-60001msec
00:14:16.080 WRITE: bw=57.7MiB/s (60.6MB/s), 57.7MiB/s-57.7MiB/s (60.6MB/s-60.6MB/s), io=3465MiB (3633MB), run=60001-60001msec
00:14:16.080
00:14:16.080 Disk stats (read/write):
00:14:16.080 ublkb1: ios=884993/883598, merge=0/0, ticks=3382196/4102863, in_queue=7485059, util=99.92%
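Two takeaways from the fio summary above: the job finished with err= 0 even though the target serving /dev/ublkb1 was killed and recovered mid-run, and the reported bandwidth is self-consistent with the issued I/O counts: 888,248 reads of 4 KiB over 60.001 s is about 60.6 MB/s, exactly the READ line. A quick re-derivation in shell arithmetic, with the totals copied from the issued rwts line of this run:

    # Re-derive fio's bandwidth from this run's issued-I/O totals.
    reads=888248 writes=887021 bs=4096 runtime_ms=60001
    echo "read:  $(( reads  * bs / runtime_ms )) KB/s"   # ~60636 KB/s, i.e. ~60.6 MB/s
    echo "write: $(( writes * bs / runtime_ms )) KB/s"   # ~60552 KB/s, i.e. ~60.6 MB/s

(Bytes per millisecond equal kilobytes per second, which is why dividing by the runtime in milliseconds lands directly in KB/s.)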
00:14:16.080 13:18:30 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:14:16.080 13:18:30 -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.080 13:18:30 -- common/autotest_common.sh@10 -- # set +x
00:14:16.080 [2024-12-16 13:18:30.567700] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:14:16.080 [2024-12-16 13:18:30.604668] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:14:16.080 [2024-12-16 13:18:30.604902] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:14:16.080 [2024-12-16 13:18:30.612657] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:14:16.080 [2024-12-16 13:18:30.612807] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:14:16.080 [2024-12-16 13:18:30.612831] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:14:16.080 13:18:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.080 13:18:30 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target
00:14:16.080 13:18:30 -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.080 13:18:30 -- common/autotest_common.sh@10 -- # set +x
00:14:16.080 [2024-12-16 13:18:30.628714] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown
00:14:16.080 [2024-12-16 13:18:30.636646] ublk.c: 728:_ublk_fini_done: *DEBUG*:
00:14:16.080 [2024-12-16 13:18:30.636711] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:14:16.080 13:18:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.080 13:18:30 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:14:16.080 13:18:30 -- ublk/ublk_recovery.sh@59 -- # cleanup
00:14:16.080 13:18:30 -- ublk/ublk_recovery.sh@14 -- # killprocess 69516
00:14:16.080 13:18:30 -- common/autotest_common.sh@936 -- # '[' -z 69516 ']'
00:14:16.080 13:18:30 -- common/autotest_common.sh@940 -- # kill -0 69516
00:14:16.080 13:18:30 -- common/autotest_common.sh@941 -- # uname
00:14:16.080 13:18:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:16.080 13:18:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69516
00:14:16.339 13:18:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:16.339 13:18:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:16.339 13:18:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69516'
00:14:16.339 killing process with pid 69516
00:14:16.339 13:18:30 -- common/autotest_common.sh@955 -- # kill 69516
00:14:16.339 13:18:30 -- common/autotest_common.sh@960 -- # wait 69516
00:14:17.275 [2024-12-16 13:18:31.754210] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown
00:14:17.275 [2024-12-16 13:18:31.754387] ublk.c: 728:_ublk_fini_done: *DEBUG*:
00:14:18.213
00:14:18.213 real 1m4.299s
00:14:18.213 user 1m47.830s
00:14:18.213 sys 0m21.056s
00:14:18.213 13:18:32 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:18.213 13:18:32 -- common/autotest_common.sh@10 -- # set +x
00:14:18.214 ************************************
00:14:18.214 END TEST ublk_recovery
00:14:18.214 ************************************
00:14:18.214 13:18:32 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']'
00:14:18.214 13:18:32 -- spdk/autotest.sh@255 -- # timing_exit lib
00:14:18.214 13:18:32 -- common/autotest_common.sh@728 -- # xtrace_disable
00:14:18.214 13:18:32 -- common/autotest_common.sh@10 -- # set +x
00:14:18.214 13:18:32 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']'
00:14:18.214 13:18:32 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']'
00:14:18.214 13:18:32 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']'
00:14:18.214 13:18:32 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']'
00:14:18.214 13:18:32 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']'
00:14:18.214 13:18:32 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']'
00:14:18.214 13:18:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:14:18.214 13:18:32 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:14:18.214 13:18:32 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']'
00:14:18.214 13:18:32 -- spdk/autotest.sh@329 -- # '[' 1 -eq 1 ']'
00:14:18.214 13:18:32 -- spdk/autotest.sh@330 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:14:18.214 13:18:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:14:18.214 13:18:32 --
common/autotest_common.sh@1093 -- # xtrace_disable 00:14:18.214 13:18:32 -- common/autotest_common.sh@10 -- # set +x 00:14:18.214 ************************************ 00:14:18.214 START TEST ftl 00:14:18.214 ************************************ 00:14:18.214 13:18:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:18.214 * Looking for test storage... 00:14:18.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:18.214 13:18:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:18.214 13:18:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:18.214 13:18:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:18.214 13:18:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:18.214 13:18:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:18.214 13:18:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:18.214 13:18:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:18.214 13:18:32 -- scripts/common.sh@335 -- # IFS=.-: 00:14:18.214 13:18:32 -- scripts/common.sh@335 -- # read -ra ver1 00:14:18.214 13:18:32 -- scripts/common.sh@336 -- # IFS=.-: 00:14:18.214 13:18:32 -- scripts/common.sh@336 -- # read -ra ver2 00:14:18.214 13:18:32 -- scripts/common.sh@337 -- # local 'op=<' 00:14:18.214 13:18:32 -- scripts/common.sh@339 -- # ver1_l=2 00:14:18.214 13:18:32 -- scripts/common.sh@340 -- # ver2_l=1 00:14:18.214 13:18:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:18.214 13:18:32 -- scripts/common.sh@343 -- # case "$op" in 00:14:18.214 13:18:32 -- scripts/common.sh@344 -- # : 1 00:14:18.214 13:18:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:18.214 13:18:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:18.214 13:18:32 -- scripts/common.sh@364 -- # decimal 1 00:14:18.214 13:18:32 -- scripts/common.sh@352 -- # local d=1 00:14:18.214 13:18:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:18.214 13:18:32 -- scripts/common.sh@354 -- # echo 1 00:14:18.214 13:18:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:18.214 13:18:32 -- scripts/common.sh@365 -- # decimal 2 00:14:18.214 13:18:32 -- scripts/common.sh@352 -- # local d=2 00:14:18.214 13:18:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:18.214 13:18:32 -- scripts/common.sh@354 -- # echo 2 00:14:18.214 13:18:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:18.214 13:18:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:18.214 13:18:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:18.214 13:18:32 -- scripts/common.sh@367 -- # return 0 00:14:18.214 13:18:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:18.214 13:18:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:18.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.214 --rc genhtml_branch_coverage=1 00:14:18.214 --rc genhtml_function_coverage=1 00:14:18.214 --rc genhtml_legend=1 00:14:18.214 --rc geninfo_all_blocks=1 00:14:18.214 --rc geninfo_unexecuted_blocks=1 00:14:18.214 00:14:18.214 ' 00:14:18.214 13:18:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:18.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.214 --rc genhtml_branch_coverage=1 00:14:18.214 --rc genhtml_function_coverage=1 00:14:18.214 --rc genhtml_legend=1 00:14:18.214 --rc geninfo_all_blocks=1 00:14:18.214 --rc geninfo_unexecuted_blocks=1 00:14:18.214 00:14:18.214 ' 00:14:18.214 
13:18:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:18.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.214 --rc genhtml_branch_coverage=1 00:14:18.214 --rc genhtml_function_coverage=1 00:14:18.214 --rc genhtml_legend=1 00:14:18.214 --rc geninfo_all_blocks=1 00:14:18.214 --rc geninfo_unexecuted_blocks=1 00:14:18.214 00:14:18.214 ' 00:14:18.214 13:18:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:18.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.214 --rc genhtml_branch_coverage=1 00:14:18.214 --rc genhtml_function_coverage=1 00:14:18.214 --rc genhtml_legend=1 00:14:18.214 --rc geninfo_all_blocks=1 00:14:18.214 --rc geninfo_unexecuted_blocks=1 00:14:18.214 00:14:18.214 ' 00:14:18.214 13:18:32 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:18.214 13:18:32 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:18.214 13:18:32 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:18.214 13:18:32 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:18.214 13:18:32 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:18.214 13:18:32 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:18.214 13:18:32 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:18.214 13:18:32 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:18.214 13:18:32 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:18.214 13:18:32 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:18.214 13:18:32 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:18.214 13:18:32 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:18.214 13:18:32 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:18.214 13:18:32 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:18.214 13:18:32 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:18.214 13:18:32 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:18.214 13:18:32 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:18.214 13:18:32 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:18.214 13:18:32 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:18.214 13:18:32 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:18.214 13:18:32 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:18.214 13:18:32 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:18.214 13:18:32 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:18.214 13:18:32 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:18.214 13:18:32 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:18.214 13:18:32 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:18.214 13:18:32 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:18.214 13:18:32 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:18.214 13:18:32 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:18.214 13:18:32 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:18.214 13:18:32 
-- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:14:18.214 13:18:32 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:14:18.214 13:18:32 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:14:18.214 13:18:32 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:14:18.214 13:18:32 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:18.788 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:18.788 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:18.788 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:18.788 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:18.788 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:18.788 13:18:33 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=70327 00:14:18.788 13:18:33 -- ftl/ftl.sh@38 -- # waitforlisten 70327 00:14:18.788 13:18:33 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:18.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.788 13:18:33 -- common/autotest_common.sh@829 -- # '[' -z 70327 ']' 00:14:18.788 13:18:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.788 13:18:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:18.788 13:18:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.788 13:18:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:18.788 13:18:33 -- common/autotest_common.sh@10 -- # set +x 00:14:18.788 [2024-12-16 13:18:33.294599] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:18.788 [2024-12-16 13:18:33.294757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70327 ] 00:14:19.049 [2024-12-16 13:18:33.448878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.311 [2024-12-16 13:18:33.676991] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:19.311 [2024-12-16 13:18:33.677211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.573 13:18:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:19.573 13:18:34 -- common/autotest_common.sh@862 -- # return 0 00:14:19.573 13:18:34 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:14:19.834 13:18:34 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:20.778 13:18:35 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:14:20.778 13:18:35 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:21.039 13:18:35 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:14:21.039 13:18:35 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:21.039 13:18:35 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:21.301 13:18:35 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:14:21.301 13:18:35 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:14:21.301 13:18:35 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:14:21.301 13:18:35 -- 
ftl/ftl.sh@50 -- # break 00:14:21.301 13:18:35 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:14:21.301 13:18:35 -- ftl/ftl.sh@59 -- # base_size=1310720 00:14:21.301 13:18:35 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:21.301 13:18:35 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:21.562 13:18:35 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:14:21.562 13:18:35 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:14:21.562 13:18:35 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:14:21.562 13:18:35 -- ftl/ftl.sh@63 -- # break 00:14:21.562 13:18:35 -- ftl/ftl.sh@66 -- # killprocess 70327 00:14:21.562 13:18:35 -- common/autotest_common.sh@936 -- # '[' -z 70327 ']' 00:14:21.562 13:18:35 -- common/autotest_common.sh@940 -- # kill -0 70327 00:14:21.562 13:18:35 -- common/autotest_common.sh@941 -- # uname 00:14:21.562 13:18:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:21.562 13:18:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70327 00:14:21.562 13:18:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:21.562 13:18:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:21.562 killing process with pid 70327 00:14:21.562 13:18:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70327' 00:14:21.562 13:18:35 -- common/autotest_common.sh@955 -- # kill 70327 00:14:21.562 13:18:35 -- common/autotest_common.sh@960 -- # wait 70327 00:14:22.947 13:18:37 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:14:22.947 13:18:37 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:14:22.947 13:18:37 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:22.947 13:18:37 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:22.947 13:18:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:22.947 13:18:37 -- common/autotest_common.sh@10 -- # set +x 00:14:22.947 ************************************ 00:14:22.947 START TEST ftl_fio_basic 00:14:22.947 ************************************ 00:14:22.947 13:18:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:22.947 * Looking for test storage... 
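Before the ftl_fio_basic prologue continues, note how ftl.sh chose the two PCI addresses it just handed to fio.sh: it dumps all bdevs once per role and filters with jq, requiring 64-byte metadata (for the NV cache) or any other sufficiently large non-zoned namespace (for the base device), then takes the first match. A condensed sketch using the exact jq filters from the trace; the --arg and head -n1 plumbing here stands in for the hardcoded "0000:00:06.0" and the `for disk ... break` loop of the original:

    # Sketch of ftl.sh's device selection, jq filters copied from the trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # NV cache: 64B metadata, non-zoned, >= 1310720 blocks -> 0000:00:06.0 here.
    nv_cache=$("$rpc" bdev_get_bdevs | jq -r \
        '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' \
        | head -n1)
    # Base device: any other large non-zoned namespace -> 0000:00:07.0 here.
    device=$("$rpc" bdev_get_bdevs | jq -r --arg nv "$nv_cache" \
        '.[] | select(.driver_specific.nvme[0].pci_address!=$nv and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' \
        | head -n1)
    echo "nv_cache=$nv_cache device=$device"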
00:14:22.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:22.947 13:18:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:22.947 13:18:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:22.947 13:18:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:23.208 13:18:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:23.208 13:18:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:23.208 13:18:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:23.208 13:18:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:23.208 13:18:37 -- scripts/common.sh@335 -- # IFS=.-: 00:14:23.208 13:18:37 -- scripts/common.sh@335 -- # read -ra ver1 00:14:23.208 13:18:37 -- scripts/common.sh@336 -- # IFS=.-: 00:14:23.208 13:18:37 -- scripts/common.sh@336 -- # read -ra ver2 00:14:23.208 13:18:37 -- scripts/common.sh@337 -- # local 'op=<' 00:14:23.208 13:18:37 -- scripts/common.sh@339 -- # ver1_l=2 00:14:23.208 13:18:37 -- scripts/common.sh@340 -- # ver2_l=1 00:14:23.208 13:18:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:23.208 13:18:37 -- scripts/common.sh@343 -- # case "$op" in 00:14:23.208 13:18:37 -- scripts/common.sh@344 -- # : 1 00:14:23.208 13:18:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:23.208 13:18:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:23.208 13:18:37 -- scripts/common.sh@364 -- # decimal 1 00:14:23.208 13:18:37 -- scripts/common.sh@352 -- # local d=1 00:14:23.208 13:18:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:23.208 13:18:37 -- scripts/common.sh@354 -- # echo 1 00:14:23.208 13:18:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:23.208 13:18:37 -- scripts/common.sh@365 -- # decimal 2 00:14:23.208 13:18:37 -- scripts/common.sh@352 -- # local d=2 00:14:23.208 13:18:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:23.208 13:18:37 -- scripts/common.sh@354 -- # echo 2 00:14:23.208 13:18:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:23.208 13:18:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:23.208 13:18:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:23.208 13:18:37 -- scripts/common.sh@367 -- # return 0 00:14:23.208 13:18:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:23.208 13:18:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:23.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.208 --rc genhtml_branch_coverage=1 00:14:23.208 --rc genhtml_function_coverage=1 00:14:23.208 --rc genhtml_legend=1 00:14:23.208 --rc geninfo_all_blocks=1 00:14:23.208 --rc geninfo_unexecuted_blocks=1 00:14:23.208 00:14:23.208 ' 00:14:23.208 13:18:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:23.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.208 --rc genhtml_branch_coverage=1 00:14:23.208 --rc genhtml_function_coverage=1 00:14:23.208 --rc genhtml_legend=1 00:14:23.208 --rc geninfo_all_blocks=1 00:14:23.208 --rc geninfo_unexecuted_blocks=1 00:14:23.208 00:14:23.208 ' 00:14:23.208 13:18:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:23.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.208 --rc genhtml_branch_coverage=1 00:14:23.208 --rc genhtml_function_coverage=1 00:14:23.208 --rc genhtml_legend=1 00:14:23.208 --rc geninfo_all_blocks=1 00:14:23.208 --rc geninfo_unexecuted_blocks=1 00:14:23.208 00:14:23.208 ' 00:14:23.208 13:18:37 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:23.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.208 --rc genhtml_branch_coverage=1 00:14:23.208 --rc genhtml_function_coverage=1 00:14:23.208 --rc genhtml_legend=1 00:14:23.208 --rc geninfo_all_blocks=1 00:14:23.208 --rc geninfo_unexecuted_blocks=1 00:14:23.208 00:14:23.208 ' 00:14:23.208 13:18:37 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:23.208 13:18:37 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:14:23.208 13:18:37 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:23.208 13:18:37 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:23.208 13:18:37 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:23.208 13:18:37 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:23.208 13:18:37 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:23.208 13:18:37 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:23.208 13:18:37 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:23.208 13:18:37 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:23.208 13:18:37 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:23.208 13:18:37 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:23.208 13:18:37 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:23.208 13:18:37 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:23.208 13:18:37 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:23.208 13:18:37 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:23.208 13:18:37 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:23.208 13:18:37 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:23.208 13:18:37 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:23.208 13:18:37 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:23.208 13:18:37 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:23.208 13:18:37 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:23.208 13:18:37 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:23.208 13:18:37 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:23.208 13:18:37 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:23.208 13:18:37 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:23.208 13:18:37 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:23.208 13:18:37 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:23.208 13:18:37 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:23.208 13:18:37 -- ftl/fio.sh@11 -- # declare -A suite 00:14:23.208 13:18:37 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:23.209 13:18:37 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:14:23.209 13:18:37 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:14:23.209 13:18:37 -- ftl/fio.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:23.209 13:18:37 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:14:23.209 13:18:37 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:14:23.209 13:18:37 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:23.209 13:18:37 -- ftl/fio.sh@26 -- # uuid= 00:14:23.209 13:18:37 -- ftl/fio.sh@27 -- # timeout=240 00:14:23.209 13:18:37 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:14:23.209 13:18:37 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:14:23.209 13:18:37 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:14:23.209 13:18:37 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:14:23.209 13:18:37 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:23.209 13:18:37 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:23.209 13:18:37 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:14:23.209 13:18:37 -- ftl/fio.sh@45 -- # svcpid=70465 00:14:23.209 13:18:37 -- ftl/fio.sh@46 -- # waitforlisten 70465 00:14:23.209 13:18:37 -- common/autotest_common.sh@829 -- # '[' -z 70465 ']' 00:14:23.209 13:18:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.209 13:18:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:23.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.209 13:18:37 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:14:23.209 13:18:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.209 13:18:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:23.209 13:18:37 -- common/autotest_common.sh@10 -- # set +x 00:14:23.209 [2024-12-16 13:18:37.666347] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
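fio.sh launches its target with -m 7 rather than the -m 0x3 the ublk tests used: 0x7 is binary 111, i.e. reactors pinned to cores 0, 1 and 2, which is why the startup output below reports "Total cores available: 3" and three "Reactor started" lines. The mask-to-core mapping, spelled out in shell:

    # spdk_tgt's -m takes a hexadecimal core mask: 0x7 = 0b111 -> cores 0, 1, 2.
    mask=7
    printf 'mask 0x%x -> cores:' "$mask"
    for c in 0 1 2 3 4 5 6 7; do
        (( (mask >> c) & 1 )) && printf ' %d' "$c"
    done; printf '\n'    # prints: mask 0x7 -> cores: 0 1 2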
00:14:23.209 [2024-12-16 13:18:37.666486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70465 ] 00:14:23.468 [2024-12-16 13:18:37.819427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:23.468 [2024-12-16 13:18:37.959006] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:23.468 [2024-12-16 13:18:37.959442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.468 [2024-12-16 13:18:37.959908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.468 [2024-12-16 13:18:37.959934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.035 13:18:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:24.035 13:18:38 -- common/autotest_common.sh@862 -- # return 0 00:14:24.035 13:18:38 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:14:24.035 13:18:38 -- ftl/common.sh@54 -- # local name=nvme0 00:14:24.035 13:18:38 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:14:24.035 13:18:38 -- ftl/common.sh@56 -- # local size=103424 00:14:24.035 13:18:38 -- ftl/common.sh@59 -- # local base_bdev 00:14:24.035 13:18:38 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:14:24.294 13:18:38 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:24.294 13:18:38 -- ftl/common.sh@62 -- # local base_size 00:14:24.294 13:18:38 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:24.294 13:18:38 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:14:24.294 13:18:38 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:24.294 13:18:38 -- common/autotest_common.sh@1369 -- # local bs 00:14:24.294 13:18:38 -- common/autotest_common.sh@1370 -- # local nb 00:14:24.294 13:18:38 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:14:24.553 13:18:38 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:24.553 { 00:14:24.553 "name": "nvme0n1", 00:14:24.553 "aliases": [ 00:14:24.553 "a77c1893-2496-4a34-9fd1-a867e6978f60" 00:14:24.553 ], 00:14:24.553 "product_name": "NVMe disk", 00:14:24.553 "block_size": 4096, 00:14:24.553 "num_blocks": 1310720, 00:14:24.553 "uuid": "a77c1893-2496-4a34-9fd1-a867e6978f60", 00:14:24.553 "assigned_rate_limits": { 00:14:24.553 "rw_ios_per_sec": 0, 00:14:24.553 "rw_mbytes_per_sec": 0, 00:14:24.553 "r_mbytes_per_sec": 0, 00:14:24.553 "w_mbytes_per_sec": 0 00:14:24.553 }, 00:14:24.553 "claimed": false, 00:14:24.553 "zoned": false, 00:14:24.553 "supported_io_types": { 00:14:24.553 "read": true, 00:14:24.553 "write": true, 00:14:24.553 "unmap": true, 00:14:24.553 "write_zeroes": true, 00:14:24.553 "flush": true, 00:14:24.553 "reset": true, 00:14:24.553 "compare": true, 00:14:24.553 "compare_and_write": false, 00:14:24.553 "abort": true, 00:14:24.553 "nvme_admin": true, 00:14:24.553 "nvme_io": true 00:14:24.553 }, 00:14:24.553 "driver_specific": { 00:14:24.553 "nvme": [ 00:14:24.553 { 00:14:24.553 "pci_address": "0000:00:07.0", 00:14:24.553 "trid": { 00:14:24.553 "trtype": "PCIe", 00:14:24.553 "traddr": "0000:00:07.0" 00:14:24.553 }, 00:14:24.553 "ctrlr_data": { 00:14:24.553 "cntlid": 0, 00:14:24.553 "vendor_id": "0x1b36", 00:14:24.553 "model_number": "QEMU NVMe Ctrl", 00:14:24.553 "serial_number": 
"12341", 00:14:24.553 "firmware_revision": "8.0.0", 00:14:24.553 "subnqn": "nqn.2019-08.org.qemu:12341", 00:14:24.553 "oacs": { 00:14:24.553 "security": 0, 00:14:24.553 "format": 1, 00:14:24.553 "firmware": 0, 00:14:24.553 "ns_manage": 1 00:14:24.553 }, 00:14:24.553 "multi_ctrlr": false, 00:14:24.553 "ana_reporting": false 00:14:24.553 }, 00:14:24.553 "vs": { 00:14:24.553 "nvme_version": "1.4" 00:14:24.553 }, 00:14:24.553 "ns_data": { 00:14:24.553 "id": 1, 00:14:24.553 "can_share": false 00:14:24.553 } 00:14:24.553 } 00:14:24.553 ], 00:14:24.553 "mp_policy": "active_passive" 00:14:24.553 } 00:14:24.553 } 00:14:24.553 ]' 00:14:24.553 13:18:38 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:24.553 13:18:38 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:24.553 13:18:38 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:24.553 13:18:38 -- common/autotest_common.sh@1373 -- # nb=1310720 00:14:24.553 13:18:38 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:14:24.553 13:18:38 -- common/autotest_common.sh@1377 -- # echo 5120 00:14:24.553 13:18:38 -- ftl/common.sh@63 -- # base_size=5120 00:14:24.553 13:18:38 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:14:24.553 13:18:38 -- ftl/common.sh@67 -- # clear_lvols 00:14:24.553 13:18:38 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:24.553 13:18:38 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:24.811 13:18:39 -- ftl/common.sh@28 -- # stores= 00:14:24.811 13:18:39 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:24.811 13:18:39 -- ftl/common.sh@68 -- # lvs=6a6d8365-2251-43d2-9551-6998068c2798 00:14:24.811 13:18:39 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6a6d8365-2251-43d2-9551-6998068c2798 00:14:25.070 13:18:39 -- ftl/fio.sh@48 -- # split_bdev=d8cf8c3c-a704-47c6-851a-3ea96361ad5e 00:14:25.070 13:18:39 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 d8cf8c3c-a704-47c6-851a-3ea96361ad5e 00:14:25.070 13:18:39 -- ftl/common.sh@35 -- # local name=nvc0 00:14:25.070 13:18:39 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:14:25.070 13:18:39 -- ftl/common.sh@37 -- # local base_bdev=d8cf8c3c-a704-47c6-851a-3ea96361ad5e 00:14:25.070 13:18:39 -- ftl/common.sh@38 -- # local cache_size= 00:14:25.070 13:18:39 -- ftl/common.sh@41 -- # get_bdev_size d8cf8c3c-a704-47c6-851a-3ea96361ad5e 00:14:25.070 13:18:39 -- common/autotest_common.sh@1367 -- # local bdev_name=d8cf8c3c-a704-47c6-851a-3ea96361ad5e 00:14:25.070 13:18:39 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:25.070 13:18:39 -- common/autotest_common.sh@1369 -- # local bs 00:14:25.070 13:18:39 -- common/autotest_common.sh@1370 -- # local nb 00:14:25.070 13:18:39 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d8cf8c3c-a704-47c6-851a-3ea96361ad5e 00:14:25.328 13:18:39 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:25.328 { 00:14:25.328 "name": "d8cf8c3c-a704-47c6-851a-3ea96361ad5e", 00:14:25.328 "aliases": [ 00:14:25.328 "lvs/nvme0n1p0" 00:14:25.328 ], 00:14:25.328 "product_name": "Logical Volume", 00:14:25.328 "block_size": 4096, 00:14:25.328 "num_blocks": 26476544, 00:14:25.328 "uuid": "d8cf8c3c-a704-47c6-851a-3ea96361ad5e", 00:14:25.328 "assigned_rate_limits": { 00:14:25.328 "rw_ios_per_sec": 0, 00:14:25.328 "rw_mbytes_per_sec": 0, 00:14:25.328 "r_mbytes_per_sec": 0, 00:14:25.328 
"w_mbytes_per_sec": 0 00:14:25.328 }, 00:14:25.328 "claimed": false, 00:14:25.328 "zoned": false, 00:14:25.328 "supported_io_types": { 00:14:25.328 "read": true, 00:14:25.328 "write": true, 00:14:25.328 "unmap": true, 00:14:25.328 "write_zeroes": true, 00:14:25.328 "flush": false, 00:14:25.328 "reset": true, 00:14:25.328 "compare": false, 00:14:25.328 "compare_and_write": false, 00:14:25.329 "abort": false, 00:14:25.329 "nvme_admin": false, 00:14:25.329 "nvme_io": false 00:14:25.329 }, 00:14:25.329 "driver_specific": { 00:14:25.329 "lvol": { 00:14:25.329 "lvol_store_uuid": "6a6d8365-2251-43d2-9551-6998068c2798", 00:14:25.329 "base_bdev": "nvme0n1", 00:14:25.329 "thin_provision": true, 00:14:25.329 "snapshot": false, 00:14:25.329 "clone": false, 00:14:25.329 "esnap_clone": false 00:14:25.329 } 00:14:25.329 } 00:14:25.329 } 00:14:25.329 ]' 00:14:25.329 13:18:39 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:25.329 13:18:39 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:25.329 13:18:39 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:25.329 13:18:39 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:25.329 13:18:39 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:25.329 13:18:39 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:25.329 13:18:39 -- ftl/common.sh@41 -- # local base_size=5171 00:14:25.329 13:18:39 -- ftl/common.sh@44 -- # local nvc_bdev 00:14:25.329 13:18:39 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:14:25.587 13:18:40 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:14:25.587 13:18:40 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:14:25.587 13:18:40 -- ftl/common.sh@48 -- # get_bdev_size d8cf8c3c-a704-47c6-851a-3ea96361ad5e 00:14:25.587 13:18:40 -- common/autotest_common.sh@1367 -- # local bdev_name=d8cf8c3c-a704-47c6-851a-3ea96361ad5e 00:14:25.587 13:18:40 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:25.587 13:18:40 -- common/autotest_common.sh@1369 -- # local bs 00:14:25.587 13:18:40 -- common/autotest_common.sh@1370 -- # local nb 00:14:25.587 13:18:40 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d8cf8c3c-a704-47c6-851a-3ea96361ad5e 00:14:25.846 13:18:40 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:25.846 { 00:14:25.846 "name": "d8cf8c3c-a704-47c6-851a-3ea96361ad5e", 00:14:25.846 "aliases": [ 00:14:25.846 "lvs/nvme0n1p0" 00:14:25.846 ], 00:14:25.846 "product_name": "Logical Volume", 00:14:25.846 "block_size": 4096, 00:14:25.846 "num_blocks": 26476544, 00:14:25.846 "uuid": "d8cf8c3c-a704-47c6-851a-3ea96361ad5e", 00:14:25.846 "assigned_rate_limits": { 00:14:25.846 "rw_ios_per_sec": 0, 00:14:25.846 "rw_mbytes_per_sec": 0, 00:14:25.846 "r_mbytes_per_sec": 0, 00:14:25.846 "w_mbytes_per_sec": 0 00:14:25.846 }, 00:14:25.846 "claimed": false, 00:14:25.846 "zoned": false, 00:14:25.846 "supported_io_types": { 00:14:25.846 "read": true, 00:14:25.846 "write": true, 00:14:25.846 "unmap": true, 00:14:25.846 "write_zeroes": true, 00:14:25.846 "flush": false, 00:14:25.846 "reset": true, 00:14:25.846 "compare": false, 00:14:25.846 "compare_and_write": false, 00:14:25.846 "abort": false, 00:14:25.846 "nvme_admin": false, 00:14:25.846 "nvme_io": false 00:14:25.846 }, 00:14:25.846 "driver_specific": { 00:14:25.846 "lvol": { 00:14:25.846 "lvol_store_uuid": "6a6d8365-2251-43d2-9551-6998068c2798", 00:14:25.846 "base_bdev": "nvme0n1", 00:14:25.846 "thin_provision": true, 
00:14:25.846 "snapshot": false, 00:14:25.846 "clone": false, 00:14:25.846 "esnap_clone": false 00:14:25.846 } 00:14:25.846 } 00:14:25.846 } 00:14:25.846 ]' 00:14:25.846 13:18:40 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:25.846 13:18:40 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:25.846 13:18:40 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:25.846 13:18:40 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:25.846 13:18:40 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:25.846 13:18:40 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:25.846 13:18:40 -- ftl/common.sh@48 -- # cache_size=5171 00:14:25.846 13:18:40 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:14:26.105 13:18:40 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:14:26.105 13:18:40 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:14:26.105 13:18:40 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:14:26.105 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:14:26.105 13:18:40 -- ftl/fio.sh@56 -- # get_bdev_size d8cf8c3c-a704-47c6-851a-3ea96361ad5e 00:14:26.105 13:18:40 -- common/autotest_common.sh@1367 -- # local bdev_name=d8cf8c3c-a704-47c6-851a-3ea96361ad5e 00:14:26.105 13:18:40 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:26.105 13:18:40 -- common/autotest_common.sh@1369 -- # local bs 00:14:26.105 13:18:40 -- common/autotest_common.sh@1370 -- # local nb 00:14:26.105 13:18:40 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d8cf8c3c-a704-47c6-851a-3ea96361ad5e 00:14:26.105 13:18:40 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:26.105 { 00:14:26.105 "name": "d8cf8c3c-a704-47c6-851a-3ea96361ad5e", 00:14:26.105 "aliases": [ 00:14:26.105 "lvs/nvme0n1p0" 00:14:26.105 ], 00:14:26.105 "product_name": "Logical Volume", 00:14:26.105 "block_size": 4096, 00:14:26.105 "num_blocks": 26476544, 00:14:26.105 "uuid": "d8cf8c3c-a704-47c6-851a-3ea96361ad5e", 00:14:26.105 "assigned_rate_limits": { 00:14:26.105 "rw_ios_per_sec": 0, 00:14:26.105 "rw_mbytes_per_sec": 0, 00:14:26.105 "r_mbytes_per_sec": 0, 00:14:26.105 "w_mbytes_per_sec": 0 00:14:26.105 }, 00:14:26.105 "claimed": false, 00:14:26.105 "zoned": false, 00:14:26.105 "supported_io_types": { 00:14:26.105 "read": true, 00:14:26.105 "write": true, 00:14:26.105 "unmap": true, 00:14:26.105 "write_zeroes": true, 00:14:26.105 "flush": false, 00:14:26.105 "reset": true, 00:14:26.105 "compare": false, 00:14:26.105 "compare_and_write": false, 00:14:26.105 "abort": false, 00:14:26.105 "nvme_admin": false, 00:14:26.105 "nvme_io": false 00:14:26.105 }, 00:14:26.105 "driver_specific": { 00:14:26.105 "lvol": { 00:14:26.105 "lvol_store_uuid": "6a6d8365-2251-43d2-9551-6998068c2798", 00:14:26.105 "base_bdev": "nvme0n1", 00:14:26.105 "thin_provision": true, 00:14:26.105 "snapshot": false, 00:14:26.105 "clone": false, 00:14:26.105 "esnap_clone": false 00:14:26.105 } 00:14:26.105 } 00:14:26.105 } 00:14:26.105 ]' 00:14:26.105 13:18:40 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:26.105 13:18:40 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:26.105 13:18:40 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:26.364 13:18:40 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:26.364 13:18:40 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:26.364 13:18:40 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:26.364 
13:18:40 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:14:26.364 13:18:40 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:14:26.364 13:18:40 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d8cf8c3c-a704-47c6-851a-3ea96361ad5e -c nvc0n1p0 --l2p_dram_limit 60 00:14:26.364 [2024-12-16 13:18:40.873850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.364 [2024-12-16 13:18:40.874144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:26.364 [2024-12-16 13:18:40.874198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:14:26.364 [2024-12-16 13:18:40.874233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.364 [2024-12-16 13:18:40.874333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.364 [2024-12-16 13:18:40.874349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:26.364 [2024-12-16 13:18:40.874359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:14:26.364 [2024-12-16 13:18:40.874365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.364 [2024-12-16 13:18:40.874390] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:26.364 [2024-12-16 13:18:40.875006] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:26.364 [2024-12-16 13:18:40.875027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.364 [2024-12-16 13:18:40.875033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:26.364 [2024-12-16 13:18:40.875041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:14:26.364 [2024-12-16 13:18:40.875046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.364 [2024-12-16 13:18:40.875108] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 46aae604-979d-40aa-96e4-fcdd3064d8c9 00:14:26.364 [2024-12-16 13:18:40.876121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.364 [2024-12-16 13:18:40.876142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:14:26.364 [2024-12-16 13:18:40.876150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:14:26.364 [2024-12-16 13:18:40.876158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.364 [2024-12-16 13:18:40.881390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.364 [2024-12-16 13:18:40.881415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:26.364 [2024-12-16 13:18:40.881422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.167 ms 00:14:26.364 [2024-12-16 13:18:40.881429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.364 [2024-12-16 13:18:40.881497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.364 [2024-12-16 13:18:40.881506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:26.364 [2024-12-16 13:18:40.881512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:14:26.364 [2024-12-16 13:18:40.881520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.364 [2024-12-16 13:18:40.881568] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:14:26.364 [2024-12-16 13:18:40.881577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:14:26.364 [2024-12-16 13:18:40.881585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:14:26.364 [2024-12-16 13:18:40.881592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.364 [2024-12-16 13:18:40.881620] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:14:26.364 [2024-12-16 13:18:40.884622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.364 [2024-12-16 13:18:40.884652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:26.364 [2024-12-16 13:18:40.884661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.007 ms 00:14:26.364 [2024-12-16 13:18:40.884667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.364 [2024-12-16 13:18:40.884702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.364 [2024-12-16 13:18:40.884709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:14:26.364 [2024-12-16 13:18:40.884716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:14:26.364 [2024-12-16 13:18:40.884722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.364 [2024-12-16 13:18:40.884751] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:14:26.364 [2024-12-16 13:18:40.884838] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:14:26.364 [2024-12-16 13:18:40.884850] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:14:26.364 [2024-12-16 13:18:40.884858] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:14:26.364 [2024-12-16 13:18:40.884868] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:14:26.364 [2024-12-16 13:18:40.884874] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:14:26.364 [2024-12-16 13:18:40.884882] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:14:26.364 [2024-12-16 13:18:40.884890] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:14:26.364 [2024-12-16 13:18:40.884896] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:14:26.364 [2024-12-16 13:18:40.884902] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:14:26.364 [2024-12-16 13:18:40.884908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.364 [2024-12-16 13:18:40.884914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:14:26.364 [2024-12-16 13:18:40.884921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:14:26.364 [2024-12-16 13:18:40.884926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.365 [2024-12-16 13:18:40.884991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.365 [2024-12-16 13:18:40.884997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:14:26.365 [2024-12-16 13:18:40.885004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.036 ms 00:14:26.365 [2024-12-16 13:18:40.885009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.365 [2024-12-16 13:18:40.885087] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:14:26.365 [2024-12-16 13:18:40.885094] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:14:26.365 [2024-12-16 13:18:40.885101] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:26.365 [2024-12-16 13:18:40.885107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:26.365 [2024-12-16 13:18:40.885114] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:14:26.365 [2024-12-16 13:18:40.885124] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:14:26.365 [2024-12-16 13:18:40.885130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:14:26.365 [2024-12-16 13:18:40.885135] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:14:26.365 [2024-12-16 13:18:40.885142] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:14:26.365 [2024-12-16 13:18:40.885146] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:26.365 [2024-12-16 13:18:40.885153] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:14:26.365 [2024-12-16 13:18:40.885158] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:14:26.365 [2024-12-16 13:18:40.885165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:26.365 [2024-12-16 13:18:40.885170] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:14:26.365 [2024-12-16 13:18:40.885177] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:14:26.365 [2024-12-16 13:18:40.885183] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:26.365 [2024-12-16 13:18:40.885190] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:14:26.365 [2024-12-16 13:18:40.885194] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:14:26.365 [2024-12-16 13:18:40.885200] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:26.365 [2024-12-16 13:18:40.885205] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:14:26.365 [2024-12-16 13:18:40.885211] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:14:26.365 [2024-12-16 13:18:40.885216] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:14:26.365 [2024-12-16 13:18:40.885222] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:14:26.365 [2024-12-16 13:18:40.885227] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:14:26.365 [2024-12-16 13:18:40.885233] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:26.365 [2024-12-16 13:18:40.885238] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:14:26.365 [2024-12-16 13:18:40.885245] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:14:26.365 [2024-12-16 13:18:40.885249] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:26.365 [2024-12-16 13:18:40.885256] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:14:26.365 [2024-12-16 13:18:40.885260] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:14:26.365 [2024-12-16 13:18:40.885266] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:26.365 [2024-12-16 13:18:40.885271] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:14:26.365 [2024-12-16 13:18:40.885278] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:14:26.365 [2024-12-16 13:18:40.885296] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:26.365 [2024-12-16 13:18:40.885303] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:14:26.365 [2024-12-16 13:18:40.885308] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:14:26.365 [2024-12-16 13:18:40.885315] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:26.365 [2024-12-16 13:18:40.885321] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:14:26.365 [2024-12-16 13:18:40.885328] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:14:26.365 [2024-12-16 13:18:40.885332] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:26.365 [2024-12-16 13:18:40.885338] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:14:26.365 [2024-12-16 13:18:40.885344] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:14:26.365 [2024-12-16 13:18:40.885350] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:26.365 [2024-12-16 13:18:40.885355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:26.365 [2024-12-16 13:18:40.885362] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:14:26.365 [2024-12-16 13:18:40.885367] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:14:26.365 [2024-12-16 13:18:40.885374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:14:26.365 [2024-12-16 13:18:40.885380] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:14:26.365 [2024-12-16 13:18:40.885387] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:14:26.365 [2024-12-16 13:18:40.885392] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:14:26.365 [2024-12-16 13:18:40.885399] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:14:26.365 [2024-12-16 13:18:40.885407] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:26.365 [2024-12-16 13:18:40.885415] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:14:26.365 [2024-12-16 13:18:40.885420] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:14:26.365 [2024-12-16 13:18:40.885427] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:14:26.365 [2024-12-16 13:18:40.885432] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:14:26.365 [2024-12-16 13:18:40.885438] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:14:26.365 [2024-12-16 13:18:40.885444] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:14:26.365 
[2024-12-16 13:18:40.885450] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:14:26.365 [2024-12-16 13:18:40.885455] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:14:26.365 [2024-12-16 13:18:40.885462] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:14:26.365 [2024-12-16 13:18:40.885468] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:14:26.365 [2024-12-16 13:18:40.885475] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:14:26.365 [2024-12-16 13:18:40.885480] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:14:26.365 [2024-12-16 13:18:40.885489] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:14:26.365 [2024-12-16 13:18:40.885494] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:14:26.365 [2024-12-16 13:18:40.885503] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:26.365 [2024-12-16 13:18:40.885509] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:14:26.365 [2024-12-16 13:18:40.885516] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:14:26.365 [2024-12-16 13:18:40.885522] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:14:26.365 [2024-12-16 13:18:40.885529] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:14:26.365 [2024-12-16 13:18:40.885535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.365 [2024-12-16 13:18:40.885542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:14:26.365 [2024-12-16 13:18:40.885548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms 00:14:26.365 [2024-12-16 13:18:40.885555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.365 [2024-12-16 13:18:40.897888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.365 [2024-12-16 13:18:40.897917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:26.365 [2024-12-16 13:18:40.897925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.265 ms 00:14:26.365 [2024-12-16 13:18:40.897932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.365 [2024-12-16 13:18:40.898010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.365 [2024-12-16 13:18:40.898019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:26.365 [2024-12-16 13:18:40.898025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:14:26.366 [2024-12-16 13:18:40.898034] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.366 [2024-12-16 13:18:40.923570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.366 [2024-12-16 13:18:40.923596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:26.366 [2024-12-16 13:18:40.923604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.489 ms 00:14:26.366 [2024-12-16 13:18:40.923612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.366 [2024-12-16 13:18:40.923650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.366 [2024-12-16 13:18:40.923659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:26.366 [2024-12-16 13:18:40.923666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:14:26.366 [2024-12-16 13:18:40.923674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.366 [2024-12-16 13:18:40.924009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.366 [2024-12-16 13:18:40.924031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:26.366 [2024-12-16 13:18:40.924038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:14:26.366 [2024-12-16 13:18:40.924045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.366 [2024-12-16 13:18:40.924157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.366 [2024-12-16 13:18:40.924166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:26.366 [2024-12-16 13:18:40.924173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:14:26.366 [2024-12-16 13:18:40.924179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.624 [2024-12-16 13:18:40.946129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.624 [2024-12-16 13:18:40.946173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:26.624 [2024-12-16 13:18:40.946189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.924 ms 00:14:26.624 [2024-12-16 13:18:40.946202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.624 [2024-12-16 13:18:40.959186] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:14:26.624 [2024-12-16 13:18:40.971978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.624 [2024-12-16 13:18:40.972000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:26.624 [2024-12-16 13:18:40.972010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.641 ms 00:14:26.624 [2024-12-16 13:18:40.972017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.624 [2024-12-16 13:18:41.025726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.624 [2024-12-16 13:18:41.025753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:14:26.624 [2024-12-16 13:18:41.025763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.670 ms 00:14:26.624 [2024-12-16 13:18:41.025770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.624 [2024-12-16 13:18:41.025809] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
00:14:26.624 [2024-12-16 13:18:41.025818] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:14:29.903 [2024-12-16 13:18:43.741805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.903 [2024-12-16 13:18:43.741855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:14:29.903 [2024-12-16 13:18:43.741871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2715.984 ms 00:14:29.903 [2024-12-16 13:18:43.741879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.903 [2024-12-16 13:18:43.742074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.903 [2024-12-16 13:18:43.742085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:29.903 [2024-12-16 13:18:43.742096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:14:29.903 [2024-12-16 13:18:43.742106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.903 [2024-12-16 13:18:43.765470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.903 [2024-12-16 13:18:43.765504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:14:29.903 [2024-12-16 13:18:43.765516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.319 ms 00:14:29.903 [2024-12-16 13:18:43.765524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.903 [2024-12-16 13:18:43.787939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.903 [2024-12-16 13:18:43.787968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:14:29.903 [2024-12-16 13:18:43.787982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.374 ms 00:14:29.903 [2024-12-16 13:18:43.787990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.904 [2024-12-16 13:18:43.788316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.904 [2024-12-16 13:18:43.788327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:14:29.904 [2024-12-16 13:18:43.788336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:14:29.904 [2024-12-16 13:18:43.788343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.904 [2024-12-16 13:18:43.850027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.904 [2024-12-16 13:18:43.850056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:14:29.904 [2024-12-16 13:18:43.850068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.641 ms 00:14:29.904 [2024-12-16 13:18:43.850075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.904 [2024-12-16 13:18:43.874116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.904 [2024-12-16 13:18:43.874144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:14:29.904 [2024-12-16 13:18:43.874159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.997 ms 00:14:29.904 [2024-12-16 13:18:43.874166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.904 [2024-12-16 13:18:43.877978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.904 [2024-12-16 13:18:43.878007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
00:14:29.904 [2024-12-16 13:18:43.878020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.765 ms 00:14:29.904 [2024-12-16 13:18:43.878027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.904 [2024-12-16 13:18:43.901181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.904 [2024-12-16 13:18:43.901310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:14:29.904 [2024-12-16 13:18:43.901330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.112 ms 00:14:29.904 [2024-12-16 13:18:43.901338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.904 [2024-12-16 13:18:43.901397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.904 [2024-12-16 13:18:43.901407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:14:29.904 [2024-12-16 13:18:43.901416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:14:29.904 [2024-12-16 13:18:43.901423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.904 [2024-12-16 13:18:43.901506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.904 [2024-12-16 13:18:43.901517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:14:29.904 [2024-12-16 13:18:43.901526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:14:29.904 [2024-12-16 13:18:43.901533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.904 [2024-12-16 13:18:43.902427] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3028.154 ms, result 0 00:14:29.904 { 00:14:29.904 "name": "ftl0", 00:14:29.904 "uuid": "46aae604-979d-40aa-96e4-fcdd3064d8c9" 00:14:29.904 } 00:14:29.904 13:18:43 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:14:29.904 13:18:43 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:14:29.904 13:18:43 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:29.904 13:18:43 -- common/autotest_common.sh@899 -- # local i 00:14:29.904 13:18:43 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:29.904 13:18:43 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:29.904 13:18:43 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:29.904 13:18:44 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:14:29.904 [ 00:14:29.904 { 00:14:29.904 "name": "ftl0", 00:14:29.904 "aliases": [ 00:14:29.904 "46aae604-979d-40aa-96e4-fcdd3064d8c9" 00:14:29.904 ], 00:14:29.904 "product_name": "FTL disk", 00:14:29.904 "block_size": 4096, 00:14:29.904 "num_blocks": 20971520, 00:14:29.904 "uuid": "46aae604-979d-40aa-96e4-fcdd3064d8c9", 00:14:29.904 "assigned_rate_limits": { 00:14:29.904 "rw_ios_per_sec": 0, 00:14:29.904 "rw_mbytes_per_sec": 0, 00:14:29.904 "r_mbytes_per_sec": 0, 00:14:29.904 "w_mbytes_per_sec": 0 00:14:29.904 }, 00:14:29.904 "claimed": false, 00:14:29.904 "zoned": false, 00:14:29.904 "supported_io_types": { 00:14:29.904 "read": true, 00:14:29.904 "write": true, 00:14:29.904 "unmap": true, 00:14:29.904 "write_zeroes": true, 00:14:29.904 "flush": true, 00:14:29.904 "reset": false, 00:14:29.904 "compare": false, 00:14:29.904 "compare_and_write": false, 00:14:29.904 "abort": false, 00:14:29.904 "nvme_admin": false, 00:14:29.904 "nvme_io": false 00:14:29.904 }, 
00:14:29.904 "driver_specific": { 00:14:29.904 "ftl": { 00:14:29.904 "base_bdev": "d8cf8c3c-a704-47c6-851a-3ea96361ad5e", 00:14:29.904 "cache": "nvc0n1p0" 00:14:29.904 } 00:14:29.904 } 00:14:29.904 } 00:14:29.904 ] 00:14:29.904 13:18:44 -- common/autotest_common.sh@905 -- # return 0 00:14:29.904 13:18:44 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:14:29.904 13:18:44 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:14:30.162 13:18:44 -- ftl/fio.sh@70 -- # echo ']}' 00:14:30.162 13:18:44 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:14:30.162 [2024-12-16 13:18:44.679452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.162 [2024-12-16 13:18:44.679492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:14:30.162 [2024-12-16 13:18:44.679502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:30.162 [2024-12-16 13:18:44.679510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.162 [2024-12-16 13:18:44.679544] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:14:30.162 [2024-12-16 13:18:44.681717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.162 [2024-12-16 13:18:44.681741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:14:30.162 [2024-12-16 13:18:44.681753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.158 ms 00:14:30.162 [2024-12-16 13:18:44.681760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.162 [2024-12-16 13:18:44.682208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.162 [2024-12-16 13:18:44.682219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:14:30.162 [2024-12-16 13:18:44.682227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:14:30.162 [2024-12-16 13:18:44.682232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.162 [2024-12-16 13:18:44.684715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.162 [2024-12-16 13:18:44.684731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:14:30.162 [2024-12-16 13:18:44.684739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.455 ms 00:14:30.162 [2024-12-16 13:18:44.684746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.162 [2024-12-16 13:18:44.689693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.162 [2024-12-16 13:18:44.689713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:14:30.162 [2024-12-16 13:18:44.689722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.917 ms 00:14:30.162 [2024-12-16 13:18:44.689728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.162 [2024-12-16 13:18:44.708091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.162 [2024-12-16 13:18:44.708115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:14:30.162 [2024-12-16 13:18:44.708125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.278 ms 00:14:30.162 [2024-12-16 13:18:44.708130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.162 [2024-12-16 13:18:44.720494] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.162 [2024-12-16 13:18:44.720519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:14:30.162 [2024-12-16 13:18:44.720540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.324 ms 00:14:30.162 [2024-12-16 13:18:44.720546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.162 [2024-12-16 13:18:44.720705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.162 [2024-12-16 13:18:44.720716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:14:30.162 [2024-12-16 13:18:44.720724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:14:30.162 [2024-12-16 13:18:44.720730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.421 [2024-12-16 13:18:44.738765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.421 [2024-12-16 13:18:44.738868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:14:30.421 [2024-12-16 13:18:44.738883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.010 ms 00:14:30.421 [2024-12-16 13:18:44.738888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.421 [2024-12-16 13:18:44.756612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.421 [2024-12-16 13:18:44.756644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:14:30.421 [2024-12-16 13:18:44.756654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.687 ms 00:14:30.421 [2024-12-16 13:18:44.756659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.421 [2024-12-16 13:18:44.773939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.421 [2024-12-16 13:18:44.774036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:14:30.421 [2024-12-16 13:18:44.774050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.244 ms 00:14:30.421 [2024-12-16 13:18:44.774056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.421 [2024-12-16 13:18:44.791382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.421 [2024-12-16 13:18:44.791406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:14:30.421 [2024-12-16 13:18:44.791415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.252 ms 00:14:30.421 [2024-12-16 13:18:44.791420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.421 [2024-12-16 13:18:44.791457] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:14:30.421 [2024-12-16 13:18:44.791468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:14:30.421 [2024-12-16 13:18:44.791477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:14:30.421 [2024-12-16 13:18:44.791483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:14:30.421 [2024-12-16 13:18:44.791490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:14:30.421 [2024-12-16 13:18:44.791495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:14:30.421 [2024-12-16 13:18:44.791502] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:14:30.421 [2024-12-16 13:18:44.791508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:14:30.421 [2024-12-16 13:18:44.791515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:14:30.421 [2024-12-16 13:18:44.791521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:14:30.421 [2024-12-16 13:18:44.791528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:14:30.421 [2024-12-16 13:18:44.791533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:14:30.421 [2024-12-16 13:18:44.791540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:14:30.421 [2024-12-16 13:18:44.791545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:14:30.421 [2024-12-16 13:18:44.791552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 
13:18:44.791678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:14:30.422 [2024-12-16 13:18:44.791837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.791998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.792003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.792010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.792015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.792022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.792027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.792034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.792039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.792047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.792052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.792069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.792074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.792081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.792094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.792102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.792108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.792116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.792121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.792129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.792135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.792142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:14:30.422 [2024-12-16 13:18:44.792154] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:14:30.422 [2024-12-16 13:18:44.792161] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 46aae604-979d-40aa-96e4-fcdd3064d8c9 00:14:30.422 [2024-12-16 13:18:44.792168] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:14:30.422 [2024-12-16 13:18:44.792174] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:14:30.422 [2024-12-16 13:18:44.792180] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:14:30.422 [2024-12-16 13:18:44.792186] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:14:30.422 [2024-12-16 13:18:44.792192] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:14:30.422 [2024-12-16 13:18:44.792199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:14:30.422 [2024-12-16 13:18:44.792206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:14:30.422 [2024-12-16 13:18:44.792212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:14:30.422 [2024-12-16 13:18:44.792217] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:14:30.422 [2024-12-16 13:18:44.792224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.422 [2024-12-16 13:18:44.792230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:14:30.422 [2024-12-16 13:18:44.792238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:14:30.422 [2024-12-16 13:18:44.792243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.422 [2024-12-16 13:18:44.801885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.422 [2024-12-16 13:18:44.801907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:14:30.422 [2024-12-16 13:18:44.801916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.608 ms 00:14:30.422 [2024-12-16 13:18:44.801922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.422 [2024-12-16 13:18:44.802083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.422 [2024-12-16 13:18:44.802090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:14:30.422 [2024-12-16 13:18:44.802097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:14:30.422 [2024-12-16 13:18:44.802101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.422 [2024-12-16 13:18:44.837339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:30.422 [2024-12-16 13:18:44.837435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:30.422 [2024-12-16 13:18:44.837451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:30.422 [2024-12-16 13:18:44.837458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.422 [2024-12-16 13:18:44.837519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:30.422 [2024-12-16 13:18:44.837525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:30.422 [2024-12-16 13:18:44.837532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:30.422 [2024-12-16 13:18:44.837538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.422 [2024-12-16 13:18:44.837607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:30.422 [2024-12-16 13:18:44.837615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:30.422 [2024-12-16 13:18:44.837622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:30.422 [2024-12-16 13:18:44.837643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.422 [2024-12-16 13:18:44.837672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:30.422 [2024-12-16 13:18:44.837679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:14:30.422 [2024-12-16 13:18:44.837686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:30.422 [2024-12-16 13:18:44.837691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.422 [2024-12-16 13:18:44.903335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:30.422 [2024-12-16 13:18:44.903372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:30.422 [2024-12-16 13:18:44.903384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:30.422 [2024-12-16 13:18:44.903391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.422 [2024-12-16 13:18:44.925575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:30.422 [2024-12-16 13:18:44.925602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:30.422 [2024-12-16 13:18:44.925611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:30.422 [2024-12-16 13:18:44.925618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.422 [2024-12-16 13:18:44.925701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:30.422 [2024-12-16 13:18:44.925709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:30.422 [2024-12-16 13:18:44.925719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:30.422 [2024-12-16 13:18:44.925724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.422 [2024-12-16 13:18:44.925786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:30.422 [2024-12-16 13:18:44.925794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:30.422 [2024-12-16 13:18:44.925802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:30.422 [2024-12-16 13:18:44.925807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.422 [2024-12-16 13:18:44.925894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:30.422 [2024-12-16 13:18:44.925902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:30.422 [2024-12-16 13:18:44.925909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:30.422 [2024-12-16 13:18:44.925914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.422 [2024-12-16 13:18:44.925958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:30.422 [2024-12-16 13:18:44.925966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:14:30.422 [2024-12-16 13:18:44.925974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:30.422 [2024-12-16 13:18:44.925980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.422 [2024-12-16 13:18:44.926022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:30.422 [2024-12-16 13:18:44.926028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:30.422 [2024-12-16 13:18:44.926035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:30.422 [2024-12-16 13:18:44.926041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.422 [2024-12-16 13:18:44.926085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:30.422 [2024-12-16 13:18:44.926097] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:30.422 [2024-12-16 13:18:44.926105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:30.422 [2024-12-16 13:18:44.926110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.422 [2024-12-16 13:18:44.926241] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 246.770 ms, result 0 00:14:30.422 true 00:14:30.422 13:18:44 -- ftl/fio.sh@75 -- # killprocess 70465 00:14:30.422 13:18:44 -- common/autotest_common.sh@936 -- # '[' -z 70465 ']' 00:14:30.422 13:18:44 -- common/autotest_common.sh@940 -- # kill -0 70465 00:14:30.422 13:18:44 -- common/autotest_common.sh@941 -- # uname 00:14:30.422 13:18:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:30.422 13:18:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70465 00:14:30.422 13:18:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:30.422 13:18:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:30.422 killing process with pid 70465 00:14:30.422 13:18:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70465' 00:14:30.422 13:18:44 -- common/autotest_common.sh@955 -- # kill 70465 00:14:30.423 13:18:44 -- common/autotest_common.sh@960 -- # wait 70465 00:14:36.996 13:18:50 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:14:36.996 13:18:50 -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:36.996 13:18:50 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:14:36.996 13:18:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:36.996 13:18:50 -- common/autotest_common.sh@10 -- # set +x 00:14:36.996 13:18:50 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:36.996 13:18:50 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:36.996 13:18:50 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:36.996 13:18:50 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:36.996 13:18:50 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:36.996 13:18:50 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:36.996 13:18:50 -- common/autotest_common.sh@1330 -- # shift 00:14:36.996 13:18:50 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:36.996 13:18:50 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:36.996 13:18:50 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:36.996 13:18:50 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:36.996 13:18:50 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:36.996 13:18:50 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:36.996 13:18:50 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:36.996 13:18:50 -- common/autotest_common.sh@1336 -- # break 00:14:36.996 13:18:50 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:36.996 13:18:50 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:36.996 test: (g=0): rw=randwrite, bs=(R) 
68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:14:36.996 fio-3.35 00:14:36.996 Starting 1 thread 00:14:42.307 00:14:42.307 test: (groupid=0, jobs=1): err= 0: pid=70645: Mon Dec 16 13:18:56 2024 00:14:42.307 read: IOPS=868, BW=57.7MiB/s (60.5MB/s)(255MiB/4412msec) 00:14:42.307 slat (nsec): min=2888, max=30391, avg=4155.08, stdev=2050.16 00:14:42.307 clat (usec): min=240, max=4509, avg=518.75, stdev=191.95 00:14:42.307 lat (usec): min=243, max=4512, avg=522.91, stdev=192.39 00:14:42.307 clat percentiles (usec): 00:14:42.307 | 1.00th=[ 277], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 375], 00:14:42.307 | 30.00th=[ 433], 40.00th=[ 461], 50.00th=[ 498], 60.00th=[ 510], 00:14:42.307 | 70.00th=[ 545], 80.00th=[ 594], 90.00th=[ 824], 95.00th=[ 889], 00:14:42.307 | 99.00th=[ 1090], 99.50th=[ 1156], 99.90th=[ 1467], 99.95th=[ 2671], 00:14:42.307 | 99.99th=[ 4490] 00:14:42.307 write: IOPS=875, BW=58.1MiB/s (61.0MB/s)(256MiB/4405msec); 0 zone resets 00:14:42.307 slat (nsec): min=13231, max=74445, avg=19615.84, stdev=5628.53 00:14:42.307 clat (usec): min=279, max=2117, avg=590.55, stdev=206.57 00:14:42.307 lat (usec): min=300, max=2139, avg=610.16, stdev=207.95 00:14:42.307 clat percentiles (usec): 00:14:42.307 | 1.00th=[ 322], 5.00th=[ 334], 10.00th=[ 371], 20.00th=[ 437], 00:14:42.307 | 30.00th=[ 469], 40.00th=[ 537], 50.00th=[ 570], 60.00th=[ 594], 00:14:42.307 | 70.00th=[ 611], 80.00th=[ 668], 90.00th=[ 889], 95.00th=[ 971], 00:14:42.307 | 99.00th=[ 1254], 99.50th=[ 1647], 99.90th=[ 1975], 99.95th=[ 2073], 00:14:42.307 | 99.99th=[ 2114] 00:14:42.307 bw ( KiB/s): min=45424, max=70720, per=98.96%, avg=58905.00, stdev=8966.41, samples=8 00:14:42.307 iops : min= 668, max= 1040, avg=866.25, stdev=131.86, samples=8 00:14:42.307 lat (usec) : 250=0.01%, 500=42.85%, 750=42.15%, 1000=12.47% 00:14:42.307 lat (msec) : 2=2.45%, 4=0.05%, 10=0.01% 00:14:42.307 cpu : usr=99.37%, sys=0.09%, ctx=11, majf=0, minf=1318 00:14:42.307 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:42.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.307 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.307 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:42.307 00:14:42.307 Run status group 0 (all jobs): 00:14:42.307 READ: bw=57.7MiB/s (60.5MB/s), 57.7MiB/s-57.7MiB/s (60.5MB/s-60.5MB/s), io=255MiB (267MB), run=4412-4412msec 00:14:42.307 WRITE: bw=58.1MiB/s (61.0MB/s), 58.1MiB/s-58.1MiB/s (61.0MB/s-61.0MB/s), io=256MiB (269MB), run=4405-4405msec 00:14:42.878 ----------------------------------------------------- 00:14:42.878 Suppressions used: 00:14:42.878 count bytes template 00:14:42.878 1 5 /usr/src/fio/parse.c 00:14:42.878 1 8 libtcmalloc_minimal.so 00:14:42.878 1 904 libcrypto.so 00:14:42.878 ----------------------------------------------------- 00:14:42.878 00:14:43.139 13:18:57 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:14:43.139 13:18:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:43.139 13:18:57 -- common/autotest_common.sh@10 -- # set +x 00:14:43.139 13:18:57 -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:43.139 13:18:57 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:14:43.139 13:18:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:43.139 13:18:57 -- common/autotest_common.sh@10 -- # set +x 00:14:43.139 13:18:57 -- ftl/fio.sh@80 -- # fio_bdev 
/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:43.139 13:18:57 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:43.139 13:18:57 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:43.139 13:18:57 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:43.139 13:18:57 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:43.139 13:18:57 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:43.139 13:18:57 -- common/autotest_common.sh@1330 -- # shift 00:14:43.139 13:18:57 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:43.139 13:18:57 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:43.139 13:18:57 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:43.139 13:18:57 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:43.139 13:18:57 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:43.139 13:18:57 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:43.139 13:18:57 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:43.139 13:18:57 -- common/autotest_common.sh@1336 -- # break 00:14:43.139 13:18:57 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:43.139 13:18:57 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:43.400 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:43.400 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:43.400 fio-3.35 00:14:43.400 Starting 2 threads 00:15:15.470 00:15:15.470 first_half: (groupid=0, jobs=1): err= 0: pid=70752: Mon Dec 16 13:19:25 2024 00:15:15.470 read: IOPS=2434, BW=9736KiB/s (9970kB/s)(255MiB/26804msec) 00:15:15.470 slat (nsec): min=2917, max=30639, avg=3940.96, stdev=921.34 00:15:15.470 clat (usec): min=781, max=373991, avg=38267.30, stdev=30712.68 00:15:15.470 lat (usec): min=785, max=373996, avg=38271.24, stdev=30712.82 00:15:15.470 clat percentiles (msec): 00:15:15.471 | 1.00th=[ 15], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 30], 00:15:15.471 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:15:15.471 | 70.00th=[ 34], 80.00th=[ 37], 90.00th=[ 46], 95.00th=[ 67], 00:15:15.471 | 99.00th=[ 205], 99.50th=[ 262], 99.90th=[ 342], 99.95th=[ 351], 00:15:15.471 | 99.99th=[ 363] 00:15:15.471 write: IOPS=3075, BW=12.0MiB/s (12.6MB/s)(256MiB/21310msec); 0 zone resets 00:15:15.471 slat (usec): min=3, max=3907, avg= 5.48, stdev=19.33 00:15:15.471 clat (usec): min=365, max=134019, avg=14228.13, stdev=23944.86 00:15:15.471 lat (usec): min=381, max=134023, avg=14233.61, stdev=23944.82 00:15:15.471 clat percentiles (usec): 00:15:15.471 | 1.00th=[ 807], 5.00th=[ 1221], 10.00th=[ 1500], 20.00th=[ 1975], 00:15:15.471 | 30.00th=[ 2802], 40.00th=[ 4228], 50.00th=[ 5538], 60.00th=[ 7963], 00:15:15.471 | 70.00th=[ 11469], 80.00th=[ 18482], 90.00th=[ 27132], 95.00th=[ 84411], 00:15:15.471 | 99.00th=[115868], 99.50th=[119014], 99.90th=[127402], 99.95th=[129500], 00:15:15.471 | 99.99th=[132645] 00:15:15.471 bw ( KiB/s): min= 1616, max=39808, 
per=85.22%, avg=20164.92, stdev=10808.34, samples=26 00:15:15.471 iops : min= 404, max= 9952, avg=5041.23, stdev=2702.08, samples=26 00:15:15.471 lat (usec) : 500=0.01%, 750=0.33%, 1000=0.77% 00:15:15.471 lat (msec) : 2=9.14%, 4=9.45%, 10=13.69%, 20=8.08%, 50=51.60% 00:15:15.471 lat (msec) : 100=3.64%, 250=2.98%, 500=0.30% 00:15:15.471 cpu : usr=99.49%, sys=0.13%, ctx=51, majf=0, minf=5557 00:15:15.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:15.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.471 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:15.471 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:15.471 second_half: (groupid=0, jobs=1): err= 0: pid=70753: Mon Dec 16 13:19:25 2024 00:15:15.471 read: IOPS=2420, BW=9683KiB/s (9916kB/s)(255MiB/26969msec) 00:15:15.471 slat (nsec): min=2925, max=51568, avg=4801.62, stdev=1259.79 00:15:15.471 clat (usec): min=851, max=446954, avg=36860.35, stdev=30044.26 00:15:15.471 lat (usec): min=856, max=446957, avg=36865.15, stdev=30044.29 00:15:15.471 clat percentiles (msec): 00:15:15.471 | 1.00th=[ 11], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 30], 00:15:15.471 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:15:15.471 | 70.00th=[ 34], 80.00th=[ 37], 90.00th=[ 42], 95.00th=[ 52], 00:15:15.471 | 99.00th=[ 199], 99.50th=[ 266], 99.90th=[ 376], 99.95th=[ 426], 00:15:15.471 | 99.99th=[ 443] 00:15:15.471 write: IOPS=2957, BW=11.6MiB/s (12.1MB/s)(256MiB/22157msec); 0 zone resets 00:15:15.471 slat (usec): min=3, max=1685, avg= 6.05, stdev=14.84 00:15:15.471 clat (usec): min=340, max=134768, avg=15933.16, stdev=25266.77 00:15:15.471 lat (usec): min=347, max=134774, avg=15939.22, stdev=25266.67 00:15:15.471 clat percentiles (usec): 00:15:15.471 | 1.00th=[ 1090], 5.00th=[ 1434], 10.00th=[ 1680], 20.00th=[ 2180], 00:15:15.471 | 30.00th=[ 3458], 40.00th=[ 5407], 50.00th=[ 7308], 60.00th=[ 9765], 00:15:15.471 | 70.00th=[ 12387], 80.00th=[ 18482], 90.00th=[ 32113], 95.00th=[ 85459], 00:15:15.471 | 99.00th=[117965], 99.50th=[120062], 99.90th=[129500], 99.95th=[131597], 00:15:15.471 | 99.99th=[132645] 00:15:15.471 bw ( KiB/s): min= 3080, max=41480, per=82.07%, avg=19419.48, stdev=11524.51, samples=27 00:15:15.471 iops : min= 770, max=10370, avg=4854.85, stdev=2881.13, samples=27 00:15:15.471 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.24% 00:15:15.471 lat (msec) : 2=8.30%, 4=7.67%, 10=14.51%, 20=11.45%, 50=51.09% 00:15:15.471 lat (msec) : 100=3.35%, 250=3.03%, 500=0.31% 00:15:15.471 cpu : usr=99.45%, sys=0.16%, ctx=39, majf=0, minf=5562 00:15:15.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:15.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.471 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:15.471 issued rwts: total=65288,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:15.471 00:15:15.471 Run status group 0 (all jobs): 00:15:15.471 READ: bw=18.9MiB/s (19.8MB/s), 9683KiB/s-9736KiB/s (9916kB/s-9970kB/s), io=510MiB (535MB), run=26804-26969msec 00:15:15.471 WRITE: bw=23.1MiB/s (24.2MB/s), 11.6MiB/s-12.0MiB/s (12.1MB/s-12.6MB/s), io=512MiB (537MB), run=21310-22157msec 00:15:15.471 ----------------------------------------------------- 00:15:15.471 Suppressions used: 00:15:15.471 count bytes 
template 00:15:15.471 2 10 /usr/src/fio/parse.c 00:15:15.471 3 288 /usr/src/fio/iolog.c 00:15:15.471 1 8 libtcmalloc_minimal.so 00:15:15.471 1 904 libcrypto.so 00:15:15.471 ----------------------------------------------------- 00:15:15.471 00:15:15.471 13:19:27 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:15.471 13:19:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:15.471 13:19:27 -- common/autotest_common.sh@10 -- # set +x 00:15:15.471 13:19:27 -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:15.471 13:19:27 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:15.471 13:19:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:15.471 13:19:27 -- common/autotest_common.sh@10 -- # set +x 00:15:15.471 13:19:27 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:15.471 13:19:27 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:15.471 13:19:27 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:15.471 13:19:27 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:15.471 13:19:27 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:15.471 13:19:27 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:15.471 13:19:27 -- common/autotest_common.sh@1330 -- # shift 00:15:15.471 13:19:27 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:15.471 13:19:27 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:15.471 13:19:27 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:15.471 13:19:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:15.471 13:19:27 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:15.471 13:19:27 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:15.471 13:19:27 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:15.471 13:19:27 -- common/autotest_common.sh@1336 -- # break 00:15:15.471 13:19:27 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:15.471 13:19:27 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:15.471 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:15.471 fio-3.35 00:15:15.471 Starting 1 thread 00:15:27.666 00:15:27.666 test: (groupid=0, jobs=1): err= 0: pid=71094: Mon Dec 16 13:19:41 2024 00:15:27.666 read: IOPS=8312, BW=32.5MiB/s (34.0MB/s)(255MiB/7844msec) 00:15:27.666 slat (nsec): min=2883, max=15724, avg=3327.46, stdev=530.61 00:15:27.666 clat (usec): min=691, max=31979, avg=15392.20, stdev=1888.26 00:15:27.666 lat (usec): min=695, max=31982, avg=15395.52, stdev=1888.27 00:15:27.666 clat percentiles (usec): 00:15:27.666 | 1.00th=[12911], 5.00th=[13566], 10.00th=[13829], 20.00th=[14222], 00:15:27.666 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270], 00:15:27.666 | 70.00th=[15533], 80.00th=[16057], 90.00th=[17171], 95.00th=[19530], 00:15:27.666 | 99.00th=[22676], 99.50th=[24249], 99.90th=[28181], 99.95th=[29492], 00:15:27.666 | 99.99th=[31589] 00:15:27.667 write: IOPS=12.8k, BW=49.8MiB/s 
(52.2MB/s)(256MiB/5138msec); 0 zone resets 00:15:27.667 slat (usec): min=3, max=672, avg= 4.90, stdev= 4.78 00:15:27.667 clat (usec): min=511, max=57103, avg=9991.52, stdev=11759.96 00:15:27.667 lat (usec): min=515, max=57108, avg=9996.43, stdev=11759.92 00:15:27.667 clat percentiles (usec): 00:15:27.667 | 1.00th=[ 857], 5.00th=[ 1074], 10.00th=[ 1221], 20.00th=[ 1434], 00:15:27.667 | 30.00th=[ 1696], 40.00th=[ 2343], 50.00th=[ 6521], 60.00th=[ 7898], 00:15:27.667 | 70.00th=[ 9634], 80.00th=[12387], 90.00th=[34866], 95.00th=[36963], 00:15:27.667 | 99.00th=[40109], 99.50th=[41157], 99.90th=[45876], 99.95th=[47449], 00:15:27.667 | 99.99th=[52691] 00:15:27.667 bw ( KiB/s): min=11464, max=65520, per=93.42%, avg=47662.55, stdev=14241.30, samples=11 00:15:27.667 iops : min= 2866, max=16380, avg=11915.64, stdev=3560.32, samples=11 00:15:27.667 lat (usec) : 750=0.16%, 1000=1.39% 00:15:27.667 lat (msec) : 2=16.90%, 4=2.58%, 10=14.89%, 20=54.00%, 50=10.06% 00:15:27.667 lat (msec) : 100=0.01% 00:15:27.667 cpu : usr=99.41%, sys=0.17%, ctx=31, majf=0, minf=5567 00:15:27.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:27.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.667 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:27.667 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.667 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:27.667 00:15:27.667 Run status group 0 (all jobs): 00:15:27.667 READ: bw=32.5MiB/s (34.0MB/s), 32.5MiB/s-32.5MiB/s (34.0MB/s-34.0MB/s), io=255MiB (267MB), run=7844-7844msec 00:15:27.667 WRITE: bw=49.8MiB/s (52.2MB/s), 49.8MiB/s-49.8MiB/s (52.2MB/s-52.2MB/s), io=256MiB (268MB), run=5138-5138msec 00:15:29.050 ----------------------------------------------------- 00:15:29.050 Suppressions used: 00:15:29.050 count bytes template 00:15:29.050 1 5 /usr/src/fio/parse.c 00:15:29.050 2 192 /usr/src/fio/iolog.c 00:15:29.050 1 8 libtcmalloc_minimal.so 00:15:29.050 1 904 libcrypto.so 00:15:29.050 ----------------------------------------------------- 00:15:29.050 00:15:29.050 13:19:43 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:15:29.050 13:19:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:29.050 13:19:43 -- common/autotest_common.sh@10 -- # set +x 00:15:29.050 13:19:43 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:29.050 Remove shared memory files 00:15:29.050 13:19:43 -- ftl/fio.sh@85 -- # remove_shm 00:15:29.050 13:19:43 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:29.050 13:19:43 -- ftl/common.sh@205 -- # rm -f rm -f 00:15:29.050 13:19:43 -- ftl/common.sh@206 -- # rm -f rm -f 00:15:29.050 13:19:43 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid56179 /dev/shm/spdk_tgt_trace.pid69372 00:15:29.050 13:19:43 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:29.050 13:19:43 -- ftl/common.sh@209 -- # rm -f rm -f 00:15:29.050 00:15:29.050 real 1m5.915s 00:15:29.050 user 2m27.215s 00:15:29.050 sys 0m2.888s 00:15:29.050 ************************************ 00:15:29.050 END TEST ftl_fio_basic 00:15:29.050 ************************************ 00:15:29.050 13:19:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:29.050 13:19:43 -- common/autotest_common.sh@10 -- # set +x 00:15:29.050 13:19:43 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:15:29.050 13:19:43 
-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:29.050 13:19:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:29.050 13:19:43 -- common/autotest_common.sh@10 -- # set +x 00:15:29.050 ************************************ 00:15:29.050 START TEST ftl_bdevperf 00:15:29.050 ************************************ 00:15:29.050 13:19:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:15:29.050 * Looking for test storage... 00:15:29.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:29.050 13:19:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:29.050 13:19:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:29.050 13:19:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:29.050 13:19:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:29.050 13:19:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:29.050 13:19:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:29.050 13:19:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:29.050 13:19:43 -- scripts/common.sh@335 -- # IFS=.-: 00:15:29.050 13:19:43 -- scripts/common.sh@335 -- # read -ra ver1 00:15:29.050 13:19:43 -- scripts/common.sh@336 -- # IFS=.-: 00:15:29.050 13:19:43 -- scripts/common.sh@336 -- # read -ra ver2 00:15:29.050 13:19:43 -- scripts/common.sh@337 -- # local 'op=<' 00:15:29.050 13:19:43 -- scripts/common.sh@339 -- # ver1_l=2 00:15:29.050 13:19:43 -- scripts/common.sh@340 -- # ver2_l=1 00:15:29.050 13:19:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:29.050 13:19:43 -- scripts/common.sh@343 -- # case "$op" in 00:15:29.050 13:19:43 -- scripts/common.sh@344 -- # : 1 00:15:29.050 13:19:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:29.050 13:19:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:29.050 13:19:43 -- scripts/common.sh@364 -- # decimal 1 00:15:29.050 13:19:43 -- scripts/common.sh@352 -- # local d=1 00:15:29.050 13:19:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:29.050 13:19:43 -- scripts/common.sh@354 -- # echo 1 00:15:29.050 13:19:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:29.050 13:19:43 -- scripts/common.sh@365 -- # decimal 2 00:15:29.050 13:19:43 -- scripts/common.sh@352 -- # local d=2 00:15:29.050 13:19:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:29.050 13:19:43 -- scripts/common.sh@354 -- # echo 2 00:15:29.050 13:19:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:29.050 13:19:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:29.050 13:19:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:29.050 13:19:43 -- scripts/common.sh@367 -- # return 0 00:15:29.050 13:19:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:29.050 13:19:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:29.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.050 --rc genhtml_branch_coverage=1 00:15:29.050 --rc genhtml_function_coverage=1 00:15:29.050 --rc genhtml_legend=1 00:15:29.050 --rc geninfo_all_blocks=1 00:15:29.050 --rc geninfo_unexecuted_blocks=1 00:15:29.050 00:15:29.050 ' 00:15:29.050 13:19:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:29.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.050 --rc genhtml_branch_coverage=1 00:15:29.050 --rc genhtml_function_coverage=1 00:15:29.050 --rc genhtml_legend=1 00:15:29.050 --rc geninfo_all_blocks=1 00:15:29.050 --rc geninfo_unexecuted_blocks=1 00:15:29.050 00:15:29.050 ' 00:15:29.050 13:19:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:29.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.050 --rc genhtml_branch_coverage=1 00:15:29.050 --rc genhtml_function_coverage=1 00:15:29.050 --rc genhtml_legend=1 00:15:29.050 --rc geninfo_all_blocks=1 00:15:29.050 --rc geninfo_unexecuted_blocks=1 00:15:29.050 00:15:29.050 ' 00:15:29.050 13:19:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:29.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.050 --rc genhtml_branch_coverage=1 00:15:29.050 --rc genhtml_function_coverage=1 00:15:29.050 --rc genhtml_legend=1 00:15:29.050 --rc geninfo_all_blocks=1 00:15:29.050 --rc geninfo_unexecuted_blocks=1 00:15:29.050 00:15:29.050 ' 00:15:29.050 13:19:43 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:29.050 13:19:43 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:15:29.050 13:19:43 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:29.050 13:19:43 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:29.050 13:19:43 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:29.050 13:19:43 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:29.050 13:19:43 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:29.050 13:19:43 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:29.050 13:19:43 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:29.050 13:19:43 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:29.050 13:19:43 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:29.050 13:19:43 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:29.050 13:19:43 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:29.050 13:19:43 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:29.050 13:19:43 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:29.050 13:19:43 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:29.050 13:19:43 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:29.050 13:19:43 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:29.050 13:19:43 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:29.050 13:19:43 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:29.050 13:19:43 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:29.050 13:19:43 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:29.050 13:19:43 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:29.050 13:19:43 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:29.050 13:19:43 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:29.050 13:19:43 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:29.050 13:19:43 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:29.051 13:19:43 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:29.051 13:19:43 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:29.051 13:19:43 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:15:29.051 13:19:43 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0 00:15:29.051 13:19:43 -- ftl/bdevperf.sh@13 -- # use_append= 00:15:29.051 13:19:43 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:29.051 13:19:43 -- ftl/bdevperf.sh@15 -- # timeout=240 00:15:29.051 13:19:43 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:15:29.051 13:19:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:29.051 13:19:43 -- common/autotest_common.sh@10 -- # set +x 00:15:29.051 13:19:43 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=71333 00:15:29.051 13:19:43 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:15:29.051 13:19:43 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:15:29.051 13:19:43 -- ftl/bdevperf.sh@22 -- # waitforlisten 71333 00:15:29.051 13:19:43 -- common/autotest_common.sh@829 -- # '[' -z 71333 ']' 00:15:29.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
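The '-z' flag above starts bdevperf with no workload and leaves it waiting for RPC commands, so the harness must block until the UNIX domain socket answers before it can configure ftl0. A minimal sketch of that polling idea follows; the rpc_get_methods probe and the retry/sleep values are illustrative assumptions, not the exact waitforlisten helper traced here.

# Sketch only: poll the RPC socket of an SPDK app started with -z until it
# responds. Probe method, retry count and sleep interval are assumptions.
wait_for_rpc_sock() {
    local sock=${1:-/var/tmp/spdk.sock} retries=100
    while (( retries-- > 0 )); do
        # rpc.py exits non-zero while nothing is listening on the socket yet
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}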
00:15:29.051 13:19:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.051 13:19:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:29.051 13:19:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.051 13:19:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:29.051 13:19:43 -- common/autotest_common.sh@10 -- # set +x 00:15:29.310 [2024-12-16 13:19:43.627052] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:29.310 [2024-12-16 13:19:43.627304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71333 ] 00:15:29.310 [2024-12-16 13:19:43.778316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.568 [2024-12-16 13:19:43.918059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.134 13:19:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.134 13:19:44 -- common/autotest_common.sh@862 -- # return 0 00:15:30.134 13:19:44 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:30.134 13:19:44 -- ftl/common.sh@54 -- # local name=nvme0 00:15:30.134 13:19:44 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:30.134 13:19:44 -- ftl/common.sh@56 -- # local size=103424 00:15:30.134 13:19:44 -- ftl/common.sh@59 -- # local base_bdev 00:15:30.134 13:19:44 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:30.134 13:19:44 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:30.134 13:19:44 -- ftl/common.sh@62 -- # local base_size 00:15:30.134 13:19:44 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:30.134 13:19:44 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:15:30.134 13:19:44 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:30.134 13:19:44 -- common/autotest_common.sh@1369 -- # local bs 00:15:30.134 13:19:44 -- common/autotest_common.sh@1370 -- # local nb 00:15:30.134 13:19:44 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:30.393 13:19:44 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:30.393 { 00:15:30.393 "name": "nvme0n1", 00:15:30.393 "aliases": [ 00:15:30.393 "6cd18e00-4ecf-4ae3-be25-f43b480df8f3" 00:15:30.393 ], 00:15:30.393 "product_name": "NVMe disk", 00:15:30.393 "block_size": 4096, 00:15:30.393 "num_blocks": 1310720, 00:15:30.393 "uuid": "6cd18e00-4ecf-4ae3-be25-f43b480df8f3", 00:15:30.393 "assigned_rate_limits": { 00:15:30.393 "rw_ios_per_sec": 0, 00:15:30.393 "rw_mbytes_per_sec": 0, 00:15:30.393 "r_mbytes_per_sec": 0, 00:15:30.393 "w_mbytes_per_sec": 0 00:15:30.393 }, 00:15:30.393 "claimed": true, 00:15:30.393 "claim_type": "read_many_write_one", 00:15:30.393 "zoned": false, 00:15:30.393 "supported_io_types": { 00:15:30.393 "read": true, 00:15:30.393 "write": true, 00:15:30.393 "unmap": true, 00:15:30.393 "write_zeroes": true, 00:15:30.393 "flush": true, 00:15:30.393 "reset": true, 00:15:30.393 "compare": true, 00:15:30.393 "compare_and_write": false, 00:15:30.393 "abort": true, 00:15:30.393 "nvme_admin": true, 00:15:30.393 "nvme_io": true 00:15:30.393 }, 00:15:30.393 "driver_specific": { 00:15:30.393 "nvme": [ 00:15:30.393 { 00:15:30.393 "pci_address": 
"0000:00:07.0", 00:15:30.393 "trid": { 00:15:30.393 "trtype": "PCIe", 00:15:30.393 "traddr": "0000:00:07.0" 00:15:30.393 }, 00:15:30.393 "ctrlr_data": { 00:15:30.393 "cntlid": 0, 00:15:30.393 "vendor_id": "0x1b36", 00:15:30.393 "model_number": "QEMU NVMe Ctrl", 00:15:30.393 "serial_number": "12341", 00:15:30.393 "firmware_revision": "8.0.0", 00:15:30.393 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:30.393 "oacs": { 00:15:30.393 "security": 0, 00:15:30.393 "format": 1, 00:15:30.393 "firmware": 0, 00:15:30.393 "ns_manage": 1 00:15:30.393 }, 00:15:30.393 "multi_ctrlr": false, 00:15:30.393 "ana_reporting": false 00:15:30.393 }, 00:15:30.393 "vs": { 00:15:30.393 "nvme_version": "1.4" 00:15:30.393 }, 00:15:30.393 "ns_data": { 00:15:30.393 "id": 1, 00:15:30.393 "can_share": false 00:15:30.393 } 00:15:30.393 } 00:15:30.393 ], 00:15:30.393 "mp_policy": "active_passive" 00:15:30.393 } 00:15:30.393 } 00:15:30.393 ]' 00:15:30.393 13:19:44 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:30.393 13:19:44 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:30.393 13:19:44 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:30.393 13:19:44 -- common/autotest_common.sh@1373 -- # nb=1310720 00:15:30.393 13:19:44 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:15:30.393 13:19:44 -- common/autotest_common.sh@1377 -- # echo 5120 00:15:30.393 13:19:44 -- ftl/common.sh@63 -- # base_size=5120 00:15:30.393 13:19:44 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:30.393 13:19:44 -- ftl/common.sh@67 -- # clear_lvols 00:15:30.393 13:19:44 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:30.393 13:19:44 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:30.651 13:19:45 -- ftl/common.sh@28 -- # stores=6a6d8365-2251-43d2-9551-6998068c2798 00:15:30.651 13:19:45 -- ftl/common.sh@29 -- # for lvs in $stores 00:15:30.651 13:19:45 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6a6d8365-2251-43d2-9551-6998068c2798 00:15:30.908 13:19:45 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:30.908 13:19:45 -- ftl/common.sh@68 -- # lvs=1bf5ab2e-fe97-4beb-966f-89384d5f82a8 00:15:30.908 13:19:45 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1bf5ab2e-fe97-4beb-966f-89384d5f82a8 00:15:31.166 13:19:45 -- ftl/bdevperf.sh@23 -- # split_bdev=020307fe-90ce-4430-9c75-fcc8866b905e 00:15:31.166 13:19:45 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 020307fe-90ce-4430-9c75-fcc8866b905e 00:15:31.166 13:19:45 -- ftl/common.sh@35 -- # local name=nvc0 00:15:31.166 13:19:45 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:15:31.166 13:19:45 -- ftl/common.sh@37 -- # local base_bdev=020307fe-90ce-4430-9c75-fcc8866b905e 00:15:31.166 13:19:45 -- ftl/common.sh@38 -- # local cache_size= 00:15:31.166 13:19:45 -- ftl/common.sh@41 -- # get_bdev_size 020307fe-90ce-4430-9c75-fcc8866b905e 00:15:31.166 13:19:45 -- common/autotest_common.sh@1367 -- # local bdev_name=020307fe-90ce-4430-9c75-fcc8866b905e 00:15:31.166 13:19:45 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:31.166 13:19:45 -- common/autotest_common.sh@1369 -- # local bs 00:15:31.166 13:19:45 -- common/autotest_common.sh@1370 -- # local nb 00:15:31.166 13:19:45 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 020307fe-90ce-4430-9c75-fcc8866b905e 
00:15:31.424 13:19:45 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:31.424 { 00:15:31.424 "name": "020307fe-90ce-4430-9c75-fcc8866b905e", 00:15:31.424 "aliases": [ 00:15:31.424 "lvs/nvme0n1p0" 00:15:31.424 ], 00:15:31.424 "product_name": "Logical Volume", 00:15:31.424 "block_size": 4096, 00:15:31.424 "num_blocks": 26476544, 00:15:31.424 "uuid": "020307fe-90ce-4430-9c75-fcc8866b905e", 00:15:31.424 "assigned_rate_limits": { 00:15:31.424 "rw_ios_per_sec": 0, 00:15:31.424 "rw_mbytes_per_sec": 0, 00:15:31.424 "r_mbytes_per_sec": 0, 00:15:31.424 "w_mbytes_per_sec": 0 00:15:31.424 }, 00:15:31.424 "claimed": false, 00:15:31.424 "zoned": false, 00:15:31.424 "supported_io_types": { 00:15:31.424 "read": true, 00:15:31.424 "write": true, 00:15:31.424 "unmap": true, 00:15:31.424 "write_zeroes": true, 00:15:31.424 "flush": false, 00:15:31.424 "reset": true, 00:15:31.424 "compare": false, 00:15:31.424 "compare_and_write": false, 00:15:31.424 "abort": false, 00:15:31.424 "nvme_admin": false, 00:15:31.424 "nvme_io": false 00:15:31.424 }, 00:15:31.424 "driver_specific": { 00:15:31.424 "lvol": { 00:15:31.424 "lvol_store_uuid": "1bf5ab2e-fe97-4beb-966f-89384d5f82a8", 00:15:31.424 "base_bdev": "nvme0n1", 00:15:31.424 "thin_provision": true, 00:15:31.424 "snapshot": false, 00:15:31.424 "clone": false, 00:15:31.424 "esnap_clone": false 00:15:31.424 } 00:15:31.424 } 00:15:31.424 } 00:15:31.424 ]' 00:15:31.424 13:19:45 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:31.424 13:19:45 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:31.424 13:19:45 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:31.424 13:19:45 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:31.424 13:19:45 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:31.424 13:19:45 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:31.424 13:19:45 -- ftl/common.sh@41 -- # local base_size=5171 00:15:31.424 13:19:45 -- ftl/common.sh@44 -- # local nvc_bdev 00:15:31.424 13:19:45 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:15:31.683 13:19:46 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:31.683 13:19:46 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:31.683 13:19:46 -- ftl/common.sh@48 -- # get_bdev_size 020307fe-90ce-4430-9c75-fcc8866b905e 00:15:31.683 13:19:46 -- common/autotest_common.sh@1367 -- # local bdev_name=020307fe-90ce-4430-9c75-fcc8866b905e 00:15:31.683 13:19:46 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:31.683 13:19:46 -- common/autotest_common.sh@1369 -- # local bs 00:15:31.683 13:19:46 -- common/autotest_common.sh@1370 -- # local nb 00:15:31.683 13:19:46 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 020307fe-90ce-4430-9c75-fcc8866b905e 00:15:31.941 13:19:46 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:31.941 { 00:15:31.941 "name": "020307fe-90ce-4430-9c75-fcc8866b905e", 00:15:31.941 "aliases": [ 00:15:31.941 "lvs/nvme0n1p0" 00:15:31.941 ], 00:15:31.941 "product_name": "Logical Volume", 00:15:31.941 "block_size": 4096, 00:15:31.941 "num_blocks": 26476544, 00:15:31.941 "uuid": "020307fe-90ce-4430-9c75-fcc8866b905e", 00:15:31.941 "assigned_rate_limits": { 00:15:31.941 "rw_ios_per_sec": 0, 00:15:31.941 "rw_mbytes_per_sec": 0, 00:15:31.941 "r_mbytes_per_sec": 0, 00:15:31.941 "w_mbytes_per_sec": 0 00:15:31.941 }, 00:15:31.941 "claimed": false, 00:15:31.941 "zoned": false, 00:15:31.941 "supported_io_types": { 
00:15:31.941 "read": true, 00:15:31.941 "write": true, 00:15:31.941 "unmap": true, 00:15:31.941 "write_zeroes": true, 00:15:31.941 "flush": false, 00:15:31.941 "reset": true, 00:15:31.941 "compare": false, 00:15:31.941 "compare_and_write": false, 00:15:31.941 "abort": false, 00:15:31.941 "nvme_admin": false, 00:15:31.941 "nvme_io": false 00:15:31.941 }, 00:15:31.941 "driver_specific": { 00:15:31.941 "lvol": { 00:15:31.941 "lvol_store_uuid": "1bf5ab2e-fe97-4beb-966f-89384d5f82a8", 00:15:31.941 "base_bdev": "nvme0n1", 00:15:31.941 "thin_provision": true, 00:15:31.941 "snapshot": false, 00:15:31.941 "clone": false, 00:15:31.941 "esnap_clone": false 00:15:31.941 } 00:15:31.941 } 00:15:31.941 } 00:15:31.941 ]' 00:15:31.941 13:19:46 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:31.941 13:19:46 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:31.941 13:19:46 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:31.941 13:19:46 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:31.941 13:19:46 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:31.941 13:19:46 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:31.941 13:19:46 -- ftl/common.sh@48 -- # cache_size=5171 00:15:31.941 13:19:46 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:32.199 13:19:46 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:15:32.199 13:19:46 -- ftl/bdevperf.sh@26 -- # get_bdev_size 020307fe-90ce-4430-9c75-fcc8866b905e 00:15:32.199 13:19:46 -- common/autotest_common.sh@1367 -- # local bdev_name=020307fe-90ce-4430-9c75-fcc8866b905e 00:15:32.199 13:19:46 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:32.199 13:19:46 -- common/autotest_common.sh@1369 -- # local bs 00:15:32.199 13:19:46 -- common/autotest_common.sh@1370 -- # local nb 00:15:32.199 13:19:46 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 020307fe-90ce-4430-9c75-fcc8866b905e 00:15:32.199 13:19:46 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:32.199 { 00:15:32.199 "name": "020307fe-90ce-4430-9c75-fcc8866b905e", 00:15:32.199 "aliases": [ 00:15:32.199 "lvs/nvme0n1p0" 00:15:32.199 ], 00:15:32.199 "product_name": "Logical Volume", 00:15:32.199 "block_size": 4096, 00:15:32.199 "num_blocks": 26476544, 00:15:32.199 "uuid": "020307fe-90ce-4430-9c75-fcc8866b905e", 00:15:32.199 "assigned_rate_limits": { 00:15:32.199 "rw_ios_per_sec": 0, 00:15:32.199 "rw_mbytes_per_sec": 0, 00:15:32.199 "r_mbytes_per_sec": 0, 00:15:32.199 "w_mbytes_per_sec": 0 00:15:32.199 }, 00:15:32.199 "claimed": false, 00:15:32.199 "zoned": false, 00:15:32.199 "supported_io_types": { 00:15:32.199 "read": true, 00:15:32.199 "write": true, 00:15:32.199 "unmap": true, 00:15:32.199 "write_zeroes": true, 00:15:32.199 "flush": false, 00:15:32.199 "reset": true, 00:15:32.199 "compare": false, 00:15:32.199 "compare_and_write": false, 00:15:32.199 "abort": false, 00:15:32.199 "nvme_admin": false, 00:15:32.199 "nvme_io": false 00:15:32.199 }, 00:15:32.199 "driver_specific": { 00:15:32.199 "lvol": { 00:15:32.199 "lvol_store_uuid": "1bf5ab2e-fe97-4beb-966f-89384d5f82a8", 00:15:32.199 "base_bdev": "nvme0n1", 00:15:32.199 "thin_provision": true, 00:15:32.199 "snapshot": false, 00:15:32.199 "clone": false, 00:15:32.199 "esnap_clone": false 00:15:32.199 } 00:15:32.199 } 00:15:32.199 } 00:15:32.199 ]' 00:15:32.459 13:19:46 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:32.459 13:19:46 -- 
common/autotest_common.sh@1372 -- # bs=4096 00:15:32.459 13:19:46 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:32.459 13:19:46 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:32.459 13:19:46 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:32.459 13:19:46 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:32.459 13:19:46 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:15:32.459 13:19:46 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 020307fe-90ce-4430-9c75-fcc8866b905e -c nvc0n1p0 --l2p_dram_limit 20 00:15:32.459 [2024-12-16 13:19:47.003505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.459 [2024-12-16 13:19:47.003545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:32.459 [2024-12-16 13:19:47.003557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:32.459 [2024-12-16 13:19:47.003563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.459 [2024-12-16 13:19:47.003596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.459 [2024-12-16 13:19:47.003603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:32.459 [2024-12-16 13:19:47.003611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:15:32.459 [2024-12-16 13:19:47.003617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.459 [2024-12-16 13:19:47.003643] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:32.459 [2024-12-16 13:19:47.004206] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:32.459 [2024-12-16 13:19:47.004226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.459 [2024-12-16 13:19:47.004232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:32.459 [2024-12-16 13:19:47.004240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:15:32.459 [2024-12-16 13:19:47.004246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.459 [2024-12-16 13:19:47.004312] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID dc868303-a255-4efc-afb3-6df212e7a45d 00:15:32.459 [2024-12-16 13:19:47.005262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.459 [2024-12-16 13:19:47.005284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:32.459 [2024-12-16 13:19:47.005291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:15:32.459 [2024-12-16 13:19:47.005298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.459 [2024-12-16 13:19:47.010056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.459 [2024-12-16 13:19:47.010088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:32.459 [2024-12-16 13:19:47.010095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.732 ms 00:15:32.459 [2024-12-16 13:19:47.010102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.459 [2024-12-16 13:19:47.010166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.459 [2024-12-16 13:19:47.010174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize bands 00:15:32.459 [2024-12-16 13:19:47.010181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:15:32.459 [2024-12-16 13:19:47.010190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.459 [2024-12-16 13:19:47.010226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.459 [2024-12-16 13:19:47.010234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:32.459 [2024-12-16 13:19:47.010242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:32.459 [2024-12-16 13:19:47.010248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.459 [2024-12-16 13:19:47.010263] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:32.459 [2024-12-16 13:19:47.013214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.459 [2024-12-16 13:19:47.013237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:32.459 [2024-12-16 13:19:47.013246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.952 ms 00:15:32.459 [2024-12-16 13:19:47.013251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.459 [2024-12-16 13:19:47.013277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.459 [2024-12-16 13:19:47.013283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:32.459 [2024-12-16 13:19:47.013290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:15:32.459 [2024-12-16 13:19:47.013295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.459 [2024-12-16 13:19:47.013307] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:32.459 [2024-12-16 13:19:47.013397] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:32.459 [2024-12-16 13:19:47.013409] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:32.459 [2024-12-16 13:19:47.013417] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:32.459 [2024-12-16 13:19:47.013426] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:32.459 [2024-12-16 13:19:47.013433] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:32.459 [2024-12-16 13:19:47.013440] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:32.459 [2024-12-16 13:19:47.013445] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:32.459 [2024-12-16 13:19:47.013455] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:32.459 [2024-12-16 13:19:47.013461] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:32.459 [2024-12-16 13:19:47.013468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.459 [2024-12-16 13:19:47.013474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:32.459 [2024-12-16 13:19:47.013480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:15:32.459 [2024-12-16 13:19:47.013486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
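The layout figures logged here are internally consistent and can be cross-checked against values that appeared earlier in the trace: the base device capacity follows from the get_bdev_size output (26476544 blocks of 4096 B), and the l2p region size printed in the dump below follows from the entry count and the 4-byte L2P address size. A quick shell check, using only numbers taken from this log:

# Cross-check of the logged FTL layout (all inputs appear in the trace)
echo $(( 26476544 * 4096 / 1024 / 1024 ))  # 103424 -> "Base device capacity: 103424.00 MiB"
echo $(( 20971520 * 4 / 1024 / 1024 ))     # 80     -> "Region l2p ... blocks: 80.00 MiB"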
00:15:32.459 [2024-12-16 13:19:47.013532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.460 [2024-12-16 13:19:47.013538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:32.460 [2024-12-16 13:19:47.013544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:15:32.460 [2024-12-16 13:19:47.013550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.460 [2024-12-16 13:19:47.013604] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:32.460 [2024-12-16 13:19:47.013610] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:32.460 [2024-12-16 13:19:47.013618] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:32.460 [2024-12-16 13:19:47.013639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:32.460 [2024-12-16 13:19:47.013647] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:32.460 [2024-12-16 13:19:47.013652] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:32.460 [2024-12-16 13:19:47.013658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:32.460 [2024-12-16 13:19:47.013663] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:32.460 [2024-12-16 13:19:47.013670] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:32.460 [2024-12-16 13:19:47.013675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:32.460 [2024-12-16 13:19:47.013682] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:32.460 [2024-12-16 13:19:47.013687] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:32.460 [2024-12-16 13:19:47.013694] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:32.460 [2024-12-16 13:19:47.013699] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:32.460 [2024-12-16 13:19:47.013706] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:15:32.460 [2024-12-16 13:19:47.013710] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:32.460 [2024-12-16 13:19:47.013717] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:32.460 [2024-12-16 13:19:47.013722] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:15:32.460 [2024-12-16 13:19:47.013730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:32.460 [2024-12-16 13:19:47.013736] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:32.460 [2024-12-16 13:19:47.013742] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:15:32.460 [2024-12-16 13:19:47.013747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:32.460 [2024-12-16 13:19:47.013753] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:32.460 [2024-12-16 13:19:47.013758] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:32.460 [2024-12-16 13:19:47.013764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:32.460 [2024-12-16 13:19:47.013769] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:32.460 [2024-12-16 13:19:47.013776] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:15:32.460 [2024-12-16 13:19:47.013780] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 4.00 MiB 00:15:32.460 [2024-12-16 13:19:47.013787] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:32.460 [2024-12-16 13:19:47.013791] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:32.460 [2024-12-16 13:19:47.013797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:32.460 [2024-12-16 13:19:47.013802] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:32.460 [2024-12-16 13:19:47.013809] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:15:32.460 [2024-12-16 13:19:47.013814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:32.460 [2024-12-16 13:19:47.013820] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:32.460 [2024-12-16 13:19:47.013825] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:32.460 [2024-12-16 13:19:47.013832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:32.460 [2024-12-16 13:19:47.013837] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:32.460 [2024-12-16 13:19:47.013843] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:15:32.460 [2024-12-16 13:19:47.013848] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:32.460 [2024-12-16 13:19:47.013854] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:32.460 [2024-12-16 13:19:47.013859] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:32.460 [2024-12-16 13:19:47.013866] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:32.460 [2024-12-16 13:19:47.013871] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:32.460 [2024-12-16 13:19:47.013878] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:32.460 [2024-12-16 13:19:47.013883] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:32.460 [2024-12-16 13:19:47.013889] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:32.460 [2024-12-16 13:19:47.013895] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:32.460 [2024-12-16 13:19:47.013902] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:32.460 [2024-12-16 13:19:47.013907] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:32.460 [2024-12-16 13:19:47.013914] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:32.460 [2024-12-16 13:19:47.013921] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:32.460 [2024-12-16 13:19:47.013930] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:32.460 [2024-12-16 13:19:47.013936] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:15:32.460 [2024-12-16 13:19:47.013943] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:15:32.460 [2024-12-16 13:19:47.013948] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:15:32.460 [2024-12-16 13:19:47.013955] 
upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:15:32.460 [2024-12-16 13:19:47.013959] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:15:32.460 [2024-12-16 13:19:47.013966] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:15:32.460 [2024-12-16 13:19:47.013971] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:15:32.460 [2024-12-16 13:19:47.013978] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:15:32.460 [2024-12-16 13:19:47.013983] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:15:32.460 [2024-12-16 13:19:47.013990] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:15:32.460 [2024-12-16 13:19:47.013995] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:15:32.460 [2024-12-16 13:19:47.014003] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:15:32.460 [2024-12-16 13:19:47.014009] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:32.460 [2024-12-16 13:19:47.014016] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:32.460 [2024-12-16 13:19:47.014022] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:32.460 [2024-12-16 13:19:47.014029] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:32.460 [2024-12-16 13:19:47.014034] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:32.460 [2024-12-16 13:19:47.014041] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:32.460 [2024-12-16 13:19:47.014046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.460 [2024-12-16 13:19:47.014052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:32.460 [2024-12-16 13:19:47.014058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.478 ms 00:15:32.460 [2024-12-16 13:19:47.014065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.460 [2024-12-16 13:19:47.026238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.460 [2024-12-16 13:19:47.026339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:32.460 [2024-12-16 13:19:47.026380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.149 ms 00:15:32.461 [2024-12-16 13:19:47.026399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.461 [2024-12-16 13:19:47.026475] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:15:32.461 [2024-12-16 13:19:47.026494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:32.461 [2024-12-16 13:19:47.026509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:15:32.461 [2024-12-16 13:19:47.026525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.719 [2024-12-16 13:19:47.067586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.719 [2024-12-16 13:19:47.067715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:32.719 [2024-12-16 13:19:47.067764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.018 ms 00:15:32.719 [2024-12-16 13:19:47.067785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.719 [2024-12-16 13:19:47.067822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.719 [2024-12-16 13:19:47.067847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:32.719 [2024-12-16 13:19:47.067863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:15:32.719 [2024-12-16 13:19:47.067879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.719 [2024-12-16 13:19:47.068234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.719 [2024-12-16 13:19:47.068271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:32.719 [2024-12-16 13:19:47.068287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:15:32.719 [2024-12-16 13:19:47.068303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.719 [2024-12-16 13:19:47.068396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.719 [2024-12-16 13:19:47.068466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:32.719 [2024-12-16 13:19:47.068486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:15:32.719 [2024-12-16 13:19:47.068502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.719 [2024-12-16 13:19:47.080207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.720 [2024-12-16 13:19:47.080290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:32.720 [2024-12-16 13:19:47.080330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.682 ms 00:15:32.720 [2024-12-16 13:19:47.080349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.720 [2024-12-16 13:19:47.089470] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:15:32.720 [2024-12-16 13:19:47.093802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.720 [2024-12-16 13:19:47.093885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:32.720 [2024-12-16 13:19:47.093926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.385 ms 00:15:32.720 [2024-12-16 13:19:47.093943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.720 [2024-12-16 13:19:47.177135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.720 [2024-12-16 13:19:47.177253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:32.720 [2024-12-16 13:19:47.177299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.160 ms 
00:15:32.720 [2024-12-16 13:19:47.177316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.720 [2024-12-16 13:19:47.177353] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:15:32.720 [2024-12-16 13:19:47.177379] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:15:36.920 [2024-12-16 13:19:50.874425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.920 [2024-12-16 13:19:50.874622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:36.920 [2024-12-16 13:19:50.874690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3697.053 ms 00:15:36.920 [2024-12-16 13:19:50.874709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.920 [2024-12-16 13:19:50.874865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.920 [2024-12-16 13:19:50.874885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:36.920 [2024-12-16 13:19:50.874902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:15:36.920 [2024-12-16 13:19:50.874917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.920 [2024-12-16 13:19:50.893259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.920 [2024-12-16 13:19:50.893362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:36.920 [2024-12-16 13:19:50.893411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.299 ms 00:15:36.920 [2024-12-16 13:19:50.893432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.920 [2024-12-16 13:19:50.910941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.920 [2024-12-16 13:19:50.911031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:36.920 [2024-12-16 13:19:50.911080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.475 ms 00:15:36.920 [2024-12-16 13:19:50.911086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.920 [2024-12-16 13:19:50.911322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.920 [2024-12-16 13:19:50.911331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:36.920 [2024-12-16 13:19:50.911340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:15:36.920 [2024-12-16 13:19:50.911345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.920 [2024-12-16 13:19:50.961614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.921 [2024-12-16 13:19:50.961650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:36.921 [2024-12-16 13:19:50.961661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.244 ms 00:15:36.921 [2024-12-16 13:19:50.961667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.921 [2024-12-16 13:19:50.980829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.921 [2024-12-16 13:19:50.980855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:36.921 [2024-12-16 13:19:50.980865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.132 ms 00:15:36.921 [2024-12-16 13:19:50.980871] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.921 [2024-12-16 13:19:50.981840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.921 [2024-12-16 13:19:50.981867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:36.921 [2024-12-16 13:19:50.981877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.943 ms 00:15:36.921 [2024-12-16 13:19:50.981885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.921 [2024-12-16 13:19:50.999902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.921 [2024-12-16 13:19:51.000003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:36.921 [2024-12-16 13:19:51.000018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.984 ms 00:15:36.921 [2024-12-16 13:19:51.000023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.921 [2024-12-16 13:19:51.000049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.921 [2024-12-16 13:19:51.000056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:36.921 [2024-12-16 13:19:51.000065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:36.921 [2024-12-16 13:19:51.000071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.921 [2024-12-16 13:19:51.000136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.921 [2024-12-16 13:19:51.000143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:36.921 [2024-12-16 13:19:51.000151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:15:36.921 [2024-12-16 13:19:51.000156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.921 [2024-12-16 13:19:51.000817] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3996.966 ms, result 0 00:15:36.921 { 00:15:36.921 "name": "ftl0", 00:15:36.921 "uuid": "dc868303-a255-4efc-afb3-6df212e7a45d" 00:15:36.921 } 00:15:36.921 13:19:51 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:15:36.921 13:19:51 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:15:36.921 13:19:51 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:15:36.921 13:19:51 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:15:36.921 [2024-12-16 13:19:51.305013] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:36.921 I/O size of 69632 is greater than zero copy threshold (65536). 00:15:36.921 Zero copy mechanism will not be used. 00:15:36.921 Running I/O for 4 seconds... 
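The RPC pipeline at ftl/bdevperf.sh@29 above is a small sanity check: bdev_ftl_get_stats must report the freshly created bdev before any workload is started. Spelled out as a standalone command (paths as in this run; a sketch, not the exact script):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# exits non-zero unless the stats RPC reports a bdev named ftl0
"$rpc" bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0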
00:15:41.196 00:15:41.196 Latency(us)
00:15:41.196 [2024-12-16T13:19:55.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:41.196 [2024-12-16T13:19:55.770Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:15:41.196 ftl0 : 4.00 944.35 62.71 0.00 0.00 1119.82 185.90 1726.62
00:15:41.196 [2024-12-16T13:19:55.770Z] ===================================================================================================================
00:15:41.196 [2024-12-16T13:19:55.770Z] Total : 944.35 62.71 0.00 0.00 1119.82 185.90 1726.62
00:15:41.196 0
00:15:41.196 [2024-12-16 13:19:55.311470] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:15:41.196 13:19:55 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-12-16 13:19:55.408192] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
00:15:45.411 00:15:45.411 Latency(us)
[2024-12-16T13:19:59.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-16T13:19:59.985Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:15:45.411 ftl0 : 4.03 5811.73 22.70 0.00 0.00 21940.38 321.38 45371.08
[2024-12-16T13:19:59.985Z] ===================================================================================================================
00:15:45.411 [2024-12-16T13:19:59.985Z] Total : 5811.73 22.70 0.00 0.00 21940.38 0.00 45371.08
00:15:45.411 [2024-12-16 13:19:59.444123] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:15:45.411 0
00:15:45.411 13:19:59 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-12-16 13:19:59.557963] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
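The three bdevperf passes traced here are driven entirely over RPC: bdevperf idles in -z mode (its command line is visible in the timing_exit entry further down) until bdevperf.py issues perform_tests, and each call blocks until the run completes and prints its Latency(us) table. A minimal standalone sketch of that flow, assuming this run's repo layout (the --json config flag and its path are assumptions; the -q/-w/-t/-o values are the ones used above):

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/bdevperf" -z -T ftl0 --json "$SPDK/test/ftl/config/ftl.json" &
bdevperf_pid=$!
# same three workloads as in this log: qd1 large writes, qd128 4k writes, qd128 verify
"$SPDK/examples/bdev/bdevperf/bdevperf.py" perform_tests -q 1   -w randwrite -t 4 -o 69632
"$SPDK/examples/bdev/bdevperf/bdevperf.py" perform_tests -q 128 -w randwrite -t 4 -o 4096
"$SPDK/examples/bdev/bdevperf/bdevperf.py" perform_tests -q 128 -w verify    -t 4 -o 4096
kill "$bdevperf_pid"; wait "$bdevperf_pid"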
00:15:49.624 00:15:49.624 Latency(us)
00:15:49.624 [2024-12-16T13:20:04.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:49.624 [2024-12-16T13:20:04.198Z] Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:49.624 Verification LBA range: start 0x0 length 0x1400000
00:15:49.624 ftl0 : 4.01 9082.27 35.48 0.00 0.00 14061.22 206.38 53638.70
00:15:49.624 [2024-12-16T13:20:04.198Z] ===================================================================================================================
00:15:49.624 [2024-12-16T13:20:04.198Z] Total : 9082.27 35.48 0.00 0.00 14061.22 0.00 53638.70
00:15:49.624 [2024-12-16 13:20:03.581377] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:15:49.624 0
00:15:49.624 13:20:03 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-12-16 13:20:03.784459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.624 [2024-12-16 13:20:03.784518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:49.624 [2024-12-16 13:20:03.784535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:49.624 [2024-12-16 13:20:03.784543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.624 [2024-12-16 13:20:03.784568] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:49.624 [2024-12-16 13:20:03.787441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.624 [2024-12-16 13:20:03.787492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:49.624 [2024-12-16 13:20:03.787503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.857 ms 00:15:49.624 [2024-12-16 13:20:03.787517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.624 [2024-12-16 13:20:03.790830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.624 [2024-12-16 13:20:03.790989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:49.624 [2024-12-16 13:20:03.791010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.285 ms 00:15:49.624 [2024-12-16 13:20:03.791020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.624 [2024-12-16 13:20:03.990764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.624 [2024-12-16 13:20:03.990955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:49.624 [2024-12-16 13:20:03.990982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 199.720 ms 00:15:49.624 [2024-12-16 13:20:03.990993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.624 [2024-12-16 13:20:03.997196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.624 [2024-12-16 13:20:03.997241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:49.624 [2024-12-16 13:20:03.997254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.171 ms 00:15:49.624 [2024-12-16 13:20:03.997265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.624 [2024-12-16 13:20:04.023985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.624 [2024-12-16 13:20:04.024048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:15:49.624 [2024-12-16 13:20:04.024061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.647 ms 00:15:49.624 [2024-12-16 13:20:04.024074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.624 [2024-12-16 13:20:04.042796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.624 [2024-12-16 13:20:04.042847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:49.624 [2024-12-16 13:20:04.042860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.676 ms 00:15:49.624 [2024-12-16 13:20:04.042870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.624 [2024-12-16 13:20:04.043027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.624 [2024-12-16 13:20:04.043043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:49.624 [2024-12-16 13:20:04.043053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:15:49.624 [2024-12-16 13:20:04.043063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.624 [2024-12-16 13:20:04.069080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.624 [2024-12-16 13:20:04.069127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:49.624 [2024-12-16 13:20:04.069139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.001 ms 00:15:49.624 [2024-12-16 13:20:04.069149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.624 [2024-12-16 13:20:04.094804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.624 [2024-12-16 13:20:04.094851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:49.624 [2024-12-16 13:20:04.094863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.612 ms 00:15:49.624 [2024-12-16 13:20:04.094876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.624 [2024-12-16 13:20:04.119675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.624 [2024-12-16 13:20:04.119858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:49.624 [2024-12-16 13:20:04.119878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.757 ms 00:15:49.624 [2024-12-16 13:20:04.119888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.624 [2024-12-16 13:20:04.144541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.624 [2024-12-16 13:20:04.144592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:49.624 [2024-12-16 13:20:04.144604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.538 ms 00:15:49.624 [2024-12-16 13:20:04.144613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.624 [2024-12-16 13:20:04.144671] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:49.624 [2024-12-16 13:20:04.144691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:49.624 [2024-12-16 13:20:04.144702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:49.624 [2024-12-16 13:20:04.144713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144720] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 
13:20:04.144945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.144998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:15:49.625 [2024-12-16 13:20:04.145169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:49.625 [2024-12-16 13:20:04.145524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:49.626 [2024-12-16 13:20:04.145532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:49.626 [2024-12-16 13:20:04.145542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:49.626 [2024-12-16 13:20:04.145549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:49.626 [2024-12-16 13:20:04.145558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:49.626 [2024-12-16 13:20:04.145565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:49.626 [2024-12-16 13:20:04.145585] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:49.626 [2024-12-16 13:20:04.145600] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dc868303-a255-4efc-afb3-6df212e7a45d 00:15:49.626 [2024-12-16 13:20:04.145614] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:49.626 
[2024-12-16 13:20:04.145622] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:49.626 [2024-12-16 13:20:04.145647] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:49.626 [2024-12-16 13:20:04.145655] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:49.626 [2024-12-16 13:20:04.145665] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:49.626 [2024-12-16 13:20:04.145676] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:49.626 [2024-12-16 13:20:04.145686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:49.626 [2024-12-16 13:20:04.145692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:49.626 [2024-12-16 13:20:04.145701] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:49.626 [2024-12-16 13:20:04.145708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.626 [2024-12-16 13:20:04.145717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:49.626 [2024-12-16 13:20:04.145726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.038 ms 00:15:49.626 [2024-12-16 13:20:04.145736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.626 [2024-12-16 13:20:04.159178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.626 [2024-12-16 13:20:04.159346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:49.626 [2024-12-16 13:20:04.159364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.406 ms 00:15:49.626 [2024-12-16 13:20:04.159380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.626 [2024-12-16 13:20:04.159602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.626 [2024-12-16 13:20:04.159615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:49.626 [2024-12-16 13:20:04.159654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:15:49.626 [2024-12-16 13:20:04.159666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.887 [2024-12-16 13:20:04.201166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.887 [2024-12-16 13:20:04.201216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:49.887 [2024-12-16 13:20:04.201231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.887 [2024-12-16 13:20:04.201241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.887 [2024-12-16 13:20:04.201308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.887 [2024-12-16 13:20:04.201319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:49.887 [2024-12-16 13:20:04.201327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.887 [2024-12-16 13:20:04.201336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.887 [2024-12-16 13:20:04.201410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.887 [2024-12-16 13:20:04.201423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:49.887 [2024-12-16 13:20:04.201431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.887 [2024-12-16 13:20:04.201447] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.887 [2024-12-16 13:20:04.201463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.887 [2024-12-16 13:20:04.201473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:49.887 [2024-12-16 13:20:04.201480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.887 [2024-12-16 13:20:04.201490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.887 [2024-12-16 13:20:04.281356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.887 [2024-12-16 13:20:04.281585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:49.887 [2024-12-16 13:20:04.281606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.887 [2024-12-16 13:20:04.281620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.887 [2024-12-16 13:20:04.313079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.887 [2024-12-16 13:20:04.313132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:49.887 [2024-12-16 13:20:04.313144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.887 [2024-12-16 13:20:04.313154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.887 [2024-12-16 13:20:04.313226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.887 [2024-12-16 13:20:04.313239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:49.887 [2024-12-16 13:20:04.313248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.887 [2024-12-16 13:20:04.313262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.887 [2024-12-16 13:20:04.313309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.887 [2024-12-16 13:20:04.313323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:49.887 [2024-12-16 13:20:04.313331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.887 [2024-12-16 13:20:04.313341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.887 [2024-12-16 13:20:04.313446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.887 [2024-12-16 13:20:04.313459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:49.887 [2024-12-16 13:20:04.313467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.887 [2024-12-16 13:20:04.313477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.887 [2024-12-16 13:20:04.313508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.887 [2024-12-16 13:20:04.313523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:49.887 [2024-12-16 13:20:04.313531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.887 [2024-12-16 13:20:04.313541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.887 [2024-12-16 13:20:04.313583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.887 [2024-12-16 13:20:04.313594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:49.887 [2024-12-16 13:20:04.313603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:15:49.887 [2024-12-16 13:20:04.313615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.887 [2024-12-16 13:20:04.313700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.887 [2024-12-16 13:20:04.313713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:49.887 [2024-12-16 13:20:04.313721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.887 [2024-12-16 13:20:04.313731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.887 [2024-12-16 13:20:04.313876] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 529.370 ms, result 0 00:15:49.887 true 00:15:49.887 13:20:04 -- ftl/bdevperf.sh@37 -- # killprocess 71333 00:15:49.887 13:20:04 -- common/autotest_common.sh@936 -- # '[' -z 71333 ']' 00:15:49.887 13:20:04 -- common/autotest_common.sh@940 -- # kill -0 71333 00:15:49.887 13:20:04 -- common/autotest_common.sh@941 -- # uname 00:15:49.887 13:20:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:49.887 13:20:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71333 00:15:49.887
killing process with pid 71333
Received shutdown signal, test time was about 4.000000 seconds
00:15:49.887
00:15:49.887 Latency(us)
[2024-12-16T13:20:04.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-16T13:20:04.461Z] ===================================================================================================================
[2024-12-16T13:20:04.461Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:20:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:49.887 13:20:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:49.887 13:20:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71333' 00:15:49.887 13:20:04 -- common/autotest_common.sh@955 -- # kill 71333 00:15:49.887 13:20:04 -- common/autotest_common.sh@960 -- # wait 71333 00:15:50.830 13:20:05 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:15:50.830 13:20:05 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:15:50.830 13:20:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:50.830 13:20:05 -- common/autotest_common.sh@10 -- # set +x 00:15:50.830 Remove shared memory files 13:20:05 -- ftl/bdevperf.sh@41 -- # remove_shm 00:15:50.830 13:20:05 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:50.830 13:20:05 -- ftl/common.sh@205 -- # rm -f rm -f 00:15:50.830 13:20:05 -- ftl/common.sh@206 -- # rm -f rm -f 00:15:50.830 13:20:05 -- ftl/common.sh@207 -- # rm -f rm -f 00:15:50.830 13:20:05 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:50.830 13:20:05 -- ftl/common.sh@209 -- # rm -f rm -f 00:15:50.830 ************************************ 00:15:50.830 END TEST ftl_bdevperf 00:15:50.830 ************************************ 00:15:50.830 00:15:50.830 real 0m21.905s 00:15:50.830 user 0m24.180s 00:15:50.830 sys 0m0.881s 00:15:50.830 13:20:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:50.830 13:20:05 -- common/autotest_common.sh@10 -- # set +x 00:15:50.830 13:20:05 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:15:50.831 13:20:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
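The killprocess xtrace above follows a fixed pattern: verify the pid is set and alive, look up its command name on Linux, special-case sudo, then kill and reap. A readable reconstruction inferred from that trace (a sketch; the real helper lives in common/autotest_common.sh and is more involved):

killprocess() {
    local pid=$1 process_name
    [[ -n "$pid" ]] || return 1              # mirrors: '[' -z 71333 ']'
    kill -0 "$pid" || return 1               # process must still exist
    if [[ "$(uname)" == "Linux" ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [[ "$process_name" == "sudo" ]]; then
        :   # the real helper signals sudo's child instead; omitted here
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                      # reap; ignore the exit status
}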
00:15:50.831 13:20:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:50.831 13:20:05 -- common/autotest_common.sh@10 -- # set +x 00:15:50.831 ************************************ 00:15:50.831 START TEST ftl_trim 00:15:50.831 ************************************ 00:15:50.831 13:20:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:15:51.092 * Looking for test storage... 00:15:51.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:51.092 13:20:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:51.092 13:20:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:51.092 13:20:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:51.092 13:20:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:51.092 13:20:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:51.092 13:20:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:51.092 13:20:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:51.092 13:20:05 -- scripts/common.sh@335 -- # IFS=.-: 00:15:51.092 13:20:05 -- scripts/common.sh@335 -- # read -ra ver1 00:15:51.092 13:20:05 -- scripts/common.sh@336 -- # IFS=.-: 00:15:51.092 13:20:05 -- scripts/common.sh@336 -- # read -ra ver2 00:15:51.092 13:20:05 -- scripts/common.sh@337 -- # local 'op=<' 00:15:51.092 13:20:05 -- scripts/common.sh@339 -- # ver1_l=2 00:15:51.092 13:20:05 -- scripts/common.sh@340 -- # ver2_l=1 00:15:51.092 13:20:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:51.092 13:20:05 -- scripts/common.sh@343 -- # case "$op" in 00:15:51.092 13:20:05 -- scripts/common.sh@344 -- # : 1 00:15:51.092 13:20:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:51.092 13:20:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:51.092 13:20:05 -- scripts/common.sh@364 -- # decimal 1 00:15:51.092 13:20:05 -- scripts/common.sh@352 -- # local d=1 00:15:51.092 13:20:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:51.092 13:20:05 -- scripts/common.sh@354 -- # echo 1 00:15:51.092 13:20:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:51.092 13:20:05 -- scripts/common.sh@365 -- # decimal 2 00:15:51.092 13:20:05 -- scripts/common.sh@352 -- # local d=2 00:15:51.092 13:20:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:51.092 13:20:05 -- scripts/common.sh@354 -- # echo 2 00:15:51.092 13:20:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:51.092 13:20:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:51.092 13:20:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:51.092 13:20:05 -- scripts/common.sh@367 -- # return 0 00:15:51.092 13:20:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:51.092 13:20:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:51.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.092 --rc genhtml_branch_coverage=1 00:15:51.092 --rc genhtml_function_coverage=1 00:15:51.092 --rc genhtml_legend=1 00:15:51.092 --rc geninfo_all_blocks=1 00:15:51.092 --rc geninfo_unexecuted_blocks=1 00:15:51.092 00:15:51.092 ' 00:15:51.092 13:20:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:51.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.092 --rc genhtml_branch_coverage=1 00:15:51.092 --rc genhtml_function_coverage=1 00:15:51.092 --rc genhtml_legend=1 00:15:51.092 --rc geninfo_all_blocks=1 00:15:51.092 --rc geninfo_unexecuted_blocks=1 00:15:51.092 00:15:51.092 ' 00:15:51.092 13:20:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:51.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.092 --rc genhtml_branch_coverage=1 00:15:51.092 --rc genhtml_function_coverage=1 00:15:51.092 --rc genhtml_legend=1 00:15:51.092 --rc geninfo_all_blocks=1 00:15:51.092 --rc geninfo_unexecuted_blocks=1 00:15:51.092 00:15:51.092 ' 00:15:51.092 13:20:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:51.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.092 --rc genhtml_branch_coverage=1 00:15:51.092 --rc genhtml_function_coverage=1 00:15:51.092 --rc genhtml_legend=1 00:15:51.092 --rc geninfo_all_blocks=1 00:15:51.092 --rc geninfo_unexecuted_blocks=1 00:15:51.092 00:15:51.092 ' 00:15:51.092 13:20:05 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:51.092 13:20:05 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:15:51.092 13:20:05 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:51.092 13:20:05 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:51.092 13:20:05 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
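The scripts/common.sh trace a few lines above is the lcov version gate: cmp_versions splits both version strings on '.', '-' and ':' and compares them field by field, so lt 1.15 2 succeeds here (1 < 2 in the first field) and the LCOV_OPTS exports follow. A condensed sketch of that logic (the real script also normalizes each field through a decimal helper, omitted here):

cmp_versions() {
    local IFS=.-: op=$2
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && { [[ $op == '>' || $op == '>=' ]]; return; }
        (( a < b )) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == *'='* ]]   # equal versions satisfy only <=, >=, ==
}
lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> success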
00:15:51.092 13:20:05 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:51.092 13:20:05 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:51.092 13:20:05 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:51.092 13:20:05 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:51.092 13:20:05 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:51.092 13:20:05 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:51.093 13:20:05 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:51.093 13:20:05 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:51.093 13:20:05 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:51.093 13:20:05 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:51.093 13:20:05 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:51.093 13:20:05 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:51.093 13:20:05 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:51.093 13:20:05 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:51.093 13:20:05 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:51.093 13:20:05 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:51.093 13:20:05 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:51.093 13:20:05 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:51.093 13:20:05 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:51.093 13:20:05 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:51.093 13:20:05 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:51.093 13:20:05 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:51.093 13:20:05 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:51.093 13:20:05 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:51.093 13:20:05 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:51.093 13:20:05 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:15:51.093 13:20:05 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:15:51.093 13:20:05 -- ftl/trim.sh@25 -- # timeout=240 00:15:51.093 13:20:05 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:15:51.093 13:20:05 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:15:51.093 13:20:05 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:15:51.093 13:20:05 -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:15:51.093 13:20:05 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:15:51.093 13:20:05 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:51.093 13:20:05 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:51.093 13:20:05 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:51.093 13:20:05 -- ftl/trim.sh@40 -- # svcpid=71694 00:15:51.093 13:20:05 -- ftl/trim.sh@41 -- # waitforlisten 71694 00:15:51.093 13:20:05 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:15:51.093 13:20:05 -- common/autotest_common.sh@829 -- # '[' -z 71694 ']' 00:15:51.093 13:20:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.093 
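Before any bdev RPC, trim.sh brings up its own target: spdk_tgt is launched on three cores (-m 0x7) and waitforlisten polls until the UNIX-domain RPC socket at /var/tmp/spdk.sock answers, as the trace around here shows. A minimal equivalent of that startup dance (the rpc_get_methods probe is an assumption; the real waitforlisten in autotest_common.sh retries with a bounded max_retries and more error handling):

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" -m 0x7 &
svcpid=$!
# poll the default RPC socket until the target is ready, or bail if it died
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$svcpid" || { echo "spdk_tgt exited before listening" >&2; exit 1; }
    sleep 0.1
done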
13:20:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:51.093 13:20:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.093 13:20:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:51.093 13:20:05 -- common/autotest_common.sh@10 -- # set +x 00:15:51.093 [2024-12-16 13:20:05.645720] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:51.093 [2024-12-16 13:20:05.646096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71694 ] 00:15:51.354 [2024-12-16 13:20:05.802698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:51.616 [2024-12-16 13:20:06.026470] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:51.616 [2024-12-16 13:20:06.027077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.616 [2024-12-16 13:20:06.027467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.616 [2024-12-16 13:20:06.027467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.001 13:20:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.001 13:20:07 -- common/autotest_common.sh@862 -- # return 0 00:15:53.001 13:20:07 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:53.001 13:20:07 -- ftl/common.sh@54 -- # local name=nvme0 00:15:53.001 13:20:07 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:53.001 13:20:07 -- ftl/common.sh@56 -- # local size=103424 00:15:53.001 13:20:07 -- ftl/common.sh@59 -- # local base_bdev 00:15:53.001 13:20:07 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:53.001 13:20:07 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:53.001 13:20:07 -- ftl/common.sh@62 -- # local base_size 00:15:53.001 13:20:07 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:53.001 13:20:07 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:15:53.001 13:20:07 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:53.001 13:20:07 -- common/autotest_common.sh@1369 -- # local bs 00:15:53.001 13:20:07 -- common/autotest_common.sh@1370 -- # local nb 00:15:53.001 13:20:07 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:53.262 13:20:07 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:53.262 { 00:15:53.262 "name": "nvme0n1", 00:15:53.262 "aliases": [ 00:15:53.262 "f2b3c6c6-3e40-4ee1-8beb-e6981a0fcaed" 00:15:53.262 ], 00:15:53.262 "product_name": "NVMe disk", 00:15:53.262 "block_size": 4096, 00:15:53.262 "num_blocks": 1310720, 00:15:53.262 "uuid": "f2b3c6c6-3e40-4ee1-8beb-e6981a0fcaed", 00:15:53.263 "assigned_rate_limits": { 00:15:53.263 "rw_ios_per_sec": 0, 00:15:53.263 "rw_mbytes_per_sec": 0, 00:15:53.263 "r_mbytes_per_sec": 0, 00:15:53.263 "w_mbytes_per_sec": 0 00:15:53.263 }, 00:15:53.263 "claimed": true, 00:15:53.263 "claim_type": "read_many_write_one", 00:15:53.263 "zoned": false, 00:15:53.263 "supported_io_types": { 00:15:53.263 "read": true, 00:15:53.263 "write": true, 00:15:53.263 "unmap": true, 00:15:53.263 
"write_zeroes": true, 00:15:53.263 "flush": true, 00:15:53.263 "reset": true, 00:15:53.263 "compare": true, 00:15:53.263 "compare_and_write": false, 00:15:53.263 "abort": true, 00:15:53.263 "nvme_admin": true, 00:15:53.263 "nvme_io": true 00:15:53.263 }, 00:15:53.263 "driver_specific": { 00:15:53.263 "nvme": [ 00:15:53.263 { 00:15:53.263 "pci_address": "0000:00:07.0", 00:15:53.263 "trid": { 00:15:53.263 "trtype": "PCIe", 00:15:53.263 "traddr": "0000:00:07.0" 00:15:53.263 }, 00:15:53.263 "ctrlr_data": { 00:15:53.263 "cntlid": 0, 00:15:53.263 "vendor_id": "0x1b36", 00:15:53.263 "model_number": "QEMU NVMe Ctrl", 00:15:53.263 "serial_number": "12341", 00:15:53.263 "firmware_revision": "8.0.0", 00:15:53.263 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:53.263 "oacs": { 00:15:53.263 "security": 0, 00:15:53.263 "format": 1, 00:15:53.263 "firmware": 0, 00:15:53.263 "ns_manage": 1 00:15:53.263 }, 00:15:53.263 "multi_ctrlr": false, 00:15:53.263 "ana_reporting": false 00:15:53.263 }, 00:15:53.263 "vs": { 00:15:53.263 "nvme_version": "1.4" 00:15:53.263 }, 00:15:53.263 "ns_data": { 00:15:53.263 "id": 1, 00:15:53.263 "can_share": false 00:15:53.263 } 00:15:53.263 } 00:15:53.263 ], 00:15:53.263 "mp_policy": "active_passive" 00:15:53.263 } 00:15:53.263 } 00:15:53.263 ]' 00:15:53.263 13:20:07 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:53.263 13:20:07 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:53.263 13:20:07 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:53.263 13:20:07 -- common/autotest_common.sh@1373 -- # nb=1310720 00:15:53.263 13:20:07 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:15:53.263 13:20:07 -- common/autotest_common.sh@1377 -- # echo 5120 00:15:53.263 13:20:07 -- ftl/common.sh@63 -- # base_size=5120 00:15:53.263 13:20:07 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:53.263 13:20:07 -- ftl/common.sh@67 -- # clear_lvols 00:15:53.263 13:20:07 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:53.263 13:20:07 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:53.525 13:20:07 -- ftl/common.sh@28 -- # stores=1bf5ab2e-fe97-4beb-966f-89384d5f82a8 00:15:53.525 13:20:07 -- ftl/common.sh@29 -- # for lvs in $stores 00:15:53.525 13:20:07 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1bf5ab2e-fe97-4beb-966f-89384d5f82a8 00:15:53.786 13:20:08 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:54.046 13:20:08 -- ftl/common.sh@68 -- # lvs=a4a99c48-f792-4ecf-a215-4f269cef5c50 00:15:54.046 13:20:08 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a4a99c48-f792-4ecf-a215-4f269cef5c50 00:15:54.046 13:20:08 -- ftl/trim.sh@43 -- # split_bdev=83aee627-dcf1-4fe7-8aeb-272f01d115f7 00:15:54.046 13:20:08 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 83aee627-dcf1-4fe7-8aeb-272f01d115f7 00:15:54.046 13:20:08 -- ftl/common.sh@35 -- # local name=nvc0 00:15:54.046 13:20:08 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:15:54.046 13:20:08 -- ftl/common.sh@37 -- # local base_bdev=83aee627-dcf1-4fe7-8aeb-272f01d115f7 00:15:54.046 13:20:08 -- ftl/common.sh@38 -- # local cache_size= 00:15:54.046 13:20:08 -- ftl/common.sh@41 -- # get_bdev_size 83aee627-dcf1-4fe7-8aeb-272f01d115f7 00:15:54.046 13:20:08 -- common/autotest_common.sh@1367 -- # local bdev_name=83aee627-dcf1-4fe7-8aeb-272f01d115f7 00:15:54.046 13:20:08 -- 
common/autotest_common.sh@1368 -- # local bdev_info 00:15:54.046 13:20:08 -- common/autotest_common.sh@1369 -- # local bs 00:15:54.046 13:20:08 -- common/autotest_common.sh@1370 -- # local nb 00:15:54.046 13:20:08 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 83aee627-dcf1-4fe7-8aeb-272f01d115f7 00:15:54.304 13:20:08 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:54.304 { 00:15:54.304 "name": "83aee627-dcf1-4fe7-8aeb-272f01d115f7", 00:15:54.304 "aliases": [ 00:15:54.304 "lvs/nvme0n1p0" 00:15:54.304 ], 00:15:54.304 "product_name": "Logical Volume", 00:15:54.304 "block_size": 4096, 00:15:54.304 "num_blocks": 26476544, 00:15:54.304 "uuid": "83aee627-dcf1-4fe7-8aeb-272f01d115f7", 00:15:54.304 "assigned_rate_limits": { 00:15:54.304 "rw_ios_per_sec": 0, 00:15:54.304 "rw_mbytes_per_sec": 0, 00:15:54.304 "r_mbytes_per_sec": 0, 00:15:54.304 "w_mbytes_per_sec": 0 00:15:54.304 }, 00:15:54.304 "claimed": false, 00:15:54.304 "zoned": false, 00:15:54.304 "supported_io_types": { 00:15:54.304 "read": true, 00:15:54.304 "write": true, 00:15:54.304 "unmap": true, 00:15:54.304 "write_zeroes": true, 00:15:54.304 "flush": false, 00:15:54.304 "reset": true, 00:15:54.304 "compare": false, 00:15:54.304 "compare_and_write": false, 00:15:54.304 "abort": false, 00:15:54.304 "nvme_admin": false, 00:15:54.304 "nvme_io": false 00:15:54.304 }, 00:15:54.304 "driver_specific": { 00:15:54.304 "lvol": { 00:15:54.304 "lvol_store_uuid": "a4a99c48-f792-4ecf-a215-4f269cef5c50", 00:15:54.304 "base_bdev": "nvme0n1", 00:15:54.304 "thin_provision": true, 00:15:54.304 "snapshot": false, 00:15:54.304 "clone": false, 00:15:54.304 "esnap_clone": false 00:15:54.304 } 00:15:54.304 } 00:15:54.304 } 00:15:54.304 ]' 00:15:54.304 13:20:08 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:54.304 13:20:08 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:54.304 13:20:08 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:54.304 13:20:08 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:54.304 13:20:08 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:54.304 13:20:08 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:54.304 13:20:08 -- ftl/common.sh@41 -- # local base_size=5171 00:15:54.304 13:20:08 -- ftl/common.sh@44 -- # local nvc_bdev 00:15:54.304 13:20:08 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:15:54.563 13:20:09 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:54.563 13:20:09 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:54.563 13:20:09 -- ftl/common.sh@48 -- # get_bdev_size 83aee627-dcf1-4fe7-8aeb-272f01d115f7 00:15:54.563 13:20:09 -- common/autotest_common.sh@1367 -- # local bdev_name=83aee627-dcf1-4fe7-8aeb-272f01d115f7 00:15:54.563 13:20:09 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:54.563 13:20:09 -- common/autotest_common.sh@1369 -- # local bs 00:15:54.563 13:20:09 -- common/autotest_common.sh@1370 -- # local nb 00:15:54.563 13:20:09 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 83aee627-dcf1-4fe7-8aeb-272f01d115f7 00:15:54.821 13:20:09 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:54.821 { 00:15:54.821 "name": "83aee627-dcf1-4fe7-8aeb-272f01d115f7", 00:15:54.821 "aliases": [ 00:15:54.821 "lvs/nvme0n1p0" 00:15:54.821 ], 00:15:54.821 "product_name": "Logical Volume", 00:15:54.821 "block_size": 4096, 00:15:54.821 "num_blocks": 26476544, 
00:15:54.821 "uuid": "83aee627-dcf1-4fe7-8aeb-272f01d115f7", 00:15:54.821 "assigned_rate_limits": { 00:15:54.821 "rw_ios_per_sec": 0, 00:15:54.821 "rw_mbytes_per_sec": 0, 00:15:54.821 "r_mbytes_per_sec": 0, 00:15:54.821 "w_mbytes_per_sec": 0 00:15:54.821 }, 00:15:54.821 "claimed": false, 00:15:54.821 "zoned": false, 00:15:54.821 "supported_io_types": { 00:15:54.821 "read": true, 00:15:54.821 "write": true, 00:15:54.821 "unmap": true, 00:15:54.821 "write_zeroes": true, 00:15:54.821 "flush": false, 00:15:54.821 "reset": true, 00:15:54.821 "compare": false, 00:15:54.821 "compare_and_write": false, 00:15:54.821 "abort": false, 00:15:54.821 "nvme_admin": false, 00:15:54.821 "nvme_io": false 00:15:54.821 }, 00:15:54.821 "driver_specific": { 00:15:54.821 "lvol": { 00:15:54.821 "lvol_store_uuid": "a4a99c48-f792-4ecf-a215-4f269cef5c50", 00:15:54.821 "base_bdev": "nvme0n1", 00:15:54.821 "thin_provision": true, 00:15:54.821 "snapshot": false, 00:15:54.821 "clone": false, 00:15:54.821 "esnap_clone": false 00:15:54.821 } 00:15:54.821 } 00:15:54.821 } 00:15:54.821 ]' 00:15:54.821 13:20:09 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:54.821 13:20:09 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:54.821 13:20:09 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:54.821 13:20:09 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:54.821 13:20:09 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:54.821 13:20:09 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:54.821 13:20:09 -- ftl/common.sh@48 -- # cache_size=5171 00:15:54.821 13:20:09 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:55.079 13:20:09 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:15:55.079 13:20:09 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:15:55.079 13:20:09 -- ftl/trim.sh@47 -- # get_bdev_size 83aee627-dcf1-4fe7-8aeb-272f01d115f7 00:15:55.079 13:20:09 -- common/autotest_common.sh@1367 -- # local bdev_name=83aee627-dcf1-4fe7-8aeb-272f01d115f7 00:15:55.079 13:20:09 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:55.079 13:20:09 -- common/autotest_common.sh@1369 -- # local bs 00:15:55.079 13:20:09 -- common/autotest_common.sh@1370 -- # local nb 00:15:55.079 13:20:09 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 83aee627-dcf1-4fe7-8aeb-272f01d115f7 00:15:55.338 13:20:09 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:55.338 { 00:15:55.338 "name": "83aee627-dcf1-4fe7-8aeb-272f01d115f7", 00:15:55.338 "aliases": [ 00:15:55.338 "lvs/nvme0n1p0" 00:15:55.338 ], 00:15:55.338 "product_name": "Logical Volume", 00:15:55.338 "block_size": 4096, 00:15:55.338 "num_blocks": 26476544, 00:15:55.338 "uuid": "83aee627-dcf1-4fe7-8aeb-272f01d115f7", 00:15:55.338 "assigned_rate_limits": { 00:15:55.338 "rw_ios_per_sec": 0, 00:15:55.338 "rw_mbytes_per_sec": 0, 00:15:55.338 "r_mbytes_per_sec": 0, 00:15:55.338 "w_mbytes_per_sec": 0 00:15:55.338 }, 00:15:55.338 "claimed": false, 00:15:55.338 "zoned": false, 00:15:55.338 "supported_io_types": { 00:15:55.338 "read": true, 00:15:55.338 "write": true, 00:15:55.338 "unmap": true, 00:15:55.338 "write_zeroes": true, 00:15:55.338 "flush": false, 00:15:55.338 "reset": true, 00:15:55.338 "compare": false, 00:15:55.338 "compare_and_write": false, 00:15:55.338 "abort": false, 00:15:55.338 "nvme_admin": false, 00:15:55.338 "nvme_io": false 00:15:55.338 }, 00:15:55.338 "driver_specific": { 00:15:55.338 "lvol": { 00:15:55.338 
"lvol_store_uuid": "a4a99c48-f792-4ecf-a215-4f269cef5c50", 00:15:55.338 "base_bdev": "nvme0n1", 00:15:55.338 "thin_provision": true, 00:15:55.338 "snapshot": false, 00:15:55.338 "clone": false, 00:15:55.339 "esnap_clone": false 00:15:55.339 } 00:15:55.339 } 00:15:55.339 } 00:15:55.339 ]' 00:15:55.339 13:20:09 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:55.339 13:20:09 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:55.339 13:20:09 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:55.339 13:20:09 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:55.339 13:20:09 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:55.339 13:20:09 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:55.339 13:20:09 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:15:55.339 13:20:09 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 83aee627-dcf1-4fe7-8aeb-272f01d115f7 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:15:55.597 [2024-12-16 13:20:09.943998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.597 [2024-12-16 13:20:09.944122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:55.597 [2024-12-16 13:20:09.944174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:55.597 [2024-12-16 13:20:09.944194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.597 [2024-12-16 13:20:09.946406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.597 [2024-12-16 13:20:09.946499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:55.597 [2024-12-16 13:20:09.946574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.162 ms 00:15:55.597 [2024-12-16 13:20:09.946592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.597 [2024-12-16 13:20:09.946681] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:55.597 [2024-12-16 13:20:09.947283] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:55.597 [2024-12-16 13:20:09.947368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.597 [2024-12-16 13:20:09.947415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:55.597 [2024-12-16 13:20:09.947436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:15:55.597 [2024-12-16 13:20:09.947451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.597 [2024-12-16 13:20:09.947565] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8943dfba-ecda-4808-8e9b-139434979057 00:15:55.597 [2024-12-16 13:20:09.948779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.597 [2024-12-16 13:20:09.948862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:55.597 [2024-12-16 13:20:09.948874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:15:55.598 [2024-12-16 13:20:09.948881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.598 [2024-12-16 13:20:09.954032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.598 [2024-12-16 13:20:09.954057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:55.598 
00:15:55.339 13:20:09 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:15:55.339 13:20:09 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 83aee627-dcf1-4fe7-8aeb-272f01d115f7 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:15:55.597 [2024-12-16 13:20:09.943998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.597 [2024-12-16 13:20:09.944122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:55.597 [2024-12-16 13:20:09.944174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:55.597 [2024-12-16 13:20:09.944194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.597 [2024-12-16 13:20:09.946406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.597 [2024-12-16 13:20:09.946499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:55.597 [2024-12-16 13:20:09.946574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.162 ms 00:15:55.597 [2024-12-16 13:20:09.946592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.597 [2024-12-16 13:20:09.946681] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:55.597 [2024-12-16 13:20:09.947283] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:55.597 [2024-12-16 13:20:09.947368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.597 [2024-12-16 13:20:09.947415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:55.597 [2024-12-16 13:20:09.947436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:15:55.597 [2024-12-16 13:20:09.947451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.597 [2024-12-16 13:20:09.947565] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8943dfba-ecda-4808-8e9b-139434979057 00:15:55.597 [2024-12-16 13:20:09.948779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.597 [2024-12-16 13:20:09.948862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:55.597 [2024-12-16 13:20:09.948874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:15:55.597 [2024-12-16 13:20:09.948881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.597 [2024-12-16 13:20:09.954032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.598 [2024-12-16 13:20:09.954057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:55.598 [2024-12-16 13:20:09.954064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.074 ms 00:15:55.598 [2024-12-16 13:20:09.954071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.598 [2024-12-16 13:20:09.954168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.598 [2024-12-16 13:20:09.954178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:55.598 [2024-12-16 13:20:09.954184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:15:55.598 [2024-12-16 13:20:09.954194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.598 [2024-12-16 13:20:09.954228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.598 [2024-12-16 13:20:09.954236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:55.598 [2024-12-16 13:20:09.954242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:55.598 [2024-12-16 13:20:09.954249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.598 [2024-12-16 13:20:09.954289] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:55.598 [2024-12-16 13:20:09.957284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.598 [2024-12-16 13:20:09.957308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:55.598 [2024-12-16 13:20:09.957317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.999 ms 00:15:55.598 [2024-12-16 13:20:09.957323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.598 [2024-12-16 13:20:09.957373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.598 [2024-12-16 13:20:09.957380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:55.598 [2024-12-16 13:20:09.957388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:55.598 [2024-12-16 13:20:09.957393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.598 [2024-12-16 13:20:09.957421] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:55.598 [2024-12-16 13:20:09.957503] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:55.598 [2024-12-16 13:20:09.957515] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:55.598 [2024-12-16 13:20:09.957524] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:55.598 [2024-12-16 13:20:09.957532] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:55.598 [2024-12-16 13:20:09.957539] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:55.598 [2024-12-16 13:20:09.957549] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:55.598 [2024-12-16 13:20:09.957554] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:55.598 [2024-12-16 13:20:09.957561] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:55.598 [2024-12-16 13:20:09.957567] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4
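The layout numbers above fix the size of the L2P (logical-to-physical) table: 23592960 entries at an address size of 4 bytes is 94371840 bytes, exactly the 90.00 MiB that the l2p region occupies in the NV cache layout dump below. With --l2p_dram_limit 60 passed to bdev_ftl_create, only about 60 MiB of that table may stay resident in DRAM at once, which is why a later line in this log reports "l2p maximum resident size is: 59 (of 60) MiB".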
00:15:55.598 [2024-12-16 13:20:09.957574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.598 [2024-12-16 13:20:09.957579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:55.598 [2024-12-16 13:20:09.957586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:15:55.598 [2024-12-16 13:20:09.957591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.598 [2024-12-16 13:20:09.957680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.598 [2024-12-16 13:20:09.957687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:55.598 [2024-12-16 13:20:09.957697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:55.598 [2024-12-16 13:20:09.957703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.598 [2024-12-16 13:20:09.957783] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:55.598 [2024-12-16 13:20:09.957790] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:55.598 [2024-12-16 13:20:09.957798] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:55.598 [2024-12-16 13:20:09.957804] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:55.598 [2024-12-16 13:20:09.957811] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:55.598 [2024-12-16 13:20:09.957816] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:55.598 [2024-12-16 13:20:09.957822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:55.598 [2024-12-16 13:20:09.957828] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:55.598 [2024-12-16 13:20:09.957834] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:55.598 [2024-12-16 13:20:09.957839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:55.598 [2024-12-16 13:20:09.957845] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:55.598 [2024-12-16 13:20:09.957850] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:55.598 [2024-12-16 13:20:09.957856] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:55.598 [2024-12-16 13:20:09.957862] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:55.598 [2024-12-16 13:20:09.957869] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:15:55.598 [2024-12-16 13:20:09.957875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:55.598 [2024-12-16 13:20:09.957882] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:55.598 [2024-12-16 13:20:09.957887] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:15:55.598 [2024-12-16 13:20:09.957893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:55.598 [2024-12-16 13:20:09.957898] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:55.598 [2024-12-16 13:20:09.957905] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:15:55.598 [2024-12-16 13:20:09.957910] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:55.598 [2024-12-16 13:20:09.957917] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:55.598 [2024-12-16 13:20:09.957922] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12
MiB 00:15:55.598 [2024-12-16 13:20:09.957928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:55.598 [2024-12-16 13:20:09.957933] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:55.598 [2024-12-16 13:20:09.957939] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:15:55.598 [2024-12-16 13:20:09.957944] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:55.598 [2024-12-16 13:20:09.957951] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:55.598 [2024-12-16 13:20:09.957956] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:55.598 [2024-12-16 13:20:09.957962] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:55.598 [2024-12-16 13:20:09.957967] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:55.598 [2024-12-16 13:20:09.957974] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:15:55.598 [2024-12-16 13:20:09.957979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:55.598 [2024-12-16 13:20:09.957985] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:55.598 [2024-12-16 13:20:09.957990] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:55.598 [2024-12-16 13:20:09.957996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:55.598 [2024-12-16 13:20:09.958001] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:55.598 [2024-12-16 13:20:09.958007] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:15:55.598 [2024-12-16 13:20:09.958012] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:55.598 [2024-12-16 13:20:09.958020] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:55.598 [2024-12-16 13:20:09.958025] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:55.598 [2024-12-16 13:20:09.958032] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:55.598 [2024-12-16 13:20:09.958038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:55.598 [2024-12-16 13:20:09.958047] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:55.598 [2024-12-16 13:20:09.958052] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:55.598 [2024-12-16 13:20:09.958058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:55.598 [2024-12-16 13:20:09.958063] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:55.598 [2024-12-16 13:20:09.958071] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:55.598 [2024-12-16 13:20:09.958076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:55.598 [2024-12-16 13:20:09.958083] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:55.598 [2024-12-16 13:20:09.958090] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:55.598 [2024-12-16 13:20:09.958099] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:55.598 [2024-12-16 13:20:09.958105] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 
ver:1 blk_offs:0x5a20 blk_sz:0x80 00:15:55.598 [2024-12-16 13:20:09.958112] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:15:55.598 [2024-12-16 13:20:09.958118] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:15:55.598 [2024-12-16 13:20:09.958125] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:15:55.598 [2024-12-16 13:20:09.958130] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:15:55.598 [2024-12-16 13:20:09.958137] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:15:55.598 [2024-12-16 13:20:09.958143] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:15:55.598 [2024-12-16 13:20:09.958150] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:15:55.598 [2024-12-16 13:20:09.958156] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:15:55.598 [2024-12-16 13:20:09.958163] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:15:55.598 [2024-12-16 13:20:09.958169] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:15:55.598 [2024-12-16 13:20:09.958178] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:15:55.598 [2024-12-16 13:20:09.958184] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:55.598 [2024-12-16 13:20:09.958191] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:55.598 [2024-12-16 13:20:09.958198] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:55.598 [2024-12-16 13:20:09.958205] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:55.598 [2024-12-16 13:20:09.958211] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:55.598 [2024-12-16 13:20:09.958218] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:55.598 [2024-12-16 13:20:09.958224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.598 [2024-12-16 13:20:09.958230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:55.598 [2024-12-16 13:20:09.958236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:15:55.598 [2024-12-16 13:20:09.958243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.598 [2024-12-16 13:20:09.970462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:15:55.598 [2024-12-16 13:20:09.970494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:55.599 [2024-12-16 13:20:09.970502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.134 ms 00:15:55.599 [2024-12-16 13:20:09.970509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.599 [2024-12-16 13:20:09.970606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.599 [2024-12-16 13:20:09.970616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:55.599 [2024-12-16 13:20:09.970624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:15:55.599 [2024-12-16 13:20:09.970645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.599 [2024-12-16 13:20:09.996071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.599 [2024-12-16 13:20:09.996107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:55.599 [2024-12-16 13:20:09.996116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.396 ms 00:15:55.599 [2024-12-16 13:20:09.996124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.599 [2024-12-16 13:20:09.996181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.599 [2024-12-16 13:20:09.996189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:55.599 [2024-12-16 13:20:09.996196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:15:55.599 [2024-12-16 13:20:09.996205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.599 [2024-12-16 13:20:09.996510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.599 [2024-12-16 13:20:09.996550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:55.599 [2024-12-16 13:20:09.996558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:15:55.599 [2024-12-16 13:20:09.996565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.599 [2024-12-16 13:20:09.996669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.599 [2024-12-16 13:20:09.996679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:55.599 [2024-12-16 13:20:09.996686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:15:55.599 [2024-12-16 13:20:09.996692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.599 [2024-12-16 13:20:10.019378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.599 [2024-12-16 13:20:10.019428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:55.599 [2024-12-16 13:20:10.019444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.659 ms 00:15:55.599 [2024-12-16 13:20:10.019458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.599 [2024-12-16 13:20:10.031995] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:55.599 [2024-12-16 13:20:10.044975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.599 [2024-12-16 13:20:10.045004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:55.599 [2024-12-16 13:20:10.045015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.366 ms 00:15:55.599 
[2024-12-16 13:20:10.045022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.599 [2024-12-16 13:20:10.125065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.599 [2024-12-16 13:20:10.125221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:55.599 [2024-12-16 13:20:10.125239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.975 ms 00:15:55.599 [2024-12-16 13:20:10.125246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.599 [2024-12-16 13:20:10.125309] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:15:55.599 [2024-12-16 13:20:10.125320] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:15:58.129 [2024-12-16 13:20:12.660649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.129 [2024-12-16 13:20:12.660694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:58.129 [2024-12-16 13:20:12.660709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2535.329 ms 00:15:58.129 [2024-12-16 13:20:12.660715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.129 [2024-12-16 13:20:12.660880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.129 [2024-12-16 13:20:12.660891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:58.129 [2024-12-16 13:20:12.660900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:15:58.129 [2024-12-16 13:20:12.660906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.129 [2024-12-16 13:20:12.679238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.129 [2024-12-16 13:20:12.679264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:58.129 [2024-12-16 13:20:12.679276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.301 ms 00:15:58.129 [2024-12-16 13:20:12.679282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.129 [2024-12-16 13:20:12.696374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.129 [2024-12-16 13:20:12.696398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:58.129 [2024-12-16 13:20:12.696410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.037 ms 00:15:58.129 [2024-12-16 13:20:12.696415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.129 [2024-12-16 13:20:12.696681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.129 [2024-12-16 13:20:12.696689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:58.129 [2024-12-16 13:20:12.696696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:15:58.129 [2024-12-16 13:20:12.696703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.387 [2024-12-16 13:20:12.747566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.387 [2024-12-16 13:20:12.747592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:58.387 [2024-12-16 13:20:12.747602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.836 ms 00:15:58.387 [2024-12-16 13:20:12.747608] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.387 [2024-12-16 13:20:12.766139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.387 [2024-12-16 13:20:12.766257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:58.387 [2024-12-16 13:20:12.766274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.464 ms 00:15:58.387 [2024-12-16 13:20:12.766280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.387 [2024-12-16 13:20:12.769786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.387 [2024-12-16 13:20:12.769814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:58.387 [2024-12-16 13:20:12.769825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.448 ms 00:15:58.387 [2024-12-16 13:20:12.769831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.387 [2024-12-16 13:20:12.787834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.387 [2024-12-16 13:20:12.787861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:58.387 [2024-12-16 13:20:12.787870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.969 ms 00:15:58.387 [2024-12-16 13:20:12.787876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.387 [2024-12-16 13:20:12.787927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.387 [2024-12-16 13:20:12.787935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:58.387 [2024-12-16 13:20:12.787943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:58.387 [2024-12-16 13:20:12.787948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.387 [2024-12-16 13:20:12.788017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.387 [2024-12-16 13:20:12.788034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:58.387 [2024-12-16 13:20:12.788042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:15:58.387 [2024-12-16 13:20:12.788048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.387 [2024-12-16 13:20:12.788722] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:58.387 [2024-12-16 13:20:12.791102] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2844.468 ms, result 0 00:15:58.387 [2024-12-16 13:20:12.791918] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:58.387 { 00:15:58.387 "name": "ftl0", 00:15:58.387 "uuid": "8943dfba-ecda-4808-8e9b-139434979057" 00:15:58.387 } 00:15:58.387 13:20:12 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:15:58.387 13:20:12 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:15:58.387 13:20:12 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:58.387 13:20:12 -- common/autotest_common.sh@899 -- # local i 00:15:58.387 13:20:12 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:58.387 13:20:12 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:58.387 13:20:12 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
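At trim.sh@51 the script enters waitforbdev ftl0, which defaults bdev_timeout to 2000 ms, flushes outstanding examine callbacks, and then queries for the bdev with that timeout (the bdev_get_bdevs -t call that follows). A rough sketch of the pattern under those assumptions — simplified relative to the real common/autotest_common.sh helper, which has additional retry logic:

# Sketch: block until bdev $1 is registered; default the timeout to 2000 ms.
waitforbdev_sketch() {
    local bdev_name=$1
    local bdev_timeout=${2:-2000}
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
    # -t lets bdev_get_bdevs wait up to bdev_timeout ms for the bdev to appear
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null
}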
00:15:58.645 13:20:12 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 [ 00:15:58.645 { 00:15:58.645 "name": "ftl0", 00:15:58.645 "aliases": [ 00:15:58.645 "8943dfba-ecda-4808-8e9b-139434979057" 00:15:58.645 ], 00:15:58.645 "product_name": "FTL disk", 00:15:58.645 "block_size": 4096, 00:15:58.645 "num_blocks": 23592960, 00:15:58.645 "uuid": "8943dfba-ecda-4808-8e9b-139434979057", 00:15:58.645 "assigned_rate_limits": { 00:15:58.645 "rw_ios_per_sec": 0, 00:15:58.645 "rw_mbytes_per_sec": 0, 00:15:58.645 "r_mbytes_per_sec": 0, 00:15:58.645 "w_mbytes_per_sec": 0 00:15:58.645 }, 00:15:58.645 "claimed": false, 00:15:58.645 "zoned": false, 00:15:58.645 "supported_io_types": { 00:15:58.645 "read": true, 00:15:58.645 "write": true, 00:15:58.645 "unmap": true, 00:15:58.645 "write_zeroes": true, 00:15:58.645 "flush": true, 00:15:58.645 "reset": false, 00:15:58.645 "compare": false, 00:15:58.645 "compare_and_write": false, 00:15:58.645 "abort": false, 00:15:58.645 "nvme_admin": false, 00:15:58.645 "nvme_io": false 00:15:58.645 }, 00:15:58.645 "driver_specific": { 00:15:58.645 "ftl": { 00:15:58.645 "base_bdev": "83aee627-dcf1-4fe7-8aeb-272f01d115f7", 00:15:58.645 "cache": "nvc0n1p0" 00:15:58.645 } 00:15:58.645 } 00:15:58.645 } 00:15:58.645 ] 00:15:58.645 13:20:13 -- common/autotest_common.sh@905 -- # return 0 00:15:58.645 13:20:13 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:15:58.645 13:20:13 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:58.903 13:20:13 -- ftl/trim.sh@56 -- # echo ']}' 00:15:58.903 13:20:13 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:15:59.161 13:20:13 -- ftl/trim.sh@59 -- # bdev_info='[ 00:15:59.161 { 00:15:59.161 "name": "ftl0", 00:15:59.161 "aliases": [ 00:15:59.161 "8943dfba-ecda-4808-8e9b-139434979057" 00:15:59.161 ], 00:15:59.161 "product_name": "FTL disk", 00:15:59.161 "block_size": 4096, 00:15:59.161 "num_blocks": 23592960, 00:15:59.161 "uuid": "8943dfba-ecda-4808-8e9b-139434979057", 00:15:59.161 "assigned_rate_limits": { 00:15:59.161 "rw_ios_per_sec": 0, 00:15:59.161 "rw_mbytes_per_sec": 0, 00:15:59.161 "r_mbytes_per_sec": 0, 00:15:59.161 "w_mbytes_per_sec": 0 00:15:59.161 }, 00:15:59.161 "claimed": false, 00:15:59.161 "zoned": false, 00:15:59.161 "supported_io_types": { 00:15:59.161 "read": true, 00:15:59.161 "write": true, 00:15:59.161 "unmap": true, 00:15:59.161 "write_zeroes": true, 00:15:59.161 "flush": true, 00:15:59.161 "reset": false, 00:15:59.161 "compare": false, 00:15:59.161 "compare_and_write": false, 00:15:59.161 "abort": false, 00:15:59.161 "nvme_admin": false, 00:15:59.161 "nvme_io": false 00:15:59.161 }, 00:15:59.161 "driver_specific": { 00:15:59.161 "ftl": { 00:15:59.161 "base_bdev": "83aee627-dcf1-4fe7-8aeb-272f01d115f7", 00:15:59.161 "cache": "nvc0n1p0" 00:15:59.161 } 00:15:59.161 } 00:15:59.161 } 00:15:59.161 ]' 00:15:59.161 13:20:13 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:15:59.421 13:20:13 -- ftl/trim.sh@60 -- # nb=23592960
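The trim.sh@54–@56 lines above sandwich save_subsystem_config's bdev dump between literal '{"subsystems": [' and ']}' fragments, producing the JSON config that spdk_dd loads further down in this log. The same assembly as a standalone sketch (the output path is taken from the later spdk_dd invocation; the trace itself does not show the redirect):

# Sketch: snapshot the live bdev subsystem into a JSON config for spdk_dd.
{
    echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
} > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json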
00:15:59.421 13:20:13 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:59.421 [2024-12-16 13:20:13.757527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.421 [2024-12-16 13:20:13.757561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:59.421 [2024-12-16 13:20:13.757571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:59.421 [2024-12-16 13:20:13.757578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.421 [2024-12-16 13:20:13.757609] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:15:59.421 [2024-12-16 13:20:13.759623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.421 [2024-12-16 13:20:13.759652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:59.421 [2024-12-16 13:20:13.759664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.002 ms 00:15:59.421 [2024-12-16 13:20:13.759671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.421 [2024-12-16 13:20:13.760205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.421 [2024-12-16 13:20:13.760221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:59.421 [2024-12-16 13:20:13.760231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:15:59.421 [2024-12-16 13:20:13.760244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.421 [2024-12-16 13:20:13.763021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.421 [2024-12-16 13:20:13.763107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:59.421 [2024-12-16 13:20:13.763123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.744 ms 00:15:59.421 [2024-12-16 13:20:13.763129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.421 [2024-12-16 13:20:13.768369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.421 [2024-12-16 13:20:13.768391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:59.421 [2024-12-16 13:20:13.768401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.186 ms 00:15:59.421 [2024-12-16 13:20:13.768408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.421 [2024-12-16 13:20:13.786741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.421 [2024-12-16 13:20:13.786765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:59.421 [2024-12-16 13:20:13.786776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.255 ms 00:15:59.421 [2024-12-16 13:20:13.786781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.421 [2024-12-16 13:20:13.799328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.421 [2024-12-16 13:20:13.799360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:59.421 [2024-12-16 13:20:13.799371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.488 ms 00:15:59.421 [2024-12-16 13:20:13.799378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.421 [2024-12-16 13:20:13.799553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.421 [2024-12-16 13:20:13.799561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:59.421 [2024-12-16 13:20:13.799573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:15:59.421 [2024-12-16 13:20:13.799578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.421 [2024-12-16 13:20:13.817562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.421 [2024-12-16 13:20:13.817587] mngt/ftl_mngt.c: 407:trace_step:
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:59.421 [2024-12-16 13:20:13.817596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.952 ms 00:15:59.421 [2024-12-16 13:20:13.817602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.421 [2024-12-16 13:20:13.835341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.421 [2024-12-16 13:20:13.835365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:59.421 [2024-12-16 13:20:13.835374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.670 ms 00:15:59.421 [2024-12-16 13:20:13.835379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.421 [2024-12-16 13:20:13.852732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.421 [2024-12-16 13:20:13.852755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:59.421 [2024-12-16 13:20:13.852765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.304 ms 00:15:59.421 [2024-12-16 13:20:13.852770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.421 [2024-12-16 13:20:13.869893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.421 [2024-12-16 13:20:13.869988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:59.421 [2024-12-16 13:20:13.870005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.034 ms 00:15:59.421 [2024-12-16 13:20:13.870010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.421 [2024-12-16 13:20:13.870056] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:59.421 [2024-12-16 13:20:13.870067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:59.421 [2024-12-16 13:20:13.870296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870314] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 
13:20:13.870471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:15:59.422 [2024-12-16 13:20:13.870645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:59.422 [2024-12-16 13:20:13.870737] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:59.422 [2024-12-16 13:20:13.870744] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8943dfba-ecda-4808-8e9b-139434979057 00:15:59.422 [2024-12-16 13:20:13.870756] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:59.422 [2024-12-16 13:20:13.870763] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:59.422 [2024-12-16 13:20:13.870768] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:59.422 [2024-12-16 13:20:13.870775] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:59.422 [2024-12-16 13:20:13.870780] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:59.422 [2024-12-16 13:20:13.870787] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:59.422 [2024-12-16 13:20:13.870792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:59.422 [2024-12-16 13:20:13.870799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:59.422 [2024-12-16 13:20:13.870804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
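In the statistics block above, WAF (write amplification factor) is total media writes divided by user writes; with total writes: 960 and user writes: 0 the quotient is a division by zero and is printed as inf. The 960 writes are metadata and scrub traffic from this first startup/shutdown cycle alone, consistent with every band still reporting 0 / 261120 valid blocks in state free.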
00:15:59.422 [2024-12-16 13:20:13.870811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.422 [2024-12-16 13:20:13.870818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:59.422 [2024-12-16 13:20:13.870826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 00:15:59.422 [2024-12-16 13:20:13.870831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.422 [2024-12-16 13:20:13.880146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.422 [2024-12-16 13:20:13.880170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:59.422 [2024-12-16 13:20:13.880179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.282 ms 00:15:59.422 [2024-12-16 13:20:13.880184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.422 [2024-12-16 13:20:13.880376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.422 [2024-12-16 13:20:13.880383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:59.422 [2024-12-16 13:20:13.880390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:15:59.422 [2024-12-16 13:20:13.880395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.422 [2024-12-16 13:20:13.915206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:59.422 [2024-12-16 13:20:13.915234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:59.422 [2024-12-16 13:20:13.915246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:59.422 [2024-12-16 13:20:13.915253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.422 [2024-12-16 13:20:13.915328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:59.422 [2024-12-16 13:20:13.915335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:59.422 [2024-12-16 13:20:13.915343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:59.422 [2024-12-16 13:20:13.915348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.422 [2024-12-16 13:20:13.915402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:59.422 [2024-12-16 13:20:13.915409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:59.423 [2024-12-16 13:20:13.915417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:59.423 [2024-12-16 13:20:13.915422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.423 [2024-12-16 13:20:13.915447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:59.423 [2024-12-16 13:20:13.915454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:59.423 [2024-12-16 13:20:13.915462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:59.423 [2024-12-16 13:20:13.915467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.423 [2024-12-16 13:20:13.981230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:59.423 [2024-12-16 13:20:13.981265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:59.423 [2024-12-16 13:20:13.981277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:59.423 [2024-12-16 13:20:13.981284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.681 [2024-12-16 13:20:14.003471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:59.681 [2024-12-16 13:20:14.003586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:59.681 [2024-12-16 13:20:14.003601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:59.681
[2024-12-16 13:20:14.003607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.681 [2024-12-16 13:20:14.003682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:59.681 [2024-12-16 13:20:14.003691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:59.681 [2024-12-16 13:20:14.003699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:59.682 [2024-12-16 13:20:14.003704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.682 [2024-12-16 13:20:14.003764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:59.682 [2024-12-16 13:20:14.003770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:59.682 [2024-12-16 13:20:14.003782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:59.682 [2024-12-16 13:20:14.003799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.682 [2024-12-16 13:20:14.003889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:59.682 [2024-12-16 13:20:14.003898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:59.682 [2024-12-16 13:20:14.003907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:59.682 [2024-12-16 13:20:14.003912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.682 [2024-12-16 13:20:14.003952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:59.682 [2024-12-16 13:20:14.003958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:59.682 [2024-12-16 13:20:14.003968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:59.682 [2024-12-16 13:20:14.003973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.682 [2024-12-16 13:20:14.004023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:59.682 [2024-12-16 13:20:14.004031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:59.682 [2024-12-16 13:20:14.004038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:59.682 [2024-12-16 13:20:14.004043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.682 [2024-12-16 13:20:14.004097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:59.682 [2024-12-16 13:20:14.004104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:59.682 [2024-12-16 13:20:14.004113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:59.682 [2024-12-16 13:20:14.004118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.682 [2024-12-16 13:20:14.004289] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 246.724 ms, result 0 00:15:59.682 true 00:15:59.682 13:20:14 -- ftl/trim.sh@63 -- # killprocess 71694 00:15:59.682 13:20:14 -- common/autotest_common.sh@936 -- # '[' -z 71694 ']' 00:15:59.682 13:20:14 -- common/autotest_common.sh@940 -- # kill -0 71694 00:15:59.682 13:20:14 -- common/autotest_common.sh@941 -- # uname 00:15:59.682 13:20:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:59.682 13:20:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71694 00:15:59.682 killing process with pid 71694 00:15:59.682 13:20:14 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:59.682 13:20:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:59.682 13:20:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71694' 00:15:59.682 13:20:14 -- common/autotest_common.sh@955 -- # kill 71694 00:15:59.682 13:20:14 -- common/autotest_common.sh@960 -- # wait 71694 00:16:04.948 13:20:19 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:05.883 65536+0 records in 00:16:05.883 65536+0 records out 00:16:05.883 268435456 bytes (268 MB, 256 MiB) copied, 0.803797 s, 334 MB/s 00:16:05.883 13:20:20 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:05.883 [2024-12-16 13:20:20.246674] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:05.883 [2024-12-16 13:20:20.246794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71897 ] 00:16:05.883 [2024-12-16 13:20:20.394909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.142 [2024-12-16 13:20:20.532376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.401 [2024-12-16 13:20:20.736076] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:06.402 [2024-12-16 13:20:20.736127] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:06.402 [2024-12-16 13:20:20.882757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.402 [2024-12-16 13:20:20.882793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:06.402 [2024-12-16 13:20:20.882803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:06.402 [2024-12-16 13:20:20.882809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.402 [2024-12-16 13:20:20.885133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.402 [2024-12-16 13:20:20.885170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:06.402 [2024-12-16 13:20:20.885179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.312 ms 00:16:06.402 [2024-12-16 13:20:20.885185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.402 [2024-12-16 13:20:20.885260] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:06.402 [2024-12-16 13:20:20.885842] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:06.402 [2024-12-16 13:20:20.885944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.402 [2024-12-16 13:20:20.885954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:06.402 [2024-12-16 13:20:20.885961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:16:06.402 [2024-12-16 13:20:20.885966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.402 [2024-12-16 13:20:20.886977] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:06.402 [2024-12-16 13:20:20.896716] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:16:06.402 [2024-12-16 13:20:20.897948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:06.402 [2024-12-16 13:20:20.897963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.740 ms 00:16:06.402 [2024-12-16 13:20:20.897969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.402 [2024-12-16 13:20:20.898043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.402 [2024-12-16 13:20:20.898051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:06.402 [2024-12-16 13:20:20.898057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:16:06.402 [2024-12-16 13:20:20.898063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.402 [2024-12-16 13:20:20.902335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.402 [2024-12-16 13:20:20.902360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:06.402 [2024-12-16 13:20:20.902367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.242 ms 00:16:06.402 [2024-12-16 13:20:20.902376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.402 [2024-12-16 13:20:20.902450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.402 [2024-12-16 13:20:20.902458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:06.402 [2024-12-16 13:20:20.902465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:06.402 [2024-12-16 13:20:20.902470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.402 [2024-12-16 13:20:20.902487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.402 [2024-12-16 13:20:20.902492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:06.402 [2024-12-16 13:20:20.902498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:06.402 [2024-12-16 13:20:20.902503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.402 [2024-12-16 13:20:20.902527] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:06.402 [2024-12-16 13:20:20.905281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.402 [2024-12-16 13:20:20.905375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:06.402 [2024-12-16 13:20:20.905387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.763 ms 00:16:06.402 [2024-12-16 13:20:20.905396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.402 [2024-12-16 13:20:20.905427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.402 [2024-12-16 13:20:20.905433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:06.402 [2024-12-16 13:20:20.905439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:06.402 [2024-12-16 13:20:20.905444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.402 [2024-12-16 13:20:20.905458] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:06.402 [2024-12-16 13:20:20.905472] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:06.402 [2024-12-16 13:20:20.905499] 
upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:06.402 [2024-12-16 13:20:20.905512] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:06.402 [2024-12-16 13:20:20.905568] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:06.402 [2024-12-16 13:20:20.905576] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:06.402 [2024-12-16 13:20:20.905583] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:06.402 [2024-12-16 13:20:20.905591] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:06.402 [2024-12-16 13:20:20.905598] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:06.402 [2024-12-16 13:20:20.905603] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:06.402 [2024-12-16 13:20:20.905609] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:06.402 [2024-12-16 13:20:20.905614] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:06.402 [2024-12-16 13:20:20.905621] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:06.402 [2024-12-16 13:20:20.905638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.402 [2024-12-16 13:20:20.905645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:06.402 [2024-12-16 13:20:20.905651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:16:06.402 [2024-12-16 13:20:20.905656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.402 [2024-12-16 13:20:20.905707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.402 [2024-12-16 13:20:20.905713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:06.402 [2024-12-16 13:20:20.905719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:06.402 [2024-12-16 13:20:20.905724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.402 [2024-12-16 13:20:20.905780] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:06.402 [2024-12-16 13:20:20.905787] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:06.402 [2024-12-16 13:20:20.905792] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:06.402 [2024-12-16 13:20:20.905798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:06.402 [2024-12-16 13:20:20.905804] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:06.402 [2024-12-16 13:20:20.905808] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:06.402 [2024-12-16 13:20:20.905814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:06.402 [2024-12-16 13:20:20.905819] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:06.402 [2024-12-16 13:20:20.905825] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:06.402 [2024-12-16 13:20:20.905830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:06.402 [2024-12-16 13:20:20.905834] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:06.402 [2024-12-16 13:20:20.905839] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:06.402 [2024-12-16 13:20:20.905845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:06.402 [2024-12-16 13:20:20.905850] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:06.402 [2024-12-16 13:20:20.905859] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:06.402 [2024-12-16 13:20:20.905864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:06.402 [2024-12-16 13:20:20.905869] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:06.402 [2024-12-16 13:20:20.905874] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:06.402 [2024-12-16 13:20:20.905879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:06.402 [2024-12-16 13:20:20.905883] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:06.402 [2024-12-16 13:20:20.905888] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:06.402 [2024-12-16 13:20:20.905893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:06.402 [2024-12-16 13:20:20.905898] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:06.402 [2024-12-16 13:20:20.905903] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:06.402 [2024-12-16 13:20:20.905908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:06.402 [2024-12-16 13:20:20.905913] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:06.402 [2024-12-16 13:20:20.905918] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:06.402 [2024-12-16 13:20:20.905922] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:06.402 [2024-12-16 13:20:20.905927] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:06.402 [2024-12-16 13:20:20.905932] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:06.402 [2024-12-16 13:20:20.905936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:06.402 [2024-12-16 13:20:20.905941] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:06.402 [2024-12-16 13:20:20.905946] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:06.402 [2024-12-16 13:20:20.905950] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:06.402 [2024-12-16 13:20:20.905955] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:06.402 [2024-12-16 13:20:20.905960] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:06.402 [2024-12-16 13:20:20.905965] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:06.402 [2024-12-16 13:20:20.905970] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:06.402 [2024-12-16 13:20:20.905974] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:06.402 [2024-12-16 13:20:20.905979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:06.403 [2024-12-16 13:20:20.905984] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:06.403 [2024-12-16 13:20:20.905989] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region 
sb_mirror 00:16:06.403 [2024-12-16 13:20:20.905994] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:06.403 [2024-12-16 13:20:20.906001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:06.403 [2024-12-16 13:20:20.906010] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:06.403 [2024-12-16 13:20:20.906015] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:06.403 [2024-12-16 13:20:20.906020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:06.403 [2024-12-16 13:20:20.906024] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:06.403 [2024-12-16 13:20:20.906029] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:06.403 [2024-12-16 13:20:20.906034] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:06.403 [2024-12-16 13:20:20.906040] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:06.403 [2024-12-16 13:20:20.906047] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:06.403 [2024-12-16 13:20:20.906053] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:06.403 [2024-12-16 13:20:20.906059] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:06.403 [2024-12-16 13:20:20.906065] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:06.403 [2024-12-16 13:20:20.906070] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:06.403 [2024-12-16 13:20:20.906075] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:06.403 [2024-12-16 13:20:20.906080] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:06.403 [2024-12-16 13:20:20.906085] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:06.403 [2024-12-16 13:20:20.906090] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:06.403 [2024-12-16 13:20:20.906096] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:06.403 [2024-12-16 13:20:20.906101] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:06.403 [2024-12-16 13:20:20.906106] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:06.403 [2024-12-16 13:20:20.906112] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:06.403 [2024-12-16 13:20:20.906118] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:06.403 [2024-12-16 13:20:20.906123] 
upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:06.403 [2024-12-16 13:20:20.906133] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:06.403 [2024-12-16 13:20:20.906139] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:06.403 [2024-12-16 13:20:20.906144] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:06.403 [2024-12-16 13:20:20.906150] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:06.403 [2024-12-16 13:20:20.906155] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:06.403 [2024-12-16 13:20:20.906161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.403 [2024-12-16 13:20:20.906166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:06.403 [2024-12-16 13:20:20.906172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:16:06.403 [2024-12-16 13:20:20.906177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.403 [2024-12-16 13:20:20.917946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.403 [2024-12-16 13:20:20.917973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:06.403 [2024-12-16 13:20:20.917981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.736 ms 00:16:06.403 [2024-12-16 13:20:20.917987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.403 [2024-12-16 13:20:20.918074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.403 [2024-12-16 13:20:20.918082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:06.403 [2024-12-16 13:20:20.918088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:06.403 [2024-12-16 13:20:20.918094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.403 [2024-12-16 13:20:20.954640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.403 [2024-12-16 13:20:20.954750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:06.403 [2024-12-16 13:20:20.954764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.529 ms 00:16:06.403 [2024-12-16 13:20:20.954771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.403 [2024-12-16 13:20:20.954829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.403 [2024-12-16 13:20:20.954838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:06.403 [2024-12-16 13:20:20.954848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:06.403 [2024-12-16 13:20:20.954853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.403 [2024-12-16 13:20:20.955135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.403 [2024-12-16 13:20:20.955147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:06.403 [2024-12-16 13:20:20.955154] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:16:06.403 [2024-12-16 13:20:20.955159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.403 [2024-12-16 13:20:20.955252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.403 [2024-12-16 13:20:20.955258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:06.403 [2024-12-16 13:20:20.955265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:16:06.403 [2024-12-16 13:20:20.955270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.403 [2024-12-16 13:20:20.966445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.403 [2024-12-16 13:20:20.966542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:06.403 [2024-12-16 13:20:20.966554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.158 ms 00:16:06.403 [2024-12-16 13:20:20.966562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.664 [2024-12-16 13:20:20.976557] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:06.664 [2024-12-16 13:20:20.976585] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:06.664 [2024-12-16 13:20:20.976594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.664 [2024-12-16 13:20:20.976600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:06.664 [2024-12-16 13:20:20.976607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.936 ms 00:16:06.664 [2024-12-16 13:20:20.976612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.664 [2024-12-16 13:20:20.995436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.664 [2024-12-16 13:20:20.995462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:06.664 [2024-12-16 13:20:20.995474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.763 ms 00:16:06.664 [2024-12-16 13:20:20.995480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.664 [2024-12-16 13:20:21.004711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.664 [2024-12-16 13:20:21.004734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:06.664 [2024-12-16 13:20:21.004741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.179 ms 00:16:06.664 [2024-12-16 13:20:21.004751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.664 [2024-12-16 13:20:21.013465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.664 [2024-12-16 13:20:21.013488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:06.664 [2024-12-16 13:20:21.013495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.674 ms 00:16:06.664 [2024-12-16 13:20:21.013501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.664 [2024-12-16 13:20:21.013788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.664 [2024-12-16 13:20:21.013798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:06.664 [2024-12-16 13:20:21.013804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:16:06.664 
[2024-12-16 13:20:21.013809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.664 [2024-12-16 13:20:21.060043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.664 [2024-12-16 13:20:21.060072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:06.664 [2024-12-16 13:20:21.060081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.217 ms 00:16:06.664 [2024-12-16 13:20:21.060087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.664 [2024-12-16 13:20:21.067900] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:06.664 [2024-12-16 13:20:21.079165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.664 [2024-12-16 13:20:21.079281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:06.664 [2024-12-16 13:20:21.079293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.018 ms 00:16:06.664 [2024-12-16 13:20:21.079300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.664 [2024-12-16 13:20:21.079352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.664 [2024-12-16 13:20:21.079360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:06.664 [2024-12-16 13:20:21.079366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:06.664 [2024-12-16 13:20:21.079375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.664 [2024-12-16 13:20:21.079410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.664 [2024-12-16 13:20:21.079419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:06.664 [2024-12-16 13:20:21.079425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:16:06.664 [2024-12-16 13:20:21.079431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.664 [2024-12-16 13:20:21.080366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.665 [2024-12-16 13:20:21.080382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:06.665 [2024-12-16 13:20:21.080389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.918 ms 00:16:06.665 [2024-12-16 13:20:21.080395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.665 [2024-12-16 13:20:21.080417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.665 [2024-12-16 13:20:21.080424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:06.665 [2024-12-16 13:20:21.080432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:06.665 [2024-12-16 13:20:21.080439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.665 [2024-12-16 13:20:21.080462] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:06.665 [2024-12-16 13:20:21.080469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.665 [2024-12-16 13:20:21.080474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:06.665 [2024-12-16 13:20:21.080480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:06.665 [2024-12-16 13:20:21.080486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.665 [2024-12-16 
13:20:21.098489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.665 [2024-12-16 13:20:21.098588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:06.665 [2024-12-16 13:20:21.098600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.989 ms 00:16:06.665 [2024-12-16 13:20:21.098607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.665 [2024-12-16 13:20:21.098680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.665 [2024-12-16 13:20:21.098688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:06.665 [2024-12-16 13:20:21.098695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:06.665 [2024-12-16 13:20:21.098700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.665 [2024-12-16 13:20:21.099354] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:06.665 [2024-12-16 13:20:21.101773] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 216.382 ms, result 0 00:16:06.665 [2024-12-16 13:20:21.102367] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:06.665 [2024-12-16 13:20:21.117317] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:07.651  [2024-12-16T13:20:23.182Z] Copying: 18/256 [MB] (18 MBps) [2024-12-16T13:20:24.124Z] Copying: 46/256 [MB] (27 MBps) [2024-12-16T13:20:25.507Z] Copying: 64/256 [MB] (18 MBps) [2024-12-16T13:20:26.448Z] Copying: 80/256 [MB] (15 MBps) [2024-12-16T13:20:27.387Z] Copying: 93/256 [MB] (12 MBps) [2024-12-16T13:20:28.328Z] Copying: 109/256 [MB] (16 MBps) [2024-12-16T13:20:29.267Z] Copying: 134/256 [MB] (25 MBps) [2024-12-16T13:20:30.210Z] Copying: 170/256 [MB] (36 MBps) [2024-12-16T13:20:31.154Z] Copying: 195/256 [MB] (24 MBps) [2024-12-16T13:20:32.541Z] Copying: 218/256 [MB] (23 MBps) [2024-12-16T13:20:33.486Z] Copying: 237/256 [MB] (19 MBps) [2024-12-16T13:20:33.748Z] Copying: 248/256 [MB] (10 MBps) [2024-12-16T13:20:33.748Z] Copying: 256/256 [MB] (average 20 MBps)[2024-12-16 13:20:33.601895] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:19.174 [2024-12-16 13:20:33.609805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.174 [2024-12-16 13:20:33.609835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:19.174 [2024-12-16 13:20:33.609854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:19.174 [2024-12-16 13:20:33.609860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.174 [2024-12-16 13:20:33.609880] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:19.174 [2024-12-16 13:20:33.612161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.174 [2024-12-16 13:20:33.612181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:19.174 [2024-12-16 13:20:33.612190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.269 ms 00:16:19.174 [2024-12-16 13:20:33.612196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.174 [2024-12-16 13:20:33.614764] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.174 [2024-12-16 13:20:33.614787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:19.174 [2024-12-16 13:20:33.614796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.547 ms 00:16:19.174 [2024-12-16 13:20:33.614801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.174 [2024-12-16 13:20:33.620765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.174 [2024-12-16 13:20:33.620789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:19.174 [2024-12-16 13:20:33.620798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.947 ms 00:16:19.174 [2024-12-16 13:20:33.620805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.174 [2024-12-16 13:20:33.626099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.174 [2024-12-16 13:20:33.626120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:19.174 [2024-12-16 13:20:33.626128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.248 ms 00:16:19.174 [2024-12-16 13:20:33.626135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.174 [2024-12-16 13:20:33.644885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.174 [2024-12-16 13:20:33.644908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:19.174 [2024-12-16 13:20:33.644916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.710 ms 00:16:19.174 [2024-12-16 13:20:33.644922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.174 [2024-12-16 13:20:33.657470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.174 [2024-12-16 13:20:33.657497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:19.174 [2024-12-16 13:20:33.657505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.512 ms 00:16:19.174 [2024-12-16 13:20:33.657512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.174 [2024-12-16 13:20:33.657618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.174 [2024-12-16 13:20:33.657634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:19.174 [2024-12-16 13:20:33.657643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:16:19.174 [2024-12-16 13:20:33.657648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.174 [2024-12-16 13:20:33.676393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.174 [2024-12-16 13:20:33.676413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:19.174 [2024-12-16 13:20:33.676421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.731 ms 00:16:19.174 [2024-12-16 13:20:33.676427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.174 [2024-12-16 13:20:33.694892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.174 [2024-12-16 13:20:33.694914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:19.174 [2024-12-16 13:20:33.694922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.430 ms 00:16:19.174 [2024-12-16 13:20:33.694927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:19.174 [2024-12-16 13:20:33.712985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.174 [2024-12-16 13:20:33.713014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:19.174 [2024-12-16 13:20:33.713021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.023 ms 00:16:19.174 [2024-12-16 13:20:33.713027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.174 [2024-12-16 13:20:33.730662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.174 [2024-12-16 13:20:33.730683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:19.174 [2024-12-16 13:20:33.730690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.577 ms 00:16:19.174 [2024-12-16 13:20:33.730696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.174 [2024-12-16 13:20:33.730730] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:19.174 [2024-12-16 13:20:33.730742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:19.174 [2024-12-16 13:20:33.730750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:19.174 [2024-12-16 13:20:33.730756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:19.174 [2024-12-16 13:20:33.730762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:19.174 [2024-12-16 13:20:33.730768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:19.174 [2024-12-16 13:20:33.730773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:19.174 [2024-12-16 13:20:33.730779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 
261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.730999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731132] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 
13:20:33.731271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:19.175 [2024-12-16 13:20:33.731314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:19.176 [2024-12-16 13:20:33.731319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:19.176 [2024-12-16 13:20:33.731332] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:19.176 [2024-12-16 13:20:33.731338] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8943dfba-ecda-4808-8e9b-139434979057 00:16:19.176 [2024-12-16 13:20:33.731344] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:19.176 [2024-12-16 13:20:33.731350] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:19.176 [2024-12-16 13:20:33.731355] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:19.176 [2024-12-16 13:20:33.731361] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:19.176 [2024-12-16 13:20:33.731366] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:19.176 [2024-12-16 13:20:33.731372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:19.176 [2024-12-16 13:20:33.731379] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:19.176 [2024-12-16 13:20:33.731384] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:19.176 [2024-12-16 13:20:33.731389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:19.176 [2024-12-16 13:20:33.731394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.176 [2024-12-16 13:20:33.731400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:19.176 [2024-12-16 13:20:33.731407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:16:19.176 [2024-12-16 13:20:33.731412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.176 [2024-12-16 13:20:33.741673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.176 [2024-12-16 13:20:33.741693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:19.176 [2024-12-16 13:20:33.741701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.247 ms 00:16:19.176 [2024-12-16 13:20:33.741710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.176 [2024-12-16 13:20:33.741885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.176 [2024-12-16 13:20:33.741893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:19.176 [2024-12-16 
13:20:33.741900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:16:19.176 [2024-12-16 13:20:33.741905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.437 [2024-12-16 13:20:33.773220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.437 [2024-12-16 13:20:33.773244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:19.437 [2024-12-16 13:20:33.773251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.437 [2024-12-16 13:20:33.773261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.437 [2024-12-16 13:20:33.773327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.437 [2024-12-16 13:20:33.773334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:19.437 [2024-12-16 13:20:33.773341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.437 [2024-12-16 13:20:33.773346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.437 [2024-12-16 13:20:33.773379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.437 [2024-12-16 13:20:33.773386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:19.437 [2024-12-16 13:20:33.773393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.437 [2024-12-16 13:20:33.773398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.437 [2024-12-16 13:20:33.773416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.437 [2024-12-16 13:20:33.773422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:19.437 [2024-12-16 13:20:33.773427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.437 [2024-12-16 13:20:33.773433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.437 [2024-12-16 13:20:33.832973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.437 [2024-12-16 13:20:33.833002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:19.437 [2024-12-16 13:20:33.833010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.437 [2024-12-16 13:20:33.833019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.437 [2024-12-16 13:20:33.856591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.437 [2024-12-16 13:20:33.856615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:19.437 [2024-12-16 13:20:33.856623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.437 [2024-12-16 13:20:33.856642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.437 [2024-12-16 13:20:33.856685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.437 [2024-12-16 13:20:33.856693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:19.437 [2024-12-16 13:20:33.856700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.437 [2024-12-16 13:20:33.856706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.437 [2024-12-16 13:20:33.856730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.437 [2024-12-16 13:20:33.856740] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:19.437 [2024-12-16 13:20:33.856746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.437 [2024-12-16 13:20:33.856752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.437 [2024-12-16 13:20:33.856825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.437 [2024-12-16 13:20:33.856833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:19.437 [2024-12-16 13:20:33.856840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.437 [2024-12-16 13:20:33.856846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.437 [2024-12-16 13:20:33.856871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.437 [2024-12-16 13:20:33.856881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:19.437 [2024-12-16 13:20:33.856887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.437 [2024-12-16 13:20:33.856893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.437 [2024-12-16 13:20:33.856929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.437 [2024-12-16 13:20:33.856935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:19.437 [2024-12-16 13:20:33.856942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.437 [2024-12-16 13:20:33.856948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.437 [2024-12-16 13:20:33.856989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.437 [2024-12-16 13:20:33.856998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:19.437 [2024-12-16 13:20:33.857007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.437 [2024-12-16 13:20:33.857013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.437 [2024-12-16 13:20:33.857139] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 247.333 ms, result 0 00:16:20.009 00:16:20.009 00:16:20.009 13:20:34 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:20.009 13:20:34 -- ftl/trim.sh@72 -- # svcpid=72051 00:16:20.009 13:20:34 -- ftl/trim.sh@73 -- # waitforlisten 72051 00:16:20.009 13:20:34 -- common/autotest_common.sh@829 -- # '[' -z 72051 ']' 00:16:20.009 13:20:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.010 13:20:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.010 13:20:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.010 13:20:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.010 13:20:34 -- common/autotest_common.sh@10 -- # set +x 00:16:20.270 [2024-12-16 13:20:34.631722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:20.270 [2024-12-16 13:20:34.631836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72051 ] 00:16:20.270 [2024-12-16 13:20:34.782498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.531 [2024-12-16 13:20:34.969597] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:20.531 [2024-12-16 13:20:34.969788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.913 13:20:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:21.913 13:20:36 -- common/autotest_common.sh@862 -- # return 0 00:16:21.913 13:20:36 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:21.913 [2024-12-16 13:20:36.327726] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:21.913 [2024-12-16 13:20:36.327774] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:22.175 [2024-12-16 13:20:36.492686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.175 [2024-12-16 13:20:36.492718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:22.175 [2024-12-16 13:20:36.492732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:22.175 [2024-12-16 13:20:36.492739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.175 [2024-12-16 13:20:36.494920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.175 [2024-12-16 13:20:36.494945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:22.175 [2024-12-16 13:20:36.494954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.165 ms 00:16:22.175 [2024-12-16 13:20:36.494960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.175 [2024-12-16 13:20:36.495022] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:22.175 [2024-12-16 13:20:36.495583] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:22.175 [2024-12-16 13:20:36.495602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.175 [2024-12-16 13:20:36.495609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:22.175 [2024-12-16 13:20:36.495618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:16:22.175 [2024-12-16 13:20:36.495624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.175 [2024-12-16 13:20:36.497362] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:22.175 [2024-12-16 13:20:36.508029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.175 [2024-12-16 13:20:36.508058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:22.175 [2024-12-16 13:20:36.508069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.672 ms 00:16:22.175 [2024-12-16 13:20:36.508077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.175 [2024-12-16 13:20:36.508141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.175 [2024-12-16 13:20:36.508151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:16:22.175 [2024-12-16 13:20:36.508159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:22.175 [2024-12-16 13:20:36.508166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.175 [2024-12-16 13:20:36.514361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.175 [2024-12-16 13:20:36.514386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:22.175 [2024-12-16 13:20:36.514394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.155 ms 00:16:22.175 [2024-12-16 13:20:36.514401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.175 [2024-12-16 13:20:36.514474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.175 [2024-12-16 13:20:36.514484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:22.175 [2024-12-16 13:20:36.514490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:16:22.175 [2024-12-16 13:20:36.514498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.175 [2024-12-16 13:20:36.514520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.175 [2024-12-16 13:20:36.514528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:22.175 [2024-12-16 13:20:36.514535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:22.175 [2024-12-16 13:20:36.514543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.175 [2024-12-16 13:20:36.514567] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:22.175 [2024-12-16 13:20:36.517753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.175 [2024-12-16 13:20:36.517780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:22.176 [2024-12-16 13:20:36.517789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.193 ms 00:16:22.176 [2024-12-16 13:20:36.517795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.176 [2024-12-16 13:20:36.517829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.176 [2024-12-16 13:20:36.517835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:22.176 [2024-12-16 13:20:36.517843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:22.176 [2024-12-16 13:20:36.517851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.176 [2024-12-16 13:20:36.517869] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:22.176 [2024-12-16 13:20:36.517885] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:22.176 [2024-12-16 13:20:36.517914] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:22.176 [2024-12-16 13:20:36.517927] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:22.176 [2024-12-16 13:20:36.517988] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:22.176 [2024-12-16 13:20:36.517996] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:16:22.176 [2024-12-16 13:20:36.518008] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:22.176 [2024-12-16 13:20:36.518016] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:22.176 [2024-12-16 13:20:36.518024] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:22.176 [2024-12-16 13:20:36.518030] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:22.176 [2024-12-16 13:20:36.518037] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:22.176 [2024-12-16 13:20:36.518043] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:22.176 [2024-12-16 13:20:36.518052] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:22.176 [2024-12-16 13:20:36.518058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.176 [2024-12-16 13:20:36.518065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:22.176 [2024-12-16 13:20:36.518071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:16:22.176 [2024-12-16 13:20:36.518079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.176 [2024-12-16 13:20:36.518130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.176 [2024-12-16 13:20:36.518137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:22.176 [2024-12-16 13:20:36.518143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:22.176 [2024-12-16 13:20:36.518150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.176 [2024-12-16 13:20:36.518215] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:22.176 [2024-12-16 13:20:36.518230] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:22.176 [2024-12-16 13:20:36.518236] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:22.176 [2024-12-16 13:20:36.518244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:22.176 [2024-12-16 13:20:36.518251] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:22.176 [2024-12-16 13:20:36.518261] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:22.176 [2024-12-16 13:20:36.518267] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:22.176 [2024-12-16 13:20:36.518276] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:22.176 [2024-12-16 13:20:36.518282] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:22.176 [2024-12-16 13:20:36.518288] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:22.176 [2024-12-16 13:20:36.518294] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:22.176 [2024-12-16 13:20:36.518301] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:22.176 [2024-12-16 13:20:36.518306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:22.176 [2024-12-16 13:20:36.518312] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:22.176 [2024-12-16 13:20:36.518317] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:22.176 [2024-12-16 13:20:36.518323] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:22.176 [2024-12-16 13:20:36.518328] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:22.176 [2024-12-16 13:20:36.518334] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:22.176 [2024-12-16 13:20:36.518339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:22.176 [2024-12-16 13:20:36.518345] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:22.176 [2024-12-16 13:20:36.518350] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:22.176 [2024-12-16 13:20:36.518357] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:22.176 [2024-12-16 13:20:36.518362] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:22.176 [2024-12-16 13:20:36.518370] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:22.176 [2024-12-16 13:20:36.518375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:22.176 [2024-12-16 13:20:36.518386] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:22.176 [2024-12-16 13:20:36.518391] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:22.176 [2024-12-16 13:20:36.518398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:22.176 [2024-12-16 13:20:36.518403] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:22.176 [2024-12-16 13:20:36.518409] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:22.176 [2024-12-16 13:20:36.518414] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:22.176 [2024-12-16 13:20:36.518422] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:22.176 [2024-12-16 13:20:36.518426] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:22.176 [2024-12-16 13:20:36.518432] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:22.176 [2024-12-16 13:20:36.518437] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:22.176 [2024-12-16 13:20:36.518444] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:22.176 [2024-12-16 13:20:36.518449] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:22.176 [2024-12-16 13:20:36.518456] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:22.176 [2024-12-16 13:20:36.518462] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:22.176 [2024-12-16 13:20:36.518470] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:22.176 [2024-12-16 13:20:36.518476] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:22.176 [2024-12-16 13:20:36.518485] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:22.176 [2024-12-16 13:20:36.518491] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:22.176 [2024-12-16 13:20:36.518498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:22.176 [2024-12-16 13:20:36.518503] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:22.176 [2024-12-16 13:20:36.518510] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:22.176 [2024-12-16 13:20:36.518515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
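Each region in the dump above is listed again in the superblock metadata dump that follows, as hex block offsets and sizes (blk_offs/blk_sz). Assuming one FTL block is 4 KiB — inferred from these numbers; the log never states the block size — the two views agree, e.g. for l2p (type:0x2) and data_nvc (type:0x8):

echo "l2p:      $((0x5a00   * 4 / 1024)) MiB"    # 23040 blocks   -> 90 MiB
echo "data_nvc: $((0x100000 * 4 / 1024)) MiB"    # 1048576 blocks -> 4096 MiB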
00:16:22.176 [2024-12-16 13:20:36.518522] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:22.176 [2024-12-16 13:20:36.518527] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:22.176 [2024-12-16 13:20:36.518534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:22.176 [2024-12-16 13:20:36.518540] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:22.176 [2024-12-16 13:20:36.518548] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:22.176 [2024-12-16 13:20:36.518555] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:22.176 [2024-12-16 13:20:36.518562] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:22.176 [2024-12-16 13:20:36.518568] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:22.176 [2024-12-16 13:20:36.518578] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:22.176 [2024-12-16 13:20:36.518583] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:22.176 [2024-12-16 13:20:36.518590] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:22.176 [2024-12-16 13:20:36.518595] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:22.176 [2024-12-16 13:20:36.518602] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:22.176 [2024-12-16 13:20:36.518608] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:22.176 [2024-12-16 13:20:36.518616] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:22.176 [2024-12-16 13:20:36.518621] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:22.176 [2024-12-16 13:20:36.518638] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:22.176 [2024-12-16 13:20:36.518645] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:22.176 [2024-12-16 13:20:36.518651] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:22.176 [2024-12-16 13:20:36.518658] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:22.176 [2024-12-16 13:20:36.518666] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:22.176 [2024-12-16 13:20:36.518672] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:22.176 [2024-12-16 13:20:36.518680] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:22.176 [2024-12-16 13:20:36.518686] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:22.176 [2024-12-16 13:20:36.518695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.176 [2024-12-16 13:20:36.518703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:22.176 [2024-12-16 13:20:36.518710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:16:22.177 [2024-12-16 13:20:36.518716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.177 [2024-12-16 13:20:36.532600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.177 [2024-12-16 13:20:36.532648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:22.177 [2024-12-16 13:20:36.532662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.834 ms 00:16:22.177 [2024-12-16 13:20:36.532670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.177 [2024-12-16 13:20:36.532762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.177 [2024-12-16 13:20:36.532771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:22.177 [2024-12-16 13:20:36.532779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:22.177 [2024-12-16 13:20:36.532786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.177 [2024-12-16 13:20:36.559646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.177 [2024-12-16 13:20:36.559668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:22.177 [2024-12-16 13:20:36.559678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.842 ms 00:16:22.177 [2024-12-16 13:20:36.559685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.177 [2024-12-16 13:20:36.559733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.177 [2024-12-16 13:20:36.559742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:22.177 [2024-12-16 13:20:36.559750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:22.177 [2024-12-16 13:20:36.559756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.177 [2024-12-16 13:20:36.560136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.177 [2024-12-16 13:20:36.560155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:22.177 [2024-12-16 13:20:36.560165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:16:22.177 [2024-12-16 13:20:36.560172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.177 [2024-12-16 13:20:36.560273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.177 [2024-12-16 13:20:36.560279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:22.177 [2024-12-16 13:20:36.560289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:16:22.177 [2024-12-16 13:20:36.560295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:22.177 [2024-12-16 13:20:36.574077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.177 [2024-12-16 13:20:36.574097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:22.177 [2024-12-16 13:20:36.574107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.765 ms 00:16:22.177 [2024-12-16 13:20:36.574113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.177 [2024-12-16 13:20:36.584929] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:22.177 [2024-12-16 13:20:36.584952] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:22.177 [2024-12-16 13:20:36.584962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.177 [2024-12-16 13:20:36.584969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:22.177 [2024-12-16 13:20:36.584978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.769 ms 00:16:22.177 [2024-12-16 13:20:36.584984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.177 [2024-12-16 13:20:36.603697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.177 [2024-12-16 13:20:36.603720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:22.177 [2024-12-16 13:20:36.603730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.656 ms 00:16:22.177 [2024-12-16 13:20:36.603737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.177 [2024-12-16 13:20:36.613351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.177 [2024-12-16 13:20:36.613377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:22.177 [2024-12-16 13:20:36.613386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.558 ms 00:16:22.177 [2024-12-16 13:20:36.613391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.177 [2024-12-16 13:20:36.622556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.177 [2024-12-16 13:20:36.622585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:22.177 [2024-12-16 13:20:36.622598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.121 ms 00:16:22.177 [2024-12-16 13:20:36.622604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.177 [2024-12-16 13:20:36.622888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.177 [2024-12-16 13:20:36.622898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:22.177 [2024-12-16 13:20:36.622909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:16:22.177 [2024-12-16 13:20:36.622914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.177 [2024-12-16 13:20:36.671719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.177 [2024-12-16 13:20:36.671747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:22.177 [2024-12-16 13:20:36.671760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.786 ms 00:16:22.177 [2024-12-16 13:20:36.671767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.177 [2024-12-16 
13:20:36.679785] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:22.177 [2024-12-16 13:20:36.694398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.177 [2024-12-16 13:20:36.694425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:22.177 [2024-12-16 13:20:36.694434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.566 ms 00:16:22.177 [2024-12-16 13:20:36.694442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.177 [2024-12-16 13:20:36.694501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.177 [2024-12-16 13:20:36.694512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:22.177 [2024-12-16 13:20:36.694519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:22.177 [2024-12-16 13:20:36.694529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.177 [2024-12-16 13:20:36.694573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.177 [2024-12-16 13:20:36.694581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:22.177 [2024-12-16 13:20:36.694588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:22.177 [2024-12-16 13:20:36.694595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.177 [2024-12-16 13:20:36.695619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.177 [2024-12-16 13:20:36.695660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:22.177 [2024-12-16 13:20:36.695667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:16:22.177 [2024-12-16 13:20:36.695674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.177 [2024-12-16 13:20:36.695704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.177 [2024-12-16 13:20:36.695714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:22.177 [2024-12-16 13:20:36.695720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:22.177 [2024-12-16 13:20:36.695727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.177 [2024-12-16 13:20:36.695760] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:22.177 [2024-12-16 13:20:36.695771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.177 [2024-12-16 13:20:36.695777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:22.177 [2024-12-16 13:20:36.695784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:22.177 [2024-12-16 13:20:36.695791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.177 [2024-12-16 13:20:36.715115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.177 [2024-12-16 13:20:36.715138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:22.177 [2024-12-16 13:20:36.715148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.303 ms 00:16:22.177 [2024-12-16 13:20:36.715155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.177 [2024-12-16 13:20:36.715228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.177 [2024-12-16 13:20:36.715236] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:22.177 [2024-12-16 13:20:36.715244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:22.177 [2024-12-16 13:20:36.715252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.177 [2024-12-16 13:20:36.716301] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:22.177 [2024-12-16 13:20:36.718814] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 223.365 ms, result 0 00:16:22.177 [2024-12-16 13:20:36.720610] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:22.177 Some configs were skipped because the RPC state that can call them passed over. 00:16:22.438 13:20:36 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:22.438 [2024-12-16 13:20:36.958418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.438 [2024-12-16 13:20:36.958452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:22.438 [2024-12-16 13:20:36.958461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.060 ms 00:16:22.438 [2024-12-16 13:20:36.958469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.438 [2024-12-16 13:20:36.958497] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 19.138 ms, result 0 00:16:22.438 true 00:16:22.438 13:20:36 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:22.699 [2024-12-16 13:20:37.165428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.699 [2024-12-16 13:20:37.165456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:22.699 [2024-12-16 13:20:37.165466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.751 ms 00:16:22.699 [2024-12-16 13:20:37.165472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.699 [2024-12-16 13:20:37.165502] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.825 ms, result 0 00:16:22.699 true 00:16:22.699 13:20:37 -- ftl/trim.sh@81 -- # killprocess 72051 00:16:22.699 13:20:37 -- common/autotest_common.sh@936 -- # '[' -z 72051 ']' 00:16:22.699 13:20:37 -- common/autotest_common.sh@940 -- # kill -0 72051 00:16:22.699 13:20:37 -- common/autotest_common.sh@941 -- # uname 00:16:22.699 13:20:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:22.699 13:20:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72051 00:16:22.699 killing process with pid 72051 00:16:22.699 13:20:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:22.699 13:20:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:22.699 13:20:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72051' 00:16:22.699 13:20:37 -- common/autotest_common.sh@955 -- # kill 72051 00:16:22.699 13:20:37 -- common/autotest_common.sh@960 -- # wait 72051 00:16:23.272 [2024-12-16 13:20:37.781959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.272 [2024-12-16 13:20:37.782010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit 
core IO channel 00:16:23.272 [2024-12-16 13:20:37.782021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:23.272 [2024-12-16 13:20:37.782031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.272 [2024-12-16 13:20:37.782049] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:23.272 [2024-12-16 13:20:37.784063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.272 [2024-12-16 13:20:37.784088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:23.272 [2024-12-16 13:20:37.784101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.999 ms 00:16:23.272 [2024-12-16 13:20:37.784107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.272 [2024-12-16 13:20:37.784344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.272 [2024-12-16 13:20:37.784353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:23.272 [2024-12-16 13:20:37.784361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:16:23.272 [2024-12-16 13:20:37.784367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.272 [2024-12-16 13:20:37.787990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.272 [2024-12-16 13:20:37.788018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:23.272 [2024-12-16 13:20:37.788026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.606 ms 00:16:23.272 [2024-12-16 13:20:37.788033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.272 [2024-12-16 13:20:37.793352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.272 [2024-12-16 13:20:37.793385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:23.272 [2024-12-16 13:20:37.793395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.290 ms 00:16:23.272 [2024-12-16 13:20:37.793403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.272 [2024-12-16 13:20:37.801890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.272 [2024-12-16 13:20:37.801914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:23.272 [2024-12-16 13:20:37.801925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.427 ms 00:16:23.272 [2024-12-16 13:20:37.801931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.272 [2024-12-16 13:20:37.809204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.272 [2024-12-16 13:20:37.809232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:23.272 [2024-12-16 13:20:37.809242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.239 ms 00:16:23.272 [2024-12-16 13:20:37.809248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.272 [2024-12-16 13:20:37.809361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.272 [2024-12-16 13:20:37.809368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:23.272 [2024-12-16 13:20:37.809378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:16:23.272 [2024-12-16 13:20:37.809384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
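Taken together with the startup dump's "L2P entries: 23592960", the two bdev_ftl_unmap calls above cover exactly the first and the last 1024 blocks of the device; the arithmetic below assumes the same 4 KiB FTL block size inferred earlier:

echo $((23592960 - 1024))          # -> 23591936, the --lba of the second call
echo "$((1024 * 4 / 1024)) MiB"    # -> 4 MiB trimmed per call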
00:16:23.272 [2024-12-16 13:20:37.818070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.272 [2024-12-16 13:20:37.818095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:23.272 [2024-12-16 13:20:37.818103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.668 ms 00:16:23.272 [2024-12-16 13:20:37.818109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.272 [2024-12-16 13:20:37.826285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.272 [2024-12-16 13:20:37.826310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:23.272 [2024-12-16 13:20:37.826322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.145 ms 00:16:23.272 [2024-12-16 13:20:37.826327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.272 [2024-12-16 13:20:37.833986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.272 [2024-12-16 13:20:37.834010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:23.272 [2024-12-16 13:20:37.834019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.621 ms 00:16:23.272 [2024-12-16 13:20:37.834024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.272 [2024-12-16 13:20:37.841945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.272 [2024-12-16 13:20:37.841969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:23.272 [2024-12-16 13:20:37.841977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.868 ms 00:16:23.272 [2024-12-16 13:20:37.841983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.272 [2024-12-16 13:20:37.842011] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:23.273 [2024-12-16 13:20:37.842024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842102] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 
13:20:37.842270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:16:23.273 [2024-12-16 13:20:37.842436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:23.273 [2024-12-16 13:20:37.842622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:23.274 [2024-12-16 13:20:37.842637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:23.274 [2024-12-16 13:20:37.842643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:23.274 [2024-12-16 13:20:37.842652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:23.274 [2024-12-16 13:20:37.842659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:23.274 [2024-12-16 13:20:37.842666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:23.274 [2024-12-16 13:20:37.842677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:23.274 [2024-12-16 13:20:37.842684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:23.274 [2024-12-16 13:20:37.842691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:23.274 [2024-12-16 13:20:37.842698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:23.274 [2024-12-16 13:20:37.842710] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:23.274 [2024-12-16 13:20:37.842720] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8943dfba-ecda-4808-8e9b-139434979057 00:16:23.274 [2024-12-16 13:20:37.842726] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:23.274 [2024-12-16 13:20:37.842734] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:23.274 [2024-12-16 13:20:37.842739] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:23.274 [2024-12-16 13:20:37.842747] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:23.274 [2024-12-16 13:20:37.842752] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:23.274 [2024-12-16 13:20:37.842760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:23.274 [2024-12-16 13:20:37.842765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:23.274 [2024-12-16 13:20:37.842771] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:23.274 [2024-12-16 13:20:37.842777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:23.274 [2024-12-16 13:20:37.842783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.274 [2024-12-16 13:20:37.842789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:23.274 [2024-12-16 13:20:37.842797] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:16:23.274 [2024-12-16 13:20:37.842804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.535 [2024-12-16 13:20:37.852937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.535 [2024-12-16 13:20:37.852962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:23.535 [2024-12-16 13:20:37.852973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.115 ms 00:16:23.535 [2024-12-16 13:20:37.852978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.535 [2024-12-16 13:20:37.853158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.535 [2024-12-16 13:20:37.853167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:23.535 [2024-12-16 13:20:37.853177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:16:23.535 [2024-12-16 13:20:37.853182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.535 [2024-12-16 13:20:37.890293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.535 [2024-12-16 13:20:37.890319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:23.535 [2024-12-16 13:20:37.890329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.535 [2024-12-16 13:20:37.890336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.535 [2024-12-16 13:20:37.890405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.535 [2024-12-16 13:20:37.890412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:23.535 [2024-12-16 13:20:37.890423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.535 [2024-12-16 13:20:37.890428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.535 [2024-12-16 13:20:37.890468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.535 [2024-12-16 13:20:37.890477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:23.535 [2024-12-16 13:20:37.890487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.535 [2024-12-16 13:20:37.890494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.535 [2024-12-16 13:20:37.890511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.535 [2024-12-16 13:20:37.890517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:23.535 [2024-12-16 13:20:37.890524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.535 [2024-12-16 13:20:37.890531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.535 [2024-12-16 13:20:37.954064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.535 [2024-12-16 13:20:37.954096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:23.535 [2024-12-16 13:20:37.954107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.535 [2024-12-16 13:20:37.954115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.535 [2024-12-16 13:20:37.977971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.535 [2024-12-16 13:20:37.978003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:16:23.535 [2024-12-16 13:20:37.978017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.535 [2024-12-16 13:20:37.978024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.535 [2024-12-16 13:20:37.978077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.535 [2024-12-16 13:20:37.978085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:23.535 [2024-12-16 13:20:37.978094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.535 [2024-12-16 13:20:37.978100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.535 [2024-12-16 13:20:37.978129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.535 [2024-12-16 13:20:37.978136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:23.535 [2024-12-16 13:20:37.978144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.535 [2024-12-16 13:20:37.978150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.535 [2024-12-16 13:20:37.978229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.535 [2024-12-16 13:20:37.978238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:23.535 [2024-12-16 13:20:37.978246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.535 [2024-12-16 13:20:37.978252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.535 [2024-12-16 13:20:37.978281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.535 [2024-12-16 13:20:37.978288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:23.535 [2024-12-16 13:20:37.978295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.535 [2024-12-16 13:20:37.978302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.535 [2024-12-16 13:20:37.978341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.535 [2024-12-16 13:20:37.978348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:23.535 [2024-12-16 13:20:37.978358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.535 [2024-12-16 13:20:37.978364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.536 [2024-12-16 13:20:37.978408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.536 [2024-12-16 13:20:37.978417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:23.536 [2024-12-16 13:20:37.978425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.536 [2024-12-16 13:20:37.978431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.536 [2024-12-16 13:20:37.978557] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 196.575 ms, result 0 00:16:24.107 13:20:38 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:24.107 13:20:38 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:24.368 [2024-12-16 13:20:38.722080] Starting SPDK v24.01.1-pre git sha1 
c13c99a5e / DPDK 23.11.0 initialization... 00:16:24.368 [2024-12-16 13:20:38.722184] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72108 ] 00:16:24.368 [2024-12-16 13:20:38.867884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.629 [2024-12-16 13:20:39.050898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.891 [2024-12-16 13:20:39.276902] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:24.891 [2024-12-16 13:20:39.276958] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:24.891 [2024-12-16 13:20:39.423719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.891 [2024-12-16 13:20:39.423766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:24.891 [2024-12-16 13:20:39.423779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:24.891 [2024-12-16 13:20:39.423786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.891 [2024-12-16 13:20:39.426480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.891 [2024-12-16 13:20:39.426520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:24.891 [2024-12-16 13:20:39.426530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.676 ms 00:16:24.891 [2024-12-16 13:20:39.426537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.891 [2024-12-16 13:20:39.426608] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:24.891 [2024-12-16 13:20:39.427338] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:24.891 [2024-12-16 13:20:39.427362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.891 [2024-12-16 13:20:39.427370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:24.891 [2024-12-16 13:20:39.427379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:16:24.891 [2024-12-16 13:20:39.427386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.891 [2024-12-16 13:20:39.428823] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:24.891 [2024-12-16 13:20:39.442039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.891 [2024-12-16 13:20:39.442073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:24.891 [2024-12-16 13:20:39.442085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.217 ms 00:16:24.891 [2024-12-16 13:20:39.442092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.891 [2024-12-16 13:20:39.442175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.891 [2024-12-16 13:20:39.442186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:24.891 [2024-12-16 13:20:39.442194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:24.891 [2024-12-16 13:20:39.442201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.891 [2024-12-16 13:20:39.448835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:24.891 [2024-12-16 13:20:39.448864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:24.891 [2024-12-16 13:20:39.448873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.591 ms 00:16:24.891 [2024-12-16 13:20:39.448884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.891 [2024-12-16 13:20:39.448978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.891 [2024-12-16 13:20:39.448987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:24.891 [2024-12-16 13:20:39.448996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:24.891 [2024-12-16 13:20:39.449003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.891 [2024-12-16 13:20:39.449027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.891 [2024-12-16 13:20:39.449036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:24.891 [2024-12-16 13:20:39.449044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:24.891 [2024-12-16 13:20:39.449051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.891 [2024-12-16 13:20:39.449078] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:24.891 [2024-12-16 13:20:39.452974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.891 [2024-12-16 13:20:39.453002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:24.891 [2024-12-16 13:20:39.453011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.908 ms 00:16:24.891 [2024-12-16 13:20:39.453021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.891 [2024-12-16 13:20:39.453081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.891 [2024-12-16 13:20:39.453091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:24.891 [2024-12-16 13:20:39.453099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:24.891 [2024-12-16 13:20:39.453106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.891 [2024-12-16 13:20:39.453124] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:24.891 [2024-12-16 13:20:39.453144] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:24.891 [2024-12-16 13:20:39.453178] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:24.891 [2024-12-16 13:20:39.453196] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:24.891 [2024-12-16 13:20:39.453273] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:24.891 [2024-12-16 13:20:39.453283] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:24.891 [2024-12-16 13:20:39.453293] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:24.891 [2024-12-16 13:20:39.453303] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:24.891 [2024-12-16 13:20:39.453311] 
ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:24.891 [2024-12-16 13:20:39.453319] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:24.891 [2024-12-16 13:20:39.453326] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:24.891 [2024-12-16 13:20:39.453334] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:24.891 [2024-12-16 13:20:39.453344] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:24.891 [2024-12-16 13:20:39.453351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.891 [2024-12-16 13:20:39.453359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:24.891 [2024-12-16 13:20:39.453367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:16:24.891 [2024-12-16 13:20:39.453374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.891 [2024-12-16 13:20:39.453439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.891 [2024-12-16 13:20:39.453448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:24.891 [2024-12-16 13:20:39.453455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:24.891 [2024-12-16 13:20:39.453464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.891 [2024-12-16 13:20:39.453540] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:24.891 [2024-12-16 13:20:39.453557] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:24.891 [2024-12-16 13:20:39.453565] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:24.891 [2024-12-16 13:20:39.453574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.891 [2024-12-16 13:20:39.453582] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:24.891 [2024-12-16 13:20:39.453589] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:24.892 [2024-12-16 13:20:39.453596] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:24.892 [2024-12-16 13:20:39.453603] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:24.892 [2024-12-16 13:20:39.453610] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:24.892 [2024-12-16 13:20:39.453616] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:24.892 [2024-12-16 13:20:39.453624] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:24.892 [2024-12-16 13:20:39.453642] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:24.892 [2024-12-16 13:20:39.453650] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:24.892 [2024-12-16 13:20:39.453657] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:24.892 [2024-12-16 13:20:39.453670] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:24.892 [2024-12-16 13:20:39.453677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.892 [2024-12-16 13:20:39.453684] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:24.892 [2024-12-16 13:20:39.453691] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:24.892 [2024-12-16 13:20:39.453697] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.892 [2024-12-16 13:20:39.453704] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:24.892 [2024-12-16 13:20:39.453711] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:24.892 [2024-12-16 13:20:39.453717] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:24.892 [2024-12-16 13:20:39.453724] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:24.892 [2024-12-16 13:20:39.453731] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:24.892 [2024-12-16 13:20:39.453737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:24.892 [2024-12-16 13:20:39.453744] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:24.892 [2024-12-16 13:20:39.453751] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:24.892 [2024-12-16 13:20:39.453757] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:24.892 [2024-12-16 13:20:39.453763] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:24.892 [2024-12-16 13:20:39.453769] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:24.892 [2024-12-16 13:20:39.453776] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:24.892 [2024-12-16 13:20:39.453782] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:24.892 [2024-12-16 13:20:39.453789] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:24.892 [2024-12-16 13:20:39.453796] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:24.892 [2024-12-16 13:20:39.453803] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:24.892 [2024-12-16 13:20:39.453809] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:24.892 [2024-12-16 13:20:39.453815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:24.892 [2024-12-16 13:20:39.453822] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:24.892 [2024-12-16 13:20:39.453828] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:24.892 [2024-12-16 13:20:39.453835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:24.892 [2024-12-16 13:20:39.453841] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:24.892 [2024-12-16 13:20:39.453849] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:24.892 [2024-12-16 13:20:39.453856] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:24.892 [2024-12-16 13:20:39.453865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.892 [2024-12-16 13:20:39.453874] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:24.892 [2024-12-16 13:20:39.453881] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:24.892 [2024-12-16 13:20:39.453888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:24.892 [2024-12-16 13:20:39.453895] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:24.892 [2024-12-16 13:20:39.453902] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:24.892 [2024-12-16 13:20:39.453908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:24.892 
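The layout dumps above emit three dump_region records per region (name, offset, blocks, all in MiB). A minimal sketch for folding them into one row per region, assuming the console output was saved to a file named build.log (the filename is an assumption):

  # Pull the three dump_region fields and fold them into aligned rows.
  grep 'dump_region' build.log \
    | grep -oE 'Region [A-Za-z0-9_]+|offset: [0-9.]+ MiB|blocks: [0-9.]+ MiB' \
    | paste - - - \
    | column -t

Each paste step consumes exactly one name/offset/blocks triple, so the output reads as a compact region table.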
[2024-12-16 13:20:39.453916] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:24.892 [2024-12-16 13:20:39.453925] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:24.892 [2024-12-16 13:20:39.453932] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:24.892 [2024-12-16 13:20:39.453939] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:24.892 [2024-12-16 13:20:39.453946] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:24.892 [2024-12-16 13:20:39.453953] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:24.892 [2024-12-16 13:20:39.453961] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:24.892 [2024-12-16 13:20:39.453968] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:24.892 [2024-12-16 13:20:39.453974] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:24.892 [2024-12-16 13:20:39.453981] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:24.892 [2024-12-16 13:20:39.453988] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:24.892 [2024-12-16 13:20:39.453994] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:24.892 [2024-12-16 13:20:39.454001] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:24.892 [2024-12-16 13:20:39.454008] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:24.892 [2024-12-16 13:20:39.454016] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:24.892 [2024-12-16 13:20:39.454022] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:24.892 [2024-12-16 13:20:39.454035] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:24.892 [2024-12-16 13:20:39.454043] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:24.892 [2024-12-16 13:20:39.454050] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:24.892 [2024-12-16 13:20:39.454057] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:24.892 [2024-12-16 13:20:39.454064] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:24.892 [2024-12-16 13:20:39.454072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.892 [2024-12-16 13:20:39.454079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:24.892 [2024-12-16 13:20:39.454086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:16:24.892 [2024-12-16 13:20:39.454093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.154 [2024-12-16 13:20:39.470621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.154 [2024-12-16 13:20:39.470674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:25.154 [2024-12-16 13:20:39.470687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.485 ms 00:16:25.154 [2024-12-16 13:20:39.470695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.154 [2024-12-16 13:20:39.470810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.154 [2024-12-16 13:20:39.470820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:25.154 [2024-12-16 13:20:39.470829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:25.154 [2024-12-16 13:20:39.470838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.154 [2024-12-16 13:20:39.519449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.154 [2024-12-16 13:20:39.519490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:25.154 [2024-12-16 13:20:39.519502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.589 ms 00:16:25.154 [2024-12-16 13:20:39.519510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.154 [2024-12-16 13:20:39.519582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.154 [2024-12-16 13:20:39.519592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:25.154 [2024-12-16 13:20:39.519604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:25.154 [2024-12-16 13:20:39.519612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.154 [2024-12-16 13:20:39.520055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.154 [2024-12-16 13:20:39.520084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:25.154 [2024-12-16 13:20:39.520093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:16:25.154 [2024-12-16 13:20:39.520101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.154 [2024-12-16 13:20:39.520228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.154 [2024-12-16 13:20:39.520238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:25.154 [2024-12-16 13:20:39.520248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:16:25.154 [2024-12-16 13:20:39.520255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.154 [2024-12-16 13:20:39.536242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.154 [2024-12-16 13:20:39.536273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:25.154 [2024-12-16 13:20:39.536283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.964 ms 00:16:25.154 [2024-12-16 13:20:39.536293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.154 [2024-12-16 13:20:39.549880] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:25.154 [2024-12-16 13:20:39.549916] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:25.154 [2024-12-16 13:20:39.549926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.154 [2024-12-16 13:20:39.549935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:25.154 [2024-12-16 13:20:39.549943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.538 ms 00:16:25.154 [2024-12-16 13:20:39.549951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.154 [2024-12-16 13:20:39.574647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.154 [2024-12-16 13:20:39.574685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:25.154 [2024-12-16 13:20:39.574696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.627 ms 00:16:25.154 [2024-12-16 13:20:39.574703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.154 [2024-12-16 13:20:39.586635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.154 [2024-12-16 13:20:39.586667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:25.154 [2024-12-16 13:20:39.586684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.862 ms 00:16:25.154 [2024-12-16 13:20:39.586692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.154 [2024-12-16 13:20:39.598372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.154 [2024-12-16 13:20:39.598401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:25.154 [2024-12-16 13:20:39.598412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.618 ms 00:16:25.154 [2024-12-16 13:20:39.598419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.154 [2024-12-16 13:20:39.598806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.155 [2024-12-16 13:20:39.598821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:25.155 [2024-12-16 13:20:39.598831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:16:25.155 [2024-12-16 13:20:39.598842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.155 [2024-12-16 13:20:39.662089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.155 [2024-12-16 13:20:39.662131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:25.155 [2024-12-16 13:20:39.662143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.223 ms 00:16:25.155 [2024-12-16 13:20:39.662155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.155 [2024-12-16 13:20:39.672599] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:25.155 [2024-12-16 13:20:39.690294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.155 [2024-12-16 13:20:39.690333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:25.155 [2024-12-16 
13:20:39.690345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.063 ms 00:16:25.155 [2024-12-16 13:20:39.690353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.155 [2024-12-16 13:20:39.690425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.155 [2024-12-16 13:20:39.690435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:25.155 [2024-12-16 13:20:39.690448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:25.155 [2024-12-16 13:20:39.690456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.155 [2024-12-16 13:20:39.690511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.155 [2024-12-16 13:20:39.690520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:25.155 [2024-12-16 13:20:39.690528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:25.155 [2024-12-16 13:20:39.690534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.155 [2024-12-16 13:20:39.691854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.155 [2024-12-16 13:20:39.691886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:25.155 [2024-12-16 13:20:39.691896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.296 ms 00:16:25.155 [2024-12-16 13:20:39.691904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.155 [2024-12-16 13:20:39.691938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.155 [2024-12-16 13:20:39.691949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:25.155 [2024-12-16 13:20:39.691957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:25.155 [2024-12-16 13:20:39.691965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.155 [2024-12-16 13:20:39.692001] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:25.155 [2024-12-16 13:20:39.692010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.155 [2024-12-16 13:20:39.692018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:25.155 [2024-12-16 13:20:39.692026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:25.155 [2024-12-16 13:20:39.692034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.155 [2024-12-16 13:20:39.716394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.155 [2024-12-16 13:20:39.716436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:25.155 [2024-12-16 13:20:39.716447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.335 ms 00:16:25.155 [2024-12-16 13:20:39.716455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.155 [2024-12-16 13:20:39.716545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.155 [2024-12-16 13:20:39.716556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:25.155 [2024-12-16 13:20:39.716566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:25.155 [2024-12-16 13:20:39.716574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.155 [2024-12-16 13:20:39.717519] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:25.155 [2024-12-16 13:20:39.720783] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 293.483 ms, result 0 00:16:25.155 [2024-12-16 13:20:39.722042] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:25.416 [2024-12-16 13:20:39.735529] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:26.362  [2024-12-16T13:20:41.878Z] Copying: 14/256 [MB] (14 MBps) [2024-12-16T13:20:42.823Z] Copying: 29/256 [MB] (15 MBps) [2024-12-16T13:20:43.767Z] Copying: 47/256 [MB] (18 MBps) [2024-12-16T13:20:45.184Z] Copying: 67/256 [MB] (20 MBps) [2024-12-16T13:20:45.755Z] Copying: 80/256 [MB] (13 MBps) [2024-12-16T13:20:47.142Z] Copying: 96/256 [MB] (15 MBps) [2024-12-16T13:20:48.085Z] Copying: 116/256 [MB] (20 MBps) [2024-12-16T13:20:49.030Z] Copying: 132/256 [MB] (15 MBps) [2024-12-16T13:20:49.974Z] Copying: 142/256 [MB] (10 MBps) [2024-12-16T13:20:50.916Z] Copying: 158/256 [MB] (15 MBps) [2024-12-16T13:20:51.862Z] Copying: 180/256 [MB] (21 MBps) [2024-12-16T13:20:52.806Z] Copying: 196/256 [MB] (15 MBps) [2024-12-16T13:20:53.749Z] Copying: 217/256 [MB] (21 MBps) [2024-12-16T13:20:54.694Z] Copying: 240/256 [MB] (22 MBps) [2024-12-16T13:20:54.695Z] Copying: 256/256 [MB] (average 17 MBps)[2024-12-16 13:20:54.608742] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:40.121 [2024-12-16 13:20:54.619841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.121 [2024-12-16 13:20:54.619906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:40.121 [2024-12-16 13:20:54.619923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:40.121 [2024-12-16 13:20:54.619932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.121 [2024-12-16 13:20:54.619958] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:40.121 [2024-12-16 13:20:54.623349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.121 [2024-12-16 13:20:54.623392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:40.121 [2024-12-16 13:20:54.623403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.374 ms 00:16:40.121 [2024-12-16 13:20:54.623414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.121 [2024-12-16 13:20:54.623716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.121 [2024-12-16 13:20:54.623730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:40.121 [2024-12-16 13:20:54.623740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:16:40.121 [2024-12-16 13:20:54.623753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.121 [2024-12-16 13:20:54.627502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.121 [2024-12-16 13:20:54.627532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:40.121 [2024-12-16 13:20:54.627543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.717 ms 00:16:40.121 [2024-12-16 13:20:54.627551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
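As a sanity check on the spdk_dd copy above: 256 MB moved between roughly 13:20:39.7 and 13:20:54.6 on the console timestamps, i.e. about 14.9 seconds. A one-line arithmetic check (the 14.9 s span is read off the timestamps above):

  # 256 MB over ~14.9 s of wall time.
  awk 'BEGIN { printf "average: %.1f MBps\n", 256 / 14.9 }'

This prints average: 17.2 MBps, which agrees with the '(average 17 MBps)' summary in the copy progress output.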
00:16:40.121 [2024-12-16 13:20:54.634431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.121 [2024-12-16 13:20:54.634470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:40.121 [2024-12-16 13:20:54.634483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.860 ms 00:16:40.121 [2024-12-16 13:20:54.634492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.121 [2024-12-16 13:20:54.661085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.121 [2024-12-16 13:20:54.661136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:40.121 [2024-12-16 13:20:54.661149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.517 ms 00:16:40.121 [2024-12-16 13:20:54.661157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.121 [2024-12-16 13:20:54.678815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.121 [2024-12-16 13:20:54.678870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:40.121 [2024-12-16 13:20:54.678883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.591 ms 00:16:40.121 [2024-12-16 13:20:54.678891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.121 [2024-12-16 13:20:54.679072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.121 [2024-12-16 13:20:54.679086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:40.121 [2024-12-16 13:20:54.679097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:16:40.121 [2024-12-16 13:20:54.679105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.384 [2024-12-16 13:20:54.705829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.384 [2024-12-16 13:20:54.705874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:40.384 [2024-12-16 13:20:54.705887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.705 ms 00:16:40.384 [2024-12-16 13:20:54.705894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.384 [2024-12-16 13:20:54.731566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.384 [2024-12-16 13:20:54.731615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:40.384 [2024-12-16 13:20:54.731639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.593 ms 00:16:40.384 [2024-12-16 13:20:54.731647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.384 [2024-12-16 13:20:54.756644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.384 [2024-12-16 13:20:54.756693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:40.384 [2024-12-16 13:20:54.756706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.932 ms 00:16:40.384 [2024-12-16 13:20:54.756714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.384 [2024-12-16 13:20:54.781754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.384 [2024-12-16 13:20:54.781803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:40.384 [2024-12-16 13:20:54.781816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.930 ms 00:16:40.384 [2024-12-16 
13:20:54.781824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.384 [2024-12-16 13:20:54.781887] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:40.384 [2024-12-16 13:20:54.781906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.781917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.781927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.781935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.781944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.781952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.781960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.781967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.781976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.781985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.781992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.781999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782090] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 13:20:54.782279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:40.384 [2024-12-16 
13:20:54.782287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:16:40.385 [2024-12-16 13:20:54.782479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:40.385 [2024-12-16 13:20:54.782728] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:40.385 [2024-12-16 13:20:54.782737] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8943dfba-ecda-4808-8e9b-139434979057 00:16:40.385 [2024-12-16 13:20:54.782746] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:40.385 [2024-12-16 13:20:54.782756] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:40.385 [2024-12-16 13:20:54.782764] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:40.385 [2024-12-16 13:20:54.782772] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:40.385 [2024-12-16 13:20:54.782780] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:40.385 [2024-12-16 13:20:54.782792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:40.385 [2024-12-16 13:20:54.782799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:40.385 [2024-12-16 13:20:54.782806] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:40.385 [2024-12-16 13:20:54.782814] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:40.385 [2024-12-16 13:20:54.782822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.385 [2024-12-16 13:20:54.782830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:40.385 [2024-12-16 13:20:54.782838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:16:40.385 [2024-12-16 13:20:54.782846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.385 [2024-12-16 13:20:54.797488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.385 [2024-12-16 13:20:54.797533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:40.385 [2024-12-16 13:20:54.797552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.608 ms 00:16:40.385 [2024-12-16 13:20:54.797561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.385 [2024-12-16 13:20:54.797856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.385 [2024-12-16 13:20:54.797871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:40.385 [2024-12-16 13:20:54.797880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:16:40.385 [2024-12-16 13:20:54.797890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.385 [2024-12-16 13:20:54.842653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.385 [2024-12-16 13:20:54.842703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:40.385 [2024-12-16 13:20:54.842722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.385 [2024-12-16 13:20:54.842731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.385 [2024-12-16 13:20:54.842830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.385 [2024-12-16 13:20:54.842840] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:40.385 [2024-12-16 13:20:54.842849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.385 [2024-12-16 13:20:54.842862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.385 [2024-12-16 13:20:54.842925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.385 [2024-12-16 13:20:54.842936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:40.385 [2024-12-16 13:20:54.842945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.385 [2024-12-16 13:20:54.842958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.385 [2024-12-16 13:20:54.842977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.385 [2024-12-16 13:20:54.842986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:40.385 [2024-12-16 13:20:54.842994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.385 [2024-12-16 13:20:54.843002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.385 [2024-12-16 13:20:54.929006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.385 [2024-12-16 13:20:54.929064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:40.385 [2024-12-16 13:20:54.929083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.385 [2024-12-16 13:20:54.929091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.648 [2024-12-16 13:20:54.962887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.648 [2024-12-16 13:20:54.962944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:40.648 [2024-12-16 13:20:54.962956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.648 [2024-12-16 13:20:54.962965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.648 [2024-12-16 13:20:54.963031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.648 [2024-12-16 13:20:54.963042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:40.648 [2024-12-16 13:20:54.963052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.648 [2024-12-16 13:20:54.963061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.648 [2024-12-16 13:20:54.963103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.648 [2024-12-16 13:20:54.963112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:40.648 [2024-12-16 13:20:54.963120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.648 [2024-12-16 13:20:54.963131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.648 [2024-12-16 13:20:54.963248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.648 [2024-12-16 13:20:54.963260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:40.648 [2024-12-16 13:20:54.963270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.648 [2024-12-16 13:20:54.963278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.648 [2024-12-16 13:20:54.963317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:16:40.648 [2024-12-16 13:20:54.963328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:40.648 [2024-12-16 13:20:54.963337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.648 [2024-12-16 13:20:54.963347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.648 [2024-12-16 13:20:54.963397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.648 [2024-12-16 13:20:54.963407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:40.648 [2024-12-16 13:20:54.963415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.648 [2024-12-16 13:20:54.963423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.649 [2024-12-16 13:20:54.963492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.649 [2024-12-16 13:20:54.963510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:40.649 [2024-12-16 13:20:54.963518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.649 [2024-12-16 13:20:54.963526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.649 [2024-12-16 13:20:54.963750] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.904 ms, result 0 00:16:41.219 00:16:41.219 00:16:41.219 13:20:55 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:16:41.219 13:20:55 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:41.791 13:20:56 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:42.052 [2024-12-16 13:20:56.380314] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
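Every trace_step group above carries a name record followed by a duration record, so the slowest management steps can be ranked directly from a saved log. A minimal sketch, again assuming the console output lives in build.log:

  # Pair each step name with its duration and sort by the numeric value.
  grep 'trace_step' build.log \
    | grep -oE 'name: [A-Za-z0-9 ]+|duration: [0-9.]+ ms' \
    | paste - - \
    | sort -t: -k3 -rn \
    | head

On this run the top entries would be the P2L checkpoint restore (63.223 ms) and NV cache initialization (48.589 ms) from the startup phase.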
00:16:42.052 [2024-12-16 13:20:56.380476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72295 ] 00:16:42.052 [2024-12-16 13:20:56.534641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.313 [2024-12-16 13:20:56.703084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.575 [2024-12-16 13:20:56.929884] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:42.575 [2024-12-16 13:20:56.929941] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:42.575 [2024-12-16 13:20:57.075091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.575 [2024-12-16 13:20:57.075131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:42.575 [2024-12-16 13:20:57.075142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:42.575 [2024-12-16 13:20:57.075148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.575 [2024-12-16 13:20:57.077371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.575 [2024-12-16 13:20:57.077403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:42.575 [2024-12-16 13:20:57.077414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.212 ms 00:16:42.575 [2024-12-16 13:20:57.077420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.575 [2024-12-16 13:20:57.077476] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:42.575 [2024-12-16 13:20:57.078053] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:42.575 [2024-12-16 13:20:57.078070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.575 [2024-12-16 13:20:57.078077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:42.575 [2024-12-16 13:20:57.078084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:16:42.575 [2024-12-16 13:20:57.078091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.575 [2024-12-16 13:20:57.079386] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:42.575 [2024-12-16 13:20:57.089554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.575 [2024-12-16 13:20:57.089584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:42.575 [2024-12-16 13:20:57.089593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.170 ms 00:16:42.575 [2024-12-16 13:20:57.089600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.575 [2024-12-16 13:20:57.089677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.575 [2024-12-16 13:20:57.089686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:42.575 [2024-12-16 13:20:57.089693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:42.575 [2024-12-16 13:20:57.089699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.575 [2024-12-16 13:20:57.095933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.575 [2024-12-16 
13:20:57.095957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:42.575 [2024-12-16 13:20:57.095964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.202 ms 00:16:42.575 [2024-12-16 13:20:57.095974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.575 [2024-12-16 13:20:57.096053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.575 [2024-12-16 13:20:57.096060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:42.575 [2024-12-16 13:20:57.096067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:42.575 [2024-12-16 13:20:57.096073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.575 [2024-12-16 13:20:57.096090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.575 [2024-12-16 13:20:57.096096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:42.575 [2024-12-16 13:20:57.096103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:42.575 [2024-12-16 13:20:57.096109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.575 [2024-12-16 13:20:57.096134] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:42.575 [2024-12-16 13:20:57.099227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.575 [2024-12-16 13:20:57.099250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:42.575 [2024-12-16 13:20:57.099258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.104 ms 00:16:42.575 [2024-12-16 13:20:57.099266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.575 [2024-12-16 13:20:57.099298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.575 [2024-12-16 13:20:57.099305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:42.575 [2024-12-16 13:20:57.099310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:42.575 [2024-12-16 13:20:57.099316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.575 [2024-12-16 13:20:57.099330] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:42.575 [2024-12-16 13:20:57.099346] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:42.575 [2024-12-16 13:20:57.099373] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:42.575 [2024-12-16 13:20:57.099387] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:42.575 [2024-12-16 13:20:57.099445] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:42.575 [2024-12-16 13:20:57.099453] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:42.575 [2024-12-16 13:20:57.099461] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:42.575 [2024-12-16 13:20:57.099469] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:42.575 [2024-12-16 13:20:57.099476] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:42.575 [2024-12-16 13:20:57.099483] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:42.575 [2024-12-16 13:20:57.099489] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:42.575 [2024-12-16 13:20:57.099495] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:42.575 [2024-12-16 13:20:57.099504] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:42.575 [2024-12-16 13:20:57.099511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.575 [2024-12-16 13:20:57.099517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:42.575 [2024-12-16 13:20:57.099522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:16:42.575 [2024-12-16 13:20:57.099528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.575 [2024-12-16 13:20:57.099578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.575 [2024-12-16 13:20:57.099585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:42.575 [2024-12-16 13:20:57.099591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:42.575 [2024-12-16 13:20:57.099597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.575 [2024-12-16 13:20:57.099663] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:42.575 [2024-12-16 13:20:57.099683] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:42.575 [2024-12-16 13:20:57.099690] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:42.575 [2024-12-16 13:20:57.099697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.575 [2024-12-16 13:20:57.099703] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:42.575 [2024-12-16 13:20:57.099709] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:42.575 [2024-12-16 13:20:57.099715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:42.575 [2024-12-16 13:20:57.099720] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:42.575 [2024-12-16 13:20:57.099726] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:42.575 [2024-12-16 13:20:57.099732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:42.575 [2024-12-16 13:20:57.099738] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:42.576 [2024-12-16 13:20:57.099743] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:42.576 [2024-12-16 13:20:57.099749] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:42.576 [2024-12-16 13:20:57.099755] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:42.576 [2024-12-16 13:20:57.099765] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:42.576 [2024-12-16 13:20:57.099770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.576 [2024-12-16 13:20:57.099775] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:42.576 [2024-12-16 13:20:57.099781] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:42.576 [2024-12-16 13:20:57.099785] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:16:42.576 [2024-12-16 13:20:57.099790] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:42.576 [2024-12-16 13:20:57.099795] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:42.576 [2024-12-16 13:20:57.099800] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:42.576 [2024-12-16 13:20:57.099805] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:42.576 [2024-12-16 13:20:57.099811] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:42.576 [2024-12-16 13:20:57.099816] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:42.576 [2024-12-16 13:20:57.099821] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:42.576 [2024-12-16 13:20:57.099826] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:42.576 [2024-12-16 13:20:57.099831] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:42.576 [2024-12-16 13:20:57.099835] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:42.576 [2024-12-16 13:20:57.099840] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:42.576 [2024-12-16 13:20:57.099845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:42.576 [2024-12-16 13:20:57.099850] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:42.576 [2024-12-16 13:20:57.099855] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:42.576 [2024-12-16 13:20:57.099860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:42.576 [2024-12-16 13:20:57.099866] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:42.576 [2024-12-16 13:20:57.099871] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:42.576 [2024-12-16 13:20:57.099875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:42.576 [2024-12-16 13:20:57.099880] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:42.576 [2024-12-16 13:20:57.099885] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:42.576 [2024-12-16 13:20:57.099889] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:42.576 [2024-12-16 13:20:57.099894] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:42.576 [2024-12-16 13:20:57.099903] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:42.576 [2024-12-16 13:20:57.099909] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:42.576 [2024-12-16 13:20:57.099917] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.576 [2024-12-16 13:20:57.099923] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:42.576 [2024-12-16 13:20:57.099928] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:42.576 [2024-12-16 13:20:57.099933] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:42.576 [2024-12-16 13:20:57.099939] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:42.576 [2024-12-16 13:20:57.099944] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:42.576 [2024-12-16 13:20:57.099950] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:42.576 [2024-12-16 13:20:57.099956] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:42.576 [2024-12-16 13:20:57.099963] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:42.576 [2024-12-16 13:20:57.099970] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:42.576 [2024-12-16 13:20:57.099976] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:42.576 [2024-12-16 13:20:57.099981] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:42.576 [2024-12-16 13:20:57.099987] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:42.576 [2024-12-16 13:20:57.099992] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:42.576 [2024-12-16 13:20:57.099998] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:42.576 [2024-12-16 13:20:57.100003] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:42.576 [2024-12-16 13:20:57.100009] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:42.576 [2024-12-16 13:20:57.100014] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:42.576 [2024-12-16 13:20:57.100020] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:42.576 [2024-12-16 13:20:57.100025] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:42.576 [2024-12-16 13:20:57.100030] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:42.576 [2024-12-16 13:20:57.100036] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:42.576 [2024-12-16 13:20:57.100042] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:42.576 [2024-12-16 13:20:57.100052] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:42.576 [2024-12-16 13:20:57.100059] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:42.576 [2024-12-16 13:20:57.100064] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:42.576 [2024-12-16 13:20:57.100070] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:42.576 [2024-12-16 13:20:57.100075] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:42.576 [2024-12-16 13:20:57.100081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.576 [2024-12-16 13:20:57.100086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:42.576 [2024-12-16 13:20:57.100094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:16:42.576 [2024-12-16 13:20:57.100101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.576 [2024-12-16 13:20:57.113984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.576 [2024-12-16 13:20:57.114013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:42.576 [2024-12-16 13:20:57.114021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.832 ms 00:16:42.576 [2024-12-16 13:20:57.114028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.576 [2024-12-16 13:20:57.114118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.576 [2024-12-16 13:20:57.114127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:42.576 [2024-12-16 13:20:57.114134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:42.576 [2024-12-16 13:20:57.114140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.838 [2024-12-16 13:20:57.158475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.838 [2024-12-16 13:20:57.158510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:42.838 [2024-12-16 13:20:57.158520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.317 ms 00:16:42.838 [2024-12-16 13:20:57.158527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.838 [2024-12-16 13:20:57.158585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.838 [2024-12-16 13:20:57.158594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:42.838 [2024-12-16 13:20:57.158604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:42.838 [2024-12-16 13:20:57.158611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.838 [2024-12-16 13:20:57.159005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.838 [2024-12-16 13:20:57.159026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:42.838 [2024-12-16 13:20:57.159033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:16:42.838 [2024-12-16 13:20:57.159040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.838 [2024-12-16 13:20:57.159143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.838 [2024-12-16 13:20:57.159151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:42.838 [2024-12-16 13:20:57.159158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:16:42.838 [2024-12-16 13:20:57.159165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.838 [2024-12-16 13:20:57.172136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.838 [2024-12-16 13:20:57.172162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:42.838 [2024-12-16 13:20:57.172169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.953 ms 00:16:42.838 
[2024-12-16 13:20:57.172177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.838 [2024-12-16 13:20:57.182662] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:42.838 [2024-12-16 13:20:57.182691] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:42.838 [2024-12-16 13:20:57.182701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.838 [2024-12-16 13:20:57.182707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:42.838 [2024-12-16 13:20:57.182715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.446 ms 00:16:42.838 [2024-12-16 13:20:57.182721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.838 [2024-12-16 13:20:57.201491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.838 [2024-12-16 13:20:57.201523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:42.838 [2024-12-16 13:20:57.201532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.716 ms 00:16:42.838 [2024-12-16 13:20:57.201538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.838 [2024-12-16 13:20:57.210405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.838 [2024-12-16 13:20:57.210433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:42.838 [2024-12-16 13:20:57.210447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.815 ms 00:16:42.838 [2024-12-16 13:20:57.210453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.838 [2024-12-16 13:20:57.219057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.838 [2024-12-16 13:20:57.219083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:42.839 [2024-12-16 13:20:57.219091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.564 ms 00:16:42.839 [2024-12-16 13:20:57.219097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.839 [2024-12-16 13:20:57.219372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.839 [2024-12-16 13:20:57.219387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:42.839 [2024-12-16 13:20:57.219395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:16:42.839 [2024-12-16 13:20:57.219402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.839 [2024-12-16 13:20:57.267843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.839 [2024-12-16 13:20:57.267875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:42.839 [2024-12-16 13:20:57.267884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.423 ms 00:16:42.839 [2024-12-16 13:20:57.267894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.839 [2024-12-16 13:20:57.276017] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:42.839 [2024-12-16 13:20:57.290367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.839 [2024-12-16 13:20:57.290397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:42.839 [2024-12-16 13:20:57.290407] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.412 ms 00:16:42.839 [2024-12-16 13:20:57.290414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.839 [2024-12-16 13:20:57.290471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.839 [2024-12-16 13:20:57.290479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:42.839 [2024-12-16 13:20:57.290489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:42.839 [2024-12-16 13:20:57.290496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.839 [2024-12-16 13:20:57.290540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.839 [2024-12-16 13:20:57.290547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:42.839 [2024-12-16 13:20:57.290554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:42.839 [2024-12-16 13:20:57.290560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.839 [2024-12-16 13:20:57.291592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.839 [2024-12-16 13:20:57.291620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:42.839 [2024-12-16 13:20:57.291641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.012 ms 00:16:42.839 [2024-12-16 13:20:57.291647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.839 [2024-12-16 13:20:57.291674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.839 [2024-12-16 13:20:57.291684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:42.839 [2024-12-16 13:20:57.291690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:42.839 [2024-12-16 13:20:57.291696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.839 [2024-12-16 13:20:57.291726] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:42.839 [2024-12-16 13:20:57.291734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.839 [2024-12-16 13:20:57.291740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:42.839 [2024-12-16 13:20:57.291746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:42.839 [2024-12-16 13:20:57.291752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.839 [2024-12-16 13:20:57.310453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.839 [2024-12-16 13:20:57.310481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:42.839 [2024-12-16 13:20:57.310490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.682 ms 00:16:42.839 [2024-12-16 13:20:57.310496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.839 [2024-12-16 13:20:57.310566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.839 [2024-12-16 13:20:57.310575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:42.839 [2024-12-16 13:20:57.310582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:42.839 [2024-12-16 13:20:57.310587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.839 [2024-12-16 13:20:57.311617] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:42.839 [2024-12-16 13:20:57.314088] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 236.263 ms, result 0 00:16:42.839 [2024-12-16 13:20:57.314924] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:42.839 [2024-12-16 13:20:57.326026] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:43.102  [2024-12-16T13:20:57.676Z] Copying: 4096/4096 [kB] (average 33 MBps)[2024-12-16 13:20:57.448257] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:43.102 [2024-12-16 13:20:57.454883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.102 [2024-12-16 13:20:57.454915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:43.102 [2024-12-16 13:20:57.454924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:43.102 [2024-12-16 13:20:57.454931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.102 [2024-12-16 13:20:57.454948] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:43.102 [2024-12-16 13:20:57.457143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.102 [2024-12-16 13:20:57.457167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:43.102 [2024-12-16 13:20:57.457175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.186 ms 00:16:43.102 [2024-12-16 13:20:57.457183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.102 [2024-12-16 13:20:57.458825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.102 [2024-12-16 13:20:57.458851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:43.102 [2024-12-16 13:20:57.458858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.625 ms 00:16:43.102 [2024-12-16 13:20:57.458868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.102 [2024-12-16 13:20:57.461987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.102 [2024-12-16 13:20:57.462010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:43.102 [2024-12-16 13:20:57.462018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.097 ms 00:16:43.102 [2024-12-16 13:20:57.462024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.102 [2024-12-16 13:20:57.467232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.102 [2024-12-16 13:20:57.467257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:43.102 [2024-12-16 13:20:57.467264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.188 ms 00:16:43.102 [2024-12-16 13:20:57.467274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.102 [2024-12-16 13:20:57.484489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.102 [2024-12-16 13:20:57.484515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:43.102 [2024-12-16 13:20:57.484523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.170 ms 00:16:43.102 [2024-12-16 
13:20:57.484528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.102 [2024-12-16 13:20:57.496307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.102 [2024-12-16 13:20:57.496333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:43.102 [2024-12-16 13:20:57.496342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.737 ms 00:16:43.102 [2024-12-16 13:20:57.496349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.102 [2024-12-16 13:20:57.496451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.102 [2024-12-16 13:20:57.496459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:43.102 [2024-12-16 13:20:57.496465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:16:43.102 [2024-12-16 13:20:57.496471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.102 [2024-12-16 13:20:57.514409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.102 [2024-12-16 13:20:57.514435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:43.102 [2024-12-16 13:20:57.514443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.926 ms 00:16:43.102 [2024-12-16 13:20:57.514448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.102 [2024-12-16 13:20:57.532379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.102 [2024-12-16 13:20:57.532405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:43.102 [2024-12-16 13:20:57.532412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.888 ms 00:16:43.102 [2024-12-16 13:20:57.532417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.102 [2024-12-16 13:20:57.549458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.102 [2024-12-16 13:20:57.549484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:43.102 [2024-12-16 13:20:57.549491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.006 ms 00:16:43.102 [2024-12-16 13:20:57.549496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.102 [2024-12-16 13:20:57.567533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.102 [2024-12-16 13:20:57.567559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:43.102 [2024-12-16 13:20:57.567567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.983 ms 00:16:43.102 [2024-12-16 13:20:57.567573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.102 [2024-12-16 13:20:57.567607] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:43.102 [2024-12-16 13:20:57.567619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567652] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 
13:20:57.567794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:16:43.102 [2024-12-16 13:20:57.567931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:43.102 [2024-12-16 13:20:57.567994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:43.103 [2024-12-16 13:20:57.568214] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:43.103 [2024-12-16 13:20:57.568220] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8943dfba-ecda-4808-8e9b-139434979057 00:16:43.103 [2024-12-16 13:20:57.568226] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:43.103 [2024-12-16 13:20:57.568232] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:43.103 [2024-12-16 
13:20:57.568239] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:43.103 [2024-12-16 13:20:57.568245] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:43.103 [2024-12-16 13:20:57.568252] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:43.103 [2024-12-16 13:20:57.568258] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:43.103 [2024-12-16 13:20:57.568264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:43.103 [2024-12-16 13:20:57.568269] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:43.103 [2024-12-16 13:20:57.568274] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:43.103 [2024-12-16 13:20:57.568279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.103 [2024-12-16 13:20:57.568285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:43.103 [2024-12-16 13:20:57.568292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:16:43.103 [2024-12-16 13:20:57.568298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.103 [2024-12-16 13:20:57.577972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.103 [2024-12-16 13:20:57.577997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:43.103 [2024-12-16 13:20:57.578010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.660 ms 00:16:43.103 [2024-12-16 13:20:57.578015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.103 [2024-12-16 13:20:57.578187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.103 [2024-12-16 13:20:57.578200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:43.103 [2024-12-16 13:20:57.578206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:16:43.103 [2024-12-16 13:20:57.578211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.103 [2024-12-16 13:20:57.609651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.103 [2024-12-16 13:20:57.609679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:43.103 [2024-12-16 13:20:57.609691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.103 [2024-12-16 13:20:57.609697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.103 [2024-12-16 13:20:57.609760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.103 [2024-12-16 13:20:57.609768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:43.103 [2024-12-16 13:20:57.609775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.103 [2024-12-16 13:20:57.609781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.103 [2024-12-16 13:20:57.609815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.103 [2024-12-16 13:20:57.609823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:43.103 [2024-12-16 13:20:57.609829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.103 [2024-12-16 13:20:57.609839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.103 [2024-12-16 13:20:57.609854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:16:43.103 [2024-12-16 13:20:57.609859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:43.103 [2024-12-16 13:20:57.609866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.103 [2024-12-16 13:20:57.609871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.103 [2024-12-16 13:20:57.671170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.103 [2024-12-16 13:20:57.671205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:43.103 [2024-12-16 13:20:57.671217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.103 [2024-12-16 13:20:57.671223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.364 [2024-12-16 13:20:57.694834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.364 [2024-12-16 13:20:57.694864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:43.364 [2024-12-16 13:20:57.694872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.364 [2024-12-16 13:20:57.694879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.364 [2024-12-16 13:20:57.694924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.364 [2024-12-16 13:20:57.694932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:43.364 [2024-12-16 13:20:57.694939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.364 [2024-12-16 13:20:57.694946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.364 [2024-12-16 13:20:57.694975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.364 [2024-12-16 13:20:57.694982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:43.364 [2024-12-16 13:20:57.694988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.364 [2024-12-16 13:20:57.694994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.364 [2024-12-16 13:20:57.695072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.364 [2024-12-16 13:20:57.695082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:43.364 [2024-12-16 13:20:57.695088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.364 [2024-12-16 13:20:57.695094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.364 [2024-12-16 13:20:57.695121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.364 [2024-12-16 13:20:57.695128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:43.364 [2024-12-16 13:20:57.695135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.364 [2024-12-16 13:20:57.695141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.364 [2024-12-16 13:20:57.695175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.364 [2024-12-16 13:20:57.695183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:43.364 [2024-12-16 13:20:57.695189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.364 [2024-12-16 13:20:57.695195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.364 
[2024-12-16 13:20:57.695237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.364 [2024-12-16 13:20:57.695253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:43.364 [2024-12-16 13:20:57.695259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.364 [2024-12-16 13:20:57.695265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.364 [2024-12-16 13:20:57.695391] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 240.498 ms, result 0 00:16:43.935 00:16:43.935 00:16:43.935 13:20:58 -- ftl/trim.sh@93 -- # svcpid=72320 00:16:43.935 13:20:58 -- ftl/trim.sh@94 -- # waitforlisten 72320 00:16:43.935 13:20:58 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:43.935 13:20:58 -- common/autotest_common.sh@829 -- # '[' -z 72320 ']' 00:16:43.935 13:20:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.935 13:20:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:43.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.935 13:20:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.935 13:20:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:43.935 13:20:58 -- common/autotest_common.sh@10 -- # set +x 00:16:43.935 [2024-12-16 13:20:58.469029] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:43.935 [2024-12-16 13:20:58.469520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72320 ] 00:16:44.196 [2024-12-16 13:20:58.618576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.456 [2024-12-16 13:20:58.809255] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:44.456 [2024-12-16 13:20:58.809441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.401 13:20:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.401 13:20:59 -- common/autotest_common.sh@862 -- # return 0 00:16:45.401 13:20:59 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:45.661 [2024-12-16 13:21:00.163589] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:45.661 [2024-12-16 13:21:00.163652] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:45.925 [2024-12-16 13:21:00.331669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.925 [2024-12-16 13:21:00.331719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:45.925 [2024-12-16 13:21:00.331735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:45.925 [2024-12-16 13:21:00.331743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.925 [2024-12-16 13:21:00.334413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.925 [2024-12-16 13:21:00.334461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:45.925 [2024-12-16 13:21:00.334474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
2.650 ms 00:16:45.925 [2024-12-16 13:21:00.334481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.925 [2024-12-16 13:21:00.334557] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:45.925 [2024-12-16 13:21:00.335301] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:45.925 [2024-12-16 13:21:00.335334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.925 [2024-12-16 13:21:00.335342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:45.925 [2024-12-16 13:21:00.335353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:16:45.925 [2024-12-16 13:21:00.335360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.925 [2024-12-16 13:21:00.336702] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:45.925 [2024-12-16 13:21:00.349918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.925 [2024-12-16 13:21:00.349958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:45.925 [2024-12-16 13:21:00.349969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.222 ms 00:16:45.925 [2024-12-16 13:21:00.349978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.925 [2024-12-16 13:21:00.350058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.925 [2024-12-16 13:21:00.350070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:45.925 [2024-12-16 13:21:00.350079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:45.925 [2024-12-16 13:21:00.350090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.925 [2024-12-16 13:21:00.355571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.925 [2024-12-16 13:21:00.355607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:45.925 [2024-12-16 13:21:00.355616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.426 ms 00:16:45.925 [2024-12-16 13:21:00.355637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.925 [2024-12-16 13:21:00.355720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.925 [2024-12-16 13:21:00.355732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:45.925 [2024-12-16 13:21:00.355741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:45.925 [2024-12-16 13:21:00.355749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.926 [2024-12-16 13:21:00.355777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.926 [2024-12-16 13:21:00.355786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:45.926 [2024-12-16 13:21:00.355794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:45.926 [2024-12-16 13:21:00.355804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.926 [2024-12-16 13:21:00.355832] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:45.926 [2024-12-16 13:21:00.359407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.926 [2024-12-16 13:21:00.359435] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:45.926 [2024-12-16 13:21:00.359446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.583 ms 00:16:45.926 [2024-12-16 13:21:00.359453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.926 [2024-12-16 13:21:00.359494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.926 [2024-12-16 13:21:00.359502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:45.926 [2024-12-16 13:21:00.359512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:45.926 [2024-12-16 13:21:00.359522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.926 [2024-12-16 13:21:00.359544] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:45.926 [2024-12-16 13:21:00.359562] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:45.926 [2024-12-16 13:21:00.359597] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:45.926 [2024-12-16 13:21:00.359611] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:45.926 [2024-12-16 13:21:00.359698] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:45.926 [2024-12-16 13:21:00.359710] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:45.926 [2024-12-16 13:21:00.359725] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:45.926 [2024-12-16 13:21:00.359734] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:45.926 [2024-12-16 13:21:00.359744] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:45.926 [2024-12-16 13:21:00.359752] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:45.926 [2024-12-16 13:21:00.359760] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:45.926 [2024-12-16 13:21:00.359768] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:45.926 [2024-12-16 13:21:00.359779] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:45.926 [2024-12-16 13:21:00.359786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.926 [2024-12-16 13:21:00.359794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:45.926 [2024-12-16 13:21:00.359801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:16:45.926 [2024-12-16 13:21:00.359810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.926 [2024-12-16 13:21:00.359888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.926 [2024-12-16 13:21:00.359899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:45.926 [2024-12-16 13:21:00.359907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:45.926 [2024-12-16 13:21:00.359917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.926 [2024-12-16 13:21:00.359991] ftl_layout.c: 759:ftl_layout_dump: 
*NOTICE*: [FTL][ftl0] NV cache layout: 00:16:45.926 [2024-12-16 13:21:00.360009] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:45.926 [2024-12-16 13:21:00.360017] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:45.926 [2024-12-16 13:21:00.360026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.926 [2024-12-16 13:21:00.360033] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:45.926 [2024-12-16 13:21:00.360042] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:45.926 [2024-12-16 13:21:00.360049] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:45.926 [2024-12-16 13:21:00.360061] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:45.926 [2024-12-16 13:21:00.360068] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:45.926 [2024-12-16 13:21:00.360076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:45.926 [2024-12-16 13:21:00.360082] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:45.926 [2024-12-16 13:21:00.360091] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:45.926 [2024-12-16 13:21:00.360097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:45.926 [2024-12-16 13:21:00.360106] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:45.926 [2024-12-16 13:21:00.360113] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:45.926 [2024-12-16 13:21:00.360121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.926 [2024-12-16 13:21:00.360127] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:45.926 [2024-12-16 13:21:00.360136] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:45.926 [2024-12-16 13:21:00.360142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.926 [2024-12-16 13:21:00.360150] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:45.926 [2024-12-16 13:21:00.360156] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:45.926 [2024-12-16 13:21:00.360164] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:45.926 [2024-12-16 13:21:00.360170] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:45.926 [2024-12-16 13:21:00.360180] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:45.926 [2024-12-16 13:21:00.360187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:45.926 [2024-12-16 13:21:00.360200] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:45.926 [2024-12-16 13:21:00.360206] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:45.926 [2024-12-16 13:21:00.360214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:45.926 [2024-12-16 13:21:00.360220] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:45.926 [2024-12-16 13:21:00.360228] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:45.926 [2024-12-16 13:21:00.360234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:45.926 [2024-12-16 13:21:00.360244] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:45.926 [2024-12-16 13:21:00.360251] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:45.926 [2024-12-16 13:21:00.360259] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:45.926 [2024-12-16 13:21:00.360265] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:45.926 [2024-12-16 13:21:00.360273] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:45.926 [2024-12-16 13:21:00.360279] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:45.926 [2024-12-16 13:21:00.360287] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:45.926 [2024-12-16 13:21:00.360294] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:45.926 [2024-12-16 13:21:00.360304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:45.926 [2024-12-16 13:21:00.360310] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:45.926 [2024-12-16 13:21:00.360321] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:45.926 [2024-12-16 13:21:00.360329] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:45.926 [2024-12-16 13:21:00.360338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.926 [2024-12-16 13:21:00.360345] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:45.926 [2024-12-16 13:21:00.360353] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:45.926 [2024-12-16 13:21:00.360360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:45.926 [2024-12-16 13:21:00.360368] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:45.926 [2024-12-16 13:21:00.360374] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:45.926 [2024-12-16 13:21:00.360382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:45.926 [2024-12-16 13:21:00.360390] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:45.926 [2024-12-16 13:21:00.360400] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:45.926 [2024-12-16 13:21:00.360408] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:45.926 [2024-12-16 13:21:00.360417] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:45.926 [2024-12-16 13:21:00.360423] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:45.926 [2024-12-16 13:21:00.360434] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:45.926 [2024-12-16 13:21:00.360441] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:45.926 [2024-12-16 13:21:00.360449] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:45.926 [2024-12-16 13:21:00.360456] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:45.926 [2024-12-16 
13:21:00.360464] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:45.926 [2024-12-16 13:21:00.360471] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:45.926 [2024-12-16 13:21:00.360479] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:45.926 [2024-12-16 13:21:00.360486] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:45.926 [2024-12-16 13:21:00.360494] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:45.926 [2024-12-16 13:21:00.360501] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:45.926 [2024-12-16 13:21:00.360509] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:45.926 [2024-12-16 13:21:00.360516] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:45.927 [2024-12-16 13:21:00.360526] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:45.927 [2024-12-16 13:21:00.360534] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:45.927 [2024-12-16 13:21:00.360543] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:45.927 [2024-12-16 13:21:00.360550] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:45.927 [2024-12-16 13:21:00.360571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.927 [2024-12-16 13:21:00.360579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:45.927 [2024-12-16 13:21:00.360587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:16:45.927 [2024-12-16 13:21:00.360594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.927 [2024-12-16 13:21:00.376252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.927 [2024-12-16 13:21:00.376286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:45.927 [2024-12-16 13:21:00.376300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.573 ms 00:16:45.927 [2024-12-16 13:21:00.376310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.927 [2024-12-16 13:21:00.376430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.927 [2024-12-16 13:21:00.376439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:45.927 [2024-12-16 13:21:00.376449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:16:45.927 [2024-12-16 13:21:00.376455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.927 [2024-12-16 13:21:00.408749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.927 [2024-12-16 
13:21:00.408786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:45.927 [2024-12-16 13:21:00.408799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.271 ms 00:16:45.927 [2024-12-16 13:21:00.408807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.927 [2024-12-16 13:21:00.408882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.927 [2024-12-16 13:21:00.408897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:45.927 [2024-12-16 13:21:00.408907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:45.927 [2024-12-16 13:21:00.408914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.927 [2024-12-16 13:21:00.409310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.927 [2024-12-16 13:21:00.409350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:45.927 [2024-12-16 13:21:00.409364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:16:45.927 [2024-12-16 13:21:00.409371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.927 [2024-12-16 13:21:00.409495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.927 [2024-12-16 13:21:00.409504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:45.927 [2024-12-16 13:21:00.409516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:16:45.927 [2024-12-16 13:21:00.409524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.927 [2024-12-16 13:21:00.426404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.927 [2024-12-16 13:21:00.426446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:45.927 [2024-12-16 13:21:00.426461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.857 ms 00:16:45.927 [2024-12-16 13:21:00.426469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.927 [2024-12-16 13:21:00.440522] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:45.927 [2024-12-16 13:21:00.440603] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:45.927 [2024-12-16 13:21:00.440620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.927 [2024-12-16 13:21:00.440639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:45.927 [2024-12-16 13:21:00.440651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.035 ms 00:16:45.927 [2024-12-16 13:21:00.440658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.927 [2024-12-16 13:21:00.466854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.927 [2024-12-16 13:21:00.466903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:45.927 [2024-12-16 13:21:00.466918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.102 ms 00:16:45.927 [2024-12-16 13:21:00.466926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.927 [2024-12-16 13:21:00.480325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.927 [2024-12-16 13:21:00.480379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore band info metadata 00:16:45.927 [2024-12-16 13:21:00.480394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.323 ms 00:16:45.927 [2024-12-16 13:21:00.480401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.927 [2024-12-16 13:21:00.493200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.927 [2024-12-16 13:21:00.493244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:45.927 [2024-12-16 13:21:00.493263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.704 ms 00:16:45.927 [2024-12-16 13:21:00.493270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.927 [2024-12-16 13:21:00.493687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.927 [2024-12-16 13:21:00.493701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:45.927 [2024-12-16 13:21:00.493716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:16:45.927 [2024-12-16 13:21:00.493724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.188 [2024-12-16 13:21:00.571744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.188 [2024-12-16 13:21:00.571825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:46.188 [2024-12-16 13:21:00.571853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.984 ms 00:16:46.188 [2024-12-16 13:21:00.571863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.188 [2024-12-16 13:21:00.584099] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:46.188 [2024-12-16 13:21:00.610094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.188 [2024-12-16 13:21:00.610158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:46.188 [2024-12-16 13:21:00.610174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.068 ms 00:16:46.188 [2024-12-16 13:21:00.610185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.188 [2024-12-16 13:21:00.610299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.188 [2024-12-16 13:21:00.610317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:46.188 [2024-12-16 13:21:00.610327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:46.188 [2024-12-16 13:21:00.610343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.188 [2024-12-16 13:21:00.610414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.188 [2024-12-16 13:21:00.610429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:46.188 [2024-12-16 13:21:00.610438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:46.188 [2024-12-16 13:21:00.610449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.188 [2024-12-16 13:21:00.612033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.188 [2024-12-16 13:21:00.612083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:46.188 [2024-12-16 13:21:00.612094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.556 ms 00:16:46.188 [2024-12-16 13:21:00.612104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:46.188 [2024-12-16 13:21:00.612156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.188 [2024-12-16 13:21:00.612171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:46.188 [2024-12-16 13:21:00.612180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:46.188 [2024-12-16 13:21:00.612191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.188 [2024-12-16 13:21:00.612240] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:46.188 [2024-12-16 13:21:00.612258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.188 [2024-12-16 13:21:00.612266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:46.188 [2024-12-16 13:21:00.612277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:46.188 [2024-12-16 13:21:00.612286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.188 [2024-12-16 13:21:00.640022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.188 [2024-12-16 13:21:00.640080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:46.188 [2024-12-16 13:21:00.640098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.702 ms 00:16:46.188 [2024-12-16 13:21:00.640107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.188 [2024-12-16 13:21:00.640234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.188 [2024-12-16 13:21:00.640248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:46.188 [2024-12-16 13:21:00.640260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:16:46.188 [2024-12-16 13:21:00.640271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.188 [2024-12-16 13:21:00.641774] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:46.189 [2024-12-16 13:21:00.645561] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 309.677 ms, result 0 00:16:46.189 [2024-12-16 13:21:00.647918] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:46.189 Some configs were skipped because the RPC state that can call them passed over. 
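At this point the 'FTL startup' management process above has finished with result 0, and rpc.py load_config has replayed the configuration saved earlier; the note about skipped configs is expected, since some saved RPCs only apply in startup states that have already passed. For reference, a minimal sketch of the sequence trim.sh drives to reach this point — the commands are the ones echoed in the xtrace above, but treat the snippet as illustrative rather than a copy of the script (for instance, how load_config receives the JSON is not visible in the xtrace):

    # Start the SPDK target with FTL init tracing, then replay the saved config.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!                               # this run recorded svcpid=72320
    waitforlisten "$svcpid"                 # autotest helper: wait for /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config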
00:16:46.189 13:21:00 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:46.450 [2024-12-16 13:21:00.896990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.450 [2024-12-16 13:21:00.897059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:46.450 [2024-12-16 13:21:00.897073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.424 ms 00:16:46.450 [2024-12-16 13:21:00.897084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.450 [2024-12-16 13:21:00.897128] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 27.563 ms, result 0 00:16:46.450 true 00:16:46.450 13:21:00 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:46.710 [2024-12-16 13:21:01.128517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.710 [2024-12-16 13:21:01.128593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:46.710 [2024-12-16 13:21:01.128609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.145 ms 00:16:46.710 [2024-12-16 13:21:01.128618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.710 [2024-12-16 13:21:01.128680] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 27.308 ms, result 0 00:16:46.710 true 00:16:46.710 13:21:01 -- ftl/trim.sh@102 -- # killprocess 72320 00:16:46.710 13:21:01 -- common/autotest_common.sh@936 -- # '[' -z 72320 ']' 00:16:46.710 13:21:01 -- common/autotest_common.sh@940 -- # kill -0 72320 00:16:46.710 13:21:01 -- common/autotest_common.sh@941 -- # uname 00:16:46.710 13:21:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:46.710 13:21:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72320 00:16:46.710 13:21:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:46.710 killing process with pid 72320 00:16:46.710 13:21:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:46.710 13:21:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72320' 00:16:46.710 13:21:01 -- common/autotest_common.sh@955 -- # kill 72320 00:16:46.710 13:21:01 -- common/autotest_common.sh@960 -- # wait 72320 00:16:47.282 [2024-12-16 13:21:01.780943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.282 [2024-12-16 13:21:01.780994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:47.282 [2024-12-16 13:21:01.781005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:47.282 [2024-12-16 13:21:01.781015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.282 [2024-12-16 13:21:01.781033] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:47.282 [2024-12-16 13:21:01.783023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.282 [2024-12-16 13:21:01.783050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:47.282 [2024-12-16 13:21:01.783063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.976 ms 00:16:47.282 [2024-12-16 13:21:01.783069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.282 [2024-12-16 
13:21:01.783295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.282 [2024-12-16 13:21:01.783309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:47.282 [2024-12-16 13:21:01.783319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:16:47.282 [2024-12-16 13:21:01.783325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.282 [2024-12-16 13:21:01.786512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.282 [2024-12-16 13:21:01.786538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:47.282 [2024-12-16 13:21:01.786548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.171 ms 00:16:47.282 [2024-12-16 13:21:01.786555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.282 [2024-12-16 13:21:01.791846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.282 [2024-12-16 13:21:01.791890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:47.282 [2024-12-16 13:21:01.791899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.263 ms 00:16:47.282 [2024-12-16 13:21:01.791908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.282 [2024-12-16 13:21:01.799596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.282 [2024-12-16 13:21:01.799622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:47.282 [2024-12-16 13:21:01.799642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.644 ms 00:16:47.282 [2024-12-16 13:21:01.799648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.282 [2024-12-16 13:21:01.806689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.282 [2024-12-16 13:21:01.806719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:47.282 [2024-12-16 13:21:01.806728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.009 ms 00:16:47.282 [2024-12-16 13:21:01.806735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.283 [2024-12-16 13:21:01.806833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.283 [2024-12-16 13:21:01.806840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:47.283 [2024-12-16 13:21:01.806848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:16:47.283 [2024-12-16 13:21:01.806855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.283 [2024-12-16 13:21:01.814915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.283 [2024-12-16 13:21:01.814941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:47.283 [2024-12-16 13:21:01.814950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.043 ms 00:16:47.283 [2024-12-16 13:21:01.814956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.283 [2024-12-16 13:21:01.822539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.283 [2024-12-16 13:21:01.822566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:47.283 [2024-12-16 13:21:01.822578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.551 ms 00:16:47.283 [2024-12-16 13:21:01.822583] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:47.283 [2024-12-16 13:21:01.829905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.283 [2024-12-16 13:21:01.829931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:47.283 [2024-12-16 13:21:01.829940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.289 ms 00:16:47.283 [2024-12-16 13:21:01.829945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.283 [2024-12-16 13:21:01.837043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.283 [2024-12-16 13:21:01.837069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:47.283 [2024-12-16 13:21:01.837077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.046 ms 00:16:47.283 [2024-12-16 13:21:01.837082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.283 [2024-12-16 13:21:01.837118] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:47.283 [2024-12-16 13:21:01.837131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837250] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837416] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 
13:21:01.837580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:47.283 [2024-12-16 13:21:01.837611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 
00:16:47.284 [2024-12-16 13:21:01.837749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:47.284 [2024-12-16 13:21:01.837814] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:47.284 [2024-12-16 13:21:01.837823] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8943dfba-ecda-4808-8e9b-139434979057 00:16:47.284 [2024-12-16 13:21:01.837829] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:47.284 [2024-12-16 13:21:01.837836] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:47.284 [2024-12-16 13:21:01.837843] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:47.284 [2024-12-16 13:21:01.837850] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:47.284 [2024-12-16 13:21:01.837855] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:47.284 [2024-12-16 13:21:01.837863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:47.284 [2024-12-16 13:21:01.837869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:47.284 [2024-12-16 13:21:01.837876] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:47.284 [2024-12-16 13:21:01.837882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:47.284 [2024-12-16 13:21:01.837890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.284 [2024-12-16 13:21:01.837896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:47.284 [2024-12-16 13:21:01.837903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:16:47.284 [2024-12-16 13:21:01.837910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.284 [2024-12-16 13:21:01.848082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.284 [2024-12-16 13:21:01.848109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:47.284 [2024-12-16 13:21:01.848120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.154 ms 00:16:47.284 [2024-12-16 13:21:01.848126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.284 [2024-12-16 13:21:01.848306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.284 [2024-12-16 13:21:01.848320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:47.284 
[2024-12-16 13:21:01.848330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:16:47.284 [2024-12-16 13:21:01.848336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.544 [2024-12-16 13:21:01.885705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.544 [2024-12-16 13:21:01.885735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:47.544 [2024-12-16 13:21:01.885745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.544 [2024-12-16 13:21:01.885751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.544 [2024-12-16 13:21:01.885821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.544 [2024-12-16 13:21:01.885828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:47.544 [2024-12-16 13:21:01.885838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.544 [2024-12-16 13:21:01.885843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.544 [2024-12-16 13:21:01.885878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.544 [2024-12-16 13:21:01.885886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:47.544 [2024-12-16 13:21:01.885895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.544 [2024-12-16 13:21:01.885902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.544 [2024-12-16 13:21:01.885918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.544 [2024-12-16 13:21:01.885925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:47.544 [2024-12-16 13:21:01.885931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.544 [2024-12-16 13:21:01.885939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.544 [2024-12-16 13:21:01.949529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.544 [2024-12-16 13:21:01.949564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:47.544 [2024-12-16 13:21:01.949575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.544 [2024-12-16 13:21:01.949582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.544 [2024-12-16 13:21:01.973560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.544 [2024-12-16 13:21:01.973590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:47.544 [2024-12-16 13:21:01.973602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.544 [2024-12-16 13:21:01.973609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.544 [2024-12-16 13:21:01.973668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.544 [2024-12-16 13:21:01.973677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:47.544 [2024-12-16 13:21:01.973687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.544 [2024-12-16 13:21:01.973693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.544 [2024-12-16 13:21:01.973721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.544 [2024-12-16 13:21:01.973727] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:47.544 [2024-12-16 13:21:01.973735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.544 [2024-12-16 13:21:01.973741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.544 [2024-12-16 13:21:01.973820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.544 [2024-12-16 13:21:01.973827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:47.544 [2024-12-16 13:21:01.973836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.544 [2024-12-16 13:21:01.973842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.544 [2024-12-16 13:21:01.973872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.544 [2024-12-16 13:21:01.973879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:47.544 [2024-12-16 13:21:01.973887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.544 [2024-12-16 13:21:01.973893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.544 [2024-12-16 13:21:01.973933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.544 [2024-12-16 13:21:01.973940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:47.544 [2024-12-16 13:21:01.973949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.544 [2024-12-16 13:21:01.973955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.544 [2024-12-16 13:21:01.973998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.544 [2024-12-16 13:21:01.974007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:47.544 [2024-12-16 13:21:01.974015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.545 [2024-12-16 13:21:01.974021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.545 [2024-12-16 13:21:01.974147] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 193.182 ms, result 0 00:16:48.487 13:21:02 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:48.487 [2024-12-16 13:21:02.753067] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
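The spdk_dd step starting above reads the device back into a file so the trimmed ranges can be verified. Earlier, the two bdev_ftl_unmap RPCs each returned true after trimming 1024 blocks at the ends of the logical space: with 23592960 L2P entries (per the layout dump above), --lba 23591936 is exactly 23592960 - 1024, i.e. the final 1024 blocks. A hedged sketch of the unmap-plus-readback pair, using the exact paths and flags echoed by trim.sh (in the run above the unmaps went to the first target, pid 72320, which was then killed; spdk_dd is a standalone app that recreates the bdev stack from the JSON config):

    # Trim 1024 blocks at the start and at the end of ftl0's logical space.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
    # Read the data back: --json recreates the bdev stack, --ib selects ftl0 as input.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
        --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json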
00:16:48.487 [2024-12-16 13:21:02.753189] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72380 ] 00:16:48.487 [2024-12-16 13:21:02.904425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.748 [2024-12-16 13:21:03.136380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.009 [2024-12-16 13:21:03.425842] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:49.009 [2024-12-16 13:21:03.425923] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:49.272 [2024-12-16 13:21:03.588786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.272 [2024-12-16 13:21:03.588849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:49.272 [2024-12-16 13:21:03.588864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:49.272 [2024-12-16 13:21:03.588872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.272 [2024-12-16 13:21:03.591811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.272 [2024-12-16 13:21:03.591864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:49.272 [2024-12-16 13:21:03.591875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.920 ms 00:16:49.272 [2024-12-16 13:21:03.591883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.272 [2024-12-16 13:21:03.591994] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:49.272 [2024-12-16 13:21:03.592781] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:49.272 [2024-12-16 13:21:03.592812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.272 [2024-12-16 13:21:03.592821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:49.272 [2024-12-16 13:21:03.592831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:16:49.272 [2024-12-16 13:21:03.592838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.272 [2024-12-16 13:21:03.594509] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:49.272 [2024-12-16 13:21:03.608890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.272 [2024-12-16 13:21:03.608939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:49.272 [2024-12-16 13:21:03.608953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.381 ms 00:16:49.272 [2024-12-16 13:21:03.608962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.272 [2024-12-16 13:21:03.609074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.272 [2024-12-16 13:21:03.609086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:49.272 [2024-12-16 13:21:03.609096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:49.272 [2024-12-16 13:21:03.609103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.272 [2024-12-16 13:21:03.617216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.272 [2024-12-16 
13:21:03.617260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:49.272 [2024-12-16 13:21:03.617270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.067 ms 00:16:49.272 [2024-12-16 13:21:03.617285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.272 [2024-12-16 13:21:03.617404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.272 [2024-12-16 13:21:03.617415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:49.272 [2024-12-16 13:21:03.617425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:16:49.272 [2024-12-16 13:21:03.617433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.272 [2024-12-16 13:21:03.617462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.272 [2024-12-16 13:21:03.617471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:49.272 [2024-12-16 13:21:03.617479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:49.272 [2024-12-16 13:21:03.617486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.272 [2024-12-16 13:21:03.617518] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:49.272 [2024-12-16 13:21:03.621674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.272 [2024-12-16 13:21:03.621710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:49.272 [2024-12-16 13:21:03.621720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.171 ms 00:16:49.272 [2024-12-16 13:21:03.621731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.272 [2024-12-16 13:21:03.621808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.272 [2024-12-16 13:21:03.621818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:49.272 [2024-12-16 13:21:03.621827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:16:49.272 [2024-12-16 13:21:03.621834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.272 [2024-12-16 13:21:03.621854] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:49.272 [2024-12-16 13:21:03.621875] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:49.272 [2024-12-16 13:21:03.621910] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:49.272 [2024-12-16 13:21:03.621929] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:49.272 [2024-12-16 13:21:03.622005] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:49.272 [2024-12-16 13:21:03.622016] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:49.272 [2024-12-16 13:21:03.622026] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:49.272 [2024-12-16 13:21:03.622036] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:49.272 [2024-12-16 13:21:03.622045] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:49.272 [2024-12-16 13:21:03.622054] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:49.272 [2024-12-16 13:21:03.622061] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:49.272 [2024-12-16 13:21:03.622069] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:49.272 [2024-12-16 13:21:03.622080] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:49.272 [2024-12-16 13:21:03.622088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.272 [2024-12-16 13:21:03.622096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:49.272 [2024-12-16 13:21:03.622104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:16:49.272 [2024-12-16 13:21:03.622111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.272 [2024-12-16 13:21:03.622177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.272 [2024-12-16 13:21:03.622186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:49.272 [2024-12-16 13:21:03.622194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:49.272 [2024-12-16 13:21:03.622201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.272 [2024-12-16 13:21:03.622279] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:49.272 [2024-12-16 13:21:03.622288] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:49.272 [2024-12-16 13:21:03.622297] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:49.272 [2024-12-16 13:21:03.622305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.272 [2024-12-16 13:21:03.622313] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:49.272 [2024-12-16 13:21:03.622320] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:49.272 [2024-12-16 13:21:03.622327] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:49.272 [2024-12-16 13:21:03.622335] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:49.272 [2024-12-16 13:21:03.622342] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:49.272 [2024-12-16 13:21:03.622348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:49.272 [2024-12-16 13:21:03.622355] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:49.272 [2024-12-16 13:21:03.622364] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:49.272 [2024-12-16 13:21:03.622371] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:49.272 [2024-12-16 13:21:03.622378] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:49.272 [2024-12-16 13:21:03.622393] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:49.272 [2024-12-16 13:21:03.622399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.273 [2024-12-16 13:21:03.622406] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:49.273 [2024-12-16 13:21:03.622413] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:49.273 [2024-12-16 13:21:03.622421] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:16:49.273 [2024-12-16 13:21:03.622428] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:49.273 [2024-12-16 13:21:03.622434] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:49.273 [2024-12-16 13:21:03.622441] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:49.273 [2024-12-16 13:21:03.622448] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:49.273 [2024-12-16 13:21:03.622455] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:49.273 [2024-12-16 13:21:03.622461] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:49.273 [2024-12-16 13:21:03.622467] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:49.273 [2024-12-16 13:21:03.622473] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:49.273 [2024-12-16 13:21:03.622480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:49.273 [2024-12-16 13:21:03.622486] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:49.273 [2024-12-16 13:21:03.622493] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:49.273 [2024-12-16 13:21:03.622499] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:49.273 [2024-12-16 13:21:03.622506] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:49.273 [2024-12-16 13:21:03.622512] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:49.273 [2024-12-16 13:21:03.622519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:49.273 [2024-12-16 13:21:03.622525] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:49.273 [2024-12-16 13:21:03.622532] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:49.273 [2024-12-16 13:21:03.622538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:49.273 [2024-12-16 13:21:03.622545] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:49.273 [2024-12-16 13:21:03.622551] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:49.273 [2024-12-16 13:21:03.622557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:49.273 [2024-12-16 13:21:03.622563] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:49.273 [2024-12-16 13:21:03.622571] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:49.273 [2024-12-16 13:21:03.622580] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:49.273 [2024-12-16 13:21:03.622591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.273 [2024-12-16 13:21:03.622601] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:49.273 [2024-12-16 13:21:03.622608] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:49.273 [2024-12-16 13:21:03.622615] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:49.273 [2024-12-16 13:21:03.622622] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:49.273 [2024-12-16 13:21:03.622646] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:49.273 [2024-12-16 13:21:03.622653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:49.273 [2024-12-16 13:21:03.622662] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:49.273 [2024-12-16 13:21:03.622672] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:49.273 [2024-12-16 13:21:03.622681] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:49.273 [2024-12-16 13:21:03.622689] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:49.273 [2024-12-16 13:21:03.622696] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:49.273 [2024-12-16 13:21:03.622703] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:49.273 [2024-12-16 13:21:03.622710] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:49.273 [2024-12-16 13:21:03.622718] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:49.273 [2024-12-16 13:21:03.622726] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:49.273 [2024-12-16 13:21:03.622733] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:49.273 [2024-12-16 13:21:03.622740] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:49.273 [2024-12-16 13:21:03.622747] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:49.273 [2024-12-16 13:21:03.622754] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:49.273 [2024-12-16 13:21:03.622762] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:49.273 [2024-12-16 13:21:03.622770] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:49.273 [2024-12-16 13:21:03.622777] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:49.273 [2024-12-16 13:21:03.622790] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:49.273 [2024-12-16 13:21:03.622799] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:49.273 [2024-12-16 13:21:03.622807] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:49.273 [2024-12-16 13:21:03.622815] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:49.273 [2024-12-16 13:21:03.622822] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:49.273 [2024-12-16 13:21:03.622829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.273 [2024-12-16 13:21:03.622837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:49.273 [2024-12-16 13:21:03.622844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:16:49.273 [2024-12-16 13:21:03.622852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.273 [2024-12-16 13:21:03.640971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.273 [2024-12-16 13:21:03.641016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:49.273 [2024-12-16 13:21:03.641029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.072 ms 00:16:49.273 [2024-12-16 13:21:03.641038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.273 [2024-12-16 13:21:03.641168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.273 [2024-12-16 13:21:03.641180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:49.273 [2024-12-16 13:21:03.641190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:16:49.273 [2024-12-16 13:21:03.641200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.273 [2024-12-16 13:21:03.688114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.273 [2024-12-16 13:21:03.688167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:49.273 [2024-12-16 13:21:03.688180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.889 ms 00:16:49.273 [2024-12-16 13:21:03.688189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.273 [2024-12-16 13:21:03.688272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.273 [2024-12-16 13:21:03.688282] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:49.273 [2024-12-16 13:21:03.688297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:49.273 [2024-12-16 13:21:03.688305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.273 [2024-12-16 13:21:03.688901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.273 [2024-12-16 13:21:03.688933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:49.273 [2024-12-16 13:21:03.688943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:16:49.273 [2024-12-16 13:21:03.688952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.273 [2024-12-16 13:21:03.689090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.273 [2024-12-16 13:21:03.689101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:49.273 [2024-12-16 13:21:03.689111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:16:49.273 [2024-12-16 13:21:03.689118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.273 [2024-12-16 13:21:03.706223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.273 [2024-12-16 13:21:03.706265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:49.273 [2024-12-16 13:21:03.706276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.079 ms 00:16:49.273 
[2024-12-16 13:21:03.706287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.273 [2024-12-16 13:21:03.720926] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:49.273 [2024-12-16 13:21:03.720974] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:49.273 [2024-12-16 13:21:03.720986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.273 [2024-12-16 13:21:03.720994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:49.273 [2024-12-16 13:21:03.721004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.585 ms 00:16:49.273 [2024-12-16 13:21:03.721011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.273 [2024-12-16 13:21:03.747645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.273 [2024-12-16 13:21:03.747699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:49.273 [2024-12-16 13:21:03.747711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.542 ms 00:16:49.273 [2024-12-16 13:21:03.747719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.273 [2024-12-16 13:21:03.761135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.274 [2024-12-16 13:21:03.761180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:49.274 [2024-12-16 13:21:03.761201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.324 ms 00:16:49.274 [2024-12-16 13:21:03.761209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.274 [2024-12-16 13:21:03.774263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.274 [2024-12-16 13:21:03.774307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:49.274 [2024-12-16 13:21:03.774319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.969 ms 00:16:49.274 [2024-12-16 13:21:03.774326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.274 [2024-12-16 13:21:03.774744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.274 [2024-12-16 13:21:03.774774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:49.274 [2024-12-16 13:21:03.774784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:16:49.274 [2024-12-16 13:21:03.774796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.274 [2024-12-16 13:21:03.837945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.274 [2024-12-16 13:21:03.837987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:49.274 [2024-12-16 13:21:03.837998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.123 ms 00:16:49.274 [2024-12-16 13:21:03.838010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.536 [2024-12-16 13:21:03.848479] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:49.536 [2024-12-16 13:21:03.862932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.536 [2024-12-16 13:21:03.862966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:49.536 [2024-12-16 13:21:03.862977] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.848 ms 00:16:49.536 [2024-12-16 13:21:03.862985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.536 [2024-12-16 13:21:03.863050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.536 [2024-12-16 13:21:03.863059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:49.536 [2024-12-16 13:21:03.863070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:49.536 [2024-12-16 13:21:03.863078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.536 [2024-12-16 13:21:03.863124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.536 [2024-12-16 13:21:03.863133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:49.536 [2024-12-16 13:21:03.863140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:49.536 [2024-12-16 13:21:03.863147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.536 [2024-12-16 13:21:03.864337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.536 [2024-12-16 13:21:03.864371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:49.536 [2024-12-16 13:21:03.864380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.167 ms 00:16:49.536 [2024-12-16 13:21:03.864387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.536 [2024-12-16 13:21:03.864417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.536 [2024-12-16 13:21:03.864427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:49.536 [2024-12-16 13:21:03.864435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:49.536 [2024-12-16 13:21:03.864442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.536 [2024-12-16 13:21:03.864474] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:49.536 [2024-12-16 13:21:03.864483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.536 [2024-12-16 13:21:03.864491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:49.536 [2024-12-16 13:21:03.864498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:49.536 [2024-12-16 13:21:03.864506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.536 [2024-12-16 13:21:03.888869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.536 [2024-12-16 13:21:03.888904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:49.536 [2024-12-16 13:21:03.888915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.341 ms 00:16:49.536 [2024-12-16 13:21:03.888923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.536 [2024-12-16 13:21:03.889009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.536 [2024-12-16 13:21:03.889020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:49.536 [2024-12-16 13:21:03.889028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:49.536 [2024-12-16 13:21:03.889035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.536 [2024-12-16 13:21:03.890239] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:49.536 [2024-12-16 13:21:03.893508] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 301.171 ms, result 0 00:16:49.536 [2024-12-16 13:21:03.894799] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:49.536 [2024-12-16 13:21:03.908123] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:50.511  [2024-12-16T13:21:06.028Z] Copying: 14/256 [MB] (14 MBps) [2024-12-16T13:21:06.973Z] Copying: 29/256 [MB] (15 MBps) [2024-12-16T13:21:08.361Z] Copying: 51/256 [MB] (21 MBps) [2024-12-16T13:21:09.305Z] Copying: 69/256 [MB] (18 MBps) [2024-12-16T13:21:10.249Z] Copying: 94/256 [MB] (24 MBps) [2024-12-16T13:21:11.194Z] Copying: 115/256 [MB] (21 MBps) [2024-12-16T13:21:12.139Z] Copying: 145/256 [MB] (29 MBps) [2024-12-16T13:21:13.083Z] Copying: 163/256 [MB] (18 MBps) [2024-12-16T13:21:14.026Z] Copying: 181/256 [MB] (17 MBps) [2024-12-16T13:21:15.411Z] Copying: 211/256 [MB] (29 MBps) [2024-12-16T13:21:15.672Z] Copying: 240/256 [MB] (29 MBps) [2024-12-16T13:21:15.934Z] Copying: 256/256 [MB] (average 22 MBps)[2024-12-16 13:21:15.740308] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:01.360 [2024-12-16 13:21:15.757798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.360 [2024-12-16 13:21:15.757871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:01.360 [2024-12-16 13:21:15.757889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:01.360 [2024-12-16 13:21:15.757898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.360 [2024-12-16 13:21:15.757929] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:01.360 [2024-12-16 13:21:15.761190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.360 [2024-12-16 13:21:15.761234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:01.360 [2024-12-16 13:21:15.761247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.242 ms 00:17:01.360 [2024-12-16 13:21:15.761256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.360 [2024-12-16 13:21:15.761580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.360 [2024-12-16 13:21:15.761593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:01.360 [2024-12-16 13:21:15.761603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:17:01.360 [2024-12-16 13:21:15.761616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.360 [2024-12-16 13:21:15.765361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.360 [2024-12-16 13:21:15.765389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:01.360 [2024-12-16 13:21:15.765400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.711 ms 00:17:01.360 [2024-12-16 13:21:15.765410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.360 [2024-12-16 13:21:15.772335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.360 [2024-12-16 13:21:15.772377] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:01.360 [2024-12-16 13:21:15.772390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.887 ms 00:17:01.360 [2024-12-16 13:21:15.772400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.360 [2024-12-16 13:21:15.799086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.360 [2024-12-16 13:21:15.799137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:01.360 [2024-12-16 13:21:15.799150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.598 ms 00:17:01.360 [2024-12-16 13:21:15.799158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.360 [2024-12-16 13:21:15.816274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.360 [2024-12-16 13:21:15.816322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:01.360 [2024-12-16 13:21:15.816335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.048 ms 00:17:01.360 [2024-12-16 13:21:15.816344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.360 [2024-12-16 13:21:15.816523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.360 [2024-12-16 13:21:15.816538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:01.360 [2024-12-16 13:21:15.816548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:17:01.360 [2024-12-16 13:21:15.816559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.360 [2024-12-16 13:21:15.842202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.360 [2024-12-16 13:21:15.842249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:01.360 [2024-12-16 13:21:15.842261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.622 ms 00:17:01.360 [2024-12-16 13:21:15.842268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.360 [2024-12-16 13:21:15.867641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.360 [2024-12-16 13:21:15.867686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:01.360 [2024-12-16 13:21:15.867698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.309 ms 00:17:01.360 [2024-12-16 13:21:15.867706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.360 [2024-12-16 13:21:15.892809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.360 [2024-12-16 13:21:15.892858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:01.360 [2024-12-16 13:21:15.892869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.041 ms 00:17:01.360 [2024-12-16 13:21:15.892877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.360 [2024-12-16 13:21:15.917439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.360 [2024-12-16 13:21:15.917485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:01.360 [2024-12-16 13:21:15.917499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.460 ms 00:17:01.360 [2024-12-16 13:21:15.917506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.360 [2024-12-16 13:21:15.917568] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:17:01.360 [2024-12-16 13:21:15.917588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:01.360 [2024-12-16 13:21:15.917600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:01.360 [2024-12-16 13:21:15.917609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:01.360 [2024-12-16 13:21:15.917617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:01.360 [2024-12-16 13:21:15.917685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:01.360 [2024-12-16 13:21:15.917697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:01.360 [2024-12-16 13:21:15.917707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:01.360 [2024-12-16 13:21:15.917715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.917995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918275] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:01.361 [2024-12-16 13:21:15.918449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:01.362 [2024-12-16 13:21:15.918468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:01.362 [2024-12-16 13:21:15.918476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:01.362 [2024-12-16 13:21:15.918485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:01.362 [2024-12-16 13:21:15.918493] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:01.362 [2024-12-16 13:21:15.918510] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:01.362 [2024-12-16 13:21:15.918519] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8943dfba-ecda-4808-8e9b-139434979057 00:17:01.362 [2024-12-16 13:21:15.918528] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:01.362 [2024-12-16 13:21:15.918537] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:01.362 [2024-12-16 13:21:15.918545] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:01.362 [2024-12-16 13:21:15.918554] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:01.362 [2024-12-16 13:21:15.918562] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:01.362 [2024-12-16 13:21:15.918574] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:01.362 [2024-12-16 13:21:15.918582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:01.362 [2024-12-16 13:21:15.918589] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:01.362 [2024-12-16 13:21:15.918597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:01.362 [2024-12-16 13:21:15.918605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.362 [2024-12-16 13:21:15.918614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:01.362 [2024-12-16 13:21:15.918637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.038 ms 00:17:01.362 [2024-12-16 13:21:15.918647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.624 [2024-12-16 13:21:15.933000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.624 [2024-12-16 13:21:15.933043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:01.624 [2024-12-16 13:21:15.933063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.315 ms 00:17:01.624 [2024-12-16 13:21:15.933071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.624 [2024-12-16 13:21:15.933330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.624 [2024-12-16 13:21:15.933341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:01.624 [2024-12-16 13:21:15.933350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:17:01.624 [2024-12-16 13:21:15.933358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.624 [2024-12-16 13:21:15.978255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.624 [2024-12-16 13:21:15.978308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:01.624 [2024-12-16 13:21:15.978327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.624 [2024-12-16 13:21:15.978336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.624 [2024-12-16 13:21:15.978442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.624 [2024-12-16 13:21:15.978454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:01.624 [2024-12-16 13:21:15.978462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.624 [2024-12-16 13:21:15.978471] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.624 [2024-12-16 13:21:15.978525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.624 [2024-12-16 13:21:15.978536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:01.624 [2024-12-16 13:21:15.978545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.624 [2024-12-16 13:21:15.978559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.624 [2024-12-16 13:21:15.978580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.624 [2024-12-16 13:21:15.978590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:01.624 [2024-12-16 13:21:15.978598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.624 [2024-12-16 13:21:15.978606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.624 [2024-12-16 13:21:16.065596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.624 [2024-12-16 13:21:16.065659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:01.624 [2024-12-16 13:21:16.065677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.624 [2024-12-16 13:21:16.065685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.624 [2024-12-16 13:21:16.100256] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.624 [2024-12-16 13:21:16.100308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:01.624 [2024-12-16 13:21:16.100320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.624 [2024-12-16 13:21:16.100329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.624 [2024-12-16 13:21:16.100392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.624 [2024-12-16 13:21:16.100403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:01.624 [2024-12-16 13:21:16.100412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.624 [2024-12-16 13:21:16.100421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.624 [2024-12-16 13:21:16.100464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.624 [2024-12-16 13:21:16.100474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:01.624 [2024-12-16 13:21:16.100484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.624 [2024-12-16 13:21:16.100492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.624 [2024-12-16 13:21:16.100610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.624 [2024-12-16 13:21:16.100623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:01.624 [2024-12-16 13:21:16.100683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.624 [2024-12-16 13:21:16.100691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.624 [2024-12-16 13:21:16.100740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.624 [2024-12-16 13:21:16.100750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:01.624 [2024-12-16 13:21:16.100759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:17:01.624 [2024-12-16 13:21:16.100768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.624 [2024-12-16 13:21:16.100821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.624 [2024-12-16 13:21:16.100833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:01.624 [2024-12-16 13:21:16.100842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.624 [2024-12-16 13:21:16.100852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.624 [2024-12-16 13:21:16.100917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.624 [2024-12-16 13:21:16.100932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:01.624 [2024-12-16 13:21:16.100942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.624 [2024-12-16 13:21:16.100950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.624 [2024-12-16 13:21:16.101144] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.359 ms, result 0 00:17:02.566 00:17:02.566 00:17:02.566 13:21:17 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:03.138 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:03.138 13:21:17 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:03.138 13:21:17 -- ftl/trim.sh@109 -- # fio_kill 00:17:03.138 13:21:17 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:03.138 13:21:17 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:03.138 13:21:17 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:03.399 13:21:17 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:03.399 13:21:17 -- ftl/trim.sh@20 -- # killprocess 72320 00:17:03.399 13:21:17 -- common/autotest_common.sh@936 -- # '[' -z 72320 ']' 00:17:03.399 13:21:17 -- common/autotest_common.sh@940 -- # kill -0 72320 00:17:03.399 Process with pid 72320 is not found 00:17:03.399 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (72320) - No such process 00:17:03.399 13:21:17 -- common/autotest_common.sh@963 -- # echo 'Process with pid 72320 is not found' 00:17:03.399 ************************************ 00:17:03.399 END TEST ftl_trim 00:17:03.399 ************************************ 00:17:03.399 00:17:03.399 real 1m12.414s 00:17:03.399 user 1m28.627s 00:17:03.399 sys 0m14.754s 00:17:03.399 13:21:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:03.399 13:21:17 -- common/autotest_common.sh@10 -- # set +x 00:17:03.399 13:21:17 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:17:03.399 13:21:17 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:03.399 13:21:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:03.399 13:21:17 -- common/autotest_common.sh@10 -- # set +x 00:17:03.399 ************************************ 00:17:03.399 START TEST ftl_restore 00:17:03.399 ************************************ 00:17:03.399 13:21:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:17:03.399 * Looking for test storage... 
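trim.sh closes out by checking the data read back through spdk_dd against the checksum recorded before the device was shut down: the 'md5sum -c .../testfile.md5' invocation above prints '.../data: OK' only if the restored contents match byte for byte. A stand-alone sketch of that round-trip, with example paths standing in for the test's real ones:

# Hypothetical illustration of the checksum round-trip (example paths only).
data=/tmp/ftl_demo_data
sum=/tmp/ftl_demo.md5

dd if=/dev/urandom of="$data" bs=1M count=16 status=none  # stand-in payload
md5sum "$data" > "$sum"     # record the checksum before the restart
# ... device shutdown, restart, and read-back would happen here ...
md5sum -c "$sum"            # prints "<path>: OK" on a faithful restore

The value of the pattern is that the verification step needs nothing from the writer but a small text file, so the write and the check can straddle a full target restart, which is exactly what the trim and restore tests exercise.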
00:17:03.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:03.399 13:21:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:03.399 13:21:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:03.399 13:21:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:03.659 13:21:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:03.659 13:21:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:03.659 13:21:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:03.659 13:21:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:03.659 13:21:18 -- scripts/common.sh@335 -- # IFS=.-: 00:17:03.659 13:21:18 -- scripts/common.sh@335 -- # read -ra ver1 00:17:03.659 13:21:18 -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.659 13:21:18 -- scripts/common.sh@336 -- # read -ra ver2 00:17:03.659 13:21:18 -- scripts/common.sh@337 -- # local 'op=<' 00:17:03.659 13:21:18 -- scripts/common.sh@339 -- # ver1_l=2 00:17:03.659 13:21:18 -- scripts/common.sh@340 -- # ver2_l=1 00:17:03.659 13:21:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:03.659 13:21:18 -- scripts/common.sh@343 -- # case "$op" in 00:17:03.659 13:21:18 -- scripts/common.sh@344 -- # : 1 00:17:03.659 13:21:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:03.659 13:21:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:03.659 13:21:18 -- scripts/common.sh@364 -- # decimal 1 00:17:03.659 13:21:18 -- scripts/common.sh@352 -- # local d=1 00:17:03.659 13:21:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.659 13:21:18 -- scripts/common.sh@354 -- # echo 1 00:17:03.659 13:21:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:03.659 13:21:18 -- scripts/common.sh@365 -- # decimal 2 00:17:03.659 13:21:18 -- scripts/common.sh@352 -- # local d=2 00:17:03.659 13:21:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.659 13:21:18 -- scripts/common.sh@354 -- # echo 2 00:17:03.659 13:21:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:03.659 13:21:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:03.659 13:21:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:03.659 13:21:18 -- scripts/common.sh@367 -- # return 0 00:17:03.659 13:21:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.659 13:21:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:03.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.659 --rc genhtml_branch_coverage=1 00:17:03.659 --rc genhtml_function_coverage=1 00:17:03.659 --rc genhtml_legend=1 00:17:03.659 --rc geninfo_all_blocks=1 00:17:03.659 --rc geninfo_unexecuted_blocks=1 00:17:03.659 00:17:03.659 ' 00:17:03.659 13:21:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:03.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.659 --rc genhtml_branch_coverage=1 00:17:03.659 --rc genhtml_function_coverage=1 00:17:03.659 --rc genhtml_legend=1 00:17:03.659 --rc geninfo_all_blocks=1 00:17:03.659 --rc geninfo_unexecuted_blocks=1 00:17:03.659 00:17:03.659 ' 00:17:03.659 13:21:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:03.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.659 --rc genhtml_branch_coverage=1 00:17:03.659 --rc genhtml_function_coverage=1 00:17:03.659 --rc genhtml_legend=1 00:17:03.659 --rc geninfo_all_blocks=1 00:17:03.659 --rc geninfo_unexecuted_blocks=1 00:17:03.659 00:17:03.659 ' 00:17:03.659 13:21:18 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:03.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.659 --rc genhtml_branch_coverage=1 00:17:03.659 --rc genhtml_function_coverage=1 00:17:03.659 --rc genhtml_legend=1 00:17:03.659 --rc geninfo_all_blocks=1 00:17:03.659 --rc geninfo_unexecuted_blocks=1 00:17:03.659 00:17:03.659 ' 00:17:03.659 13:21:18 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:03.659 13:21:18 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:03.659 13:21:18 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:03.659 13:21:18 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:03.659 13:21:18 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:03.659 13:21:18 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:03.659 13:21:18 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:03.659 13:21:18 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:03.659 13:21:18 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:03.659 13:21:18 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:03.659 13:21:18 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:03.659 13:21:18 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:03.659 13:21:18 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:03.659 13:21:18 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:03.659 13:21:18 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:03.659 13:21:18 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:03.659 13:21:18 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:03.659 13:21:18 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:03.659 13:21:18 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:03.659 13:21:18 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:03.659 13:21:18 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:03.659 13:21:18 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:03.659 13:21:18 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:03.659 13:21:18 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:03.659 13:21:18 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:03.659 13:21:18 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:03.659 13:21:18 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:03.659 13:21:18 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:03.659 13:21:18 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:03.659 13:21:18 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:03.659 13:21:18 -- ftl/restore.sh@13 -- # mktemp -d 00:17:03.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
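The xtrace that follows shows restore.sh consuming its arguments: 'getopts :u:c:f opt' maps '-c 0000:00:06.0' to nv_cache, the 'shift 2' drops the consumed flag pair, and the leftover positional 0000:00:07.0 becomes the base device, with a 240-second default timeout. A minimal sketch of that parser; only -c and the positional device are exercised in this log, so the -u and -f meanings below are assumptions:

    # sketch of the restore.sh option handling implied by the xtrace below
    nv_cache='' uuid='' fast=0
    while getopts ':u:c:f' opt; do
        case $opt in
            u) uuid=$OPTARG ;;      # assumed: pick an existing FTL UUID
            c) nv_cache=$OPTARG ;;  # -c 0000:00:06.0, the NV-cache PCIe address
            f) fast=1 ;;            # assumed: some fast-path toggle
        esac
    done
    shift $((OPTIND - 1))           # the 'shift 2' in the log
    device=$1                       # 0000:00:07.0, the base-device PCIe address
    timeout=240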
00:17:03.659 13:21:18 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.IpSDZEQFGG 00:17:03.659 13:21:18 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:03.659 13:21:18 -- ftl/restore.sh@16 -- # case $opt in 00:17:03.659 13:21:18 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0 00:17:03.659 13:21:18 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:03.659 13:21:18 -- ftl/restore.sh@23 -- # shift 2 00:17:03.659 13:21:18 -- ftl/restore.sh@24 -- # device=0000:00:07.0 00:17:03.659 13:21:18 -- ftl/restore.sh@25 -- # timeout=240 00:17:03.659 13:21:18 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:03.659 13:21:18 -- ftl/restore.sh@39 -- # svcpid=72616 00:17:03.660 13:21:18 -- ftl/restore.sh@41 -- # waitforlisten 72616 00:17:03.660 13:21:18 -- common/autotest_common.sh@829 -- # '[' -z 72616 ']' 00:17:03.660 13:21:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.660 13:21:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:03.660 13:21:18 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:03.660 13:21:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.660 13:21:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:03.660 13:21:18 -- common/autotest_common.sh@10 -- # set +x 00:17:03.660 [2024-12-16 13:21:18.117768] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:03.660 [2024-12-16 13:21:18.117881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72616 ] 00:17:03.920 [2024-12-16 13:21:18.269620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.920 [2024-12-16 13:21:18.479215] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:03.920 [2024-12-16 13:21:18.479438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.307 13:21:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.307 13:21:19 -- common/autotest_common.sh@862 -- # return 0 00:17:05.307 13:21:19 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:17:05.307 13:21:19 -- ftl/common.sh@54 -- # local name=nvme0 00:17:05.307 13:21:19 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:17:05.307 13:21:19 -- ftl/common.sh@56 -- # local size=103424 00:17:05.307 13:21:19 -- ftl/common.sh@59 -- # local base_bdev 00:17:05.307 13:21:19 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:17:05.569 13:21:19 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:05.569 13:21:19 -- ftl/common.sh@62 -- # local base_size 00:17:05.569 13:21:19 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:05.569 13:21:19 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:17:05.569 13:21:19 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:05.569 13:21:19 -- common/autotest_common.sh@1369 -- # local bs 00:17:05.569 13:21:19 -- common/autotest_common.sh@1370 -- # local nb 00:17:05.569 13:21:19 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:05.569 13:21:20 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:05.569 { 00:17:05.569 "name": 
"nvme0n1", 00:17:05.569 "aliases": [ 00:17:05.569 "6fee2644-798b-41cb-acb8-298de57c0f8e" 00:17:05.569 ], 00:17:05.569 "product_name": "NVMe disk", 00:17:05.569 "block_size": 4096, 00:17:05.569 "num_blocks": 1310720, 00:17:05.569 "uuid": "6fee2644-798b-41cb-acb8-298de57c0f8e", 00:17:05.569 "assigned_rate_limits": { 00:17:05.569 "rw_ios_per_sec": 0, 00:17:05.569 "rw_mbytes_per_sec": 0, 00:17:05.569 "r_mbytes_per_sec": 0, 00:17:05.569 "w_mbytes_per_sec": 0 00:17:05.569 }, 00:17:05.569 "claimed": true, 00:17:05.569 "claim_type": "read_many_write_one", 00:17:05.569 "zoned": false, 00:17:05.569 "supported_io_types": { 00:17:05.569 "read": true, 00:17:05.569 "write": true, 00:17:05.569 "unmap": true, 00:17:05.569 "write_zeroes": true, 00:17:05.569 "flush": true, 00:17:05.569 "reset": true, 00:17:05.569 "compare": true, 00:17:05.569 "compare_and_write": false, 00:17:05.569 "abort": true, 00:17:05.569 "nvme_admin": true, 00:17:05.569 "nvme_io": true 00:17:05.569 }, 00:17:05.569 "driver_specific": { 00:17:05.569 "nvme": [ 00:17:05.569 { 00:17:05.569 "pci_address": "0000:00:07.0", 00:17:05.569 "trid": { 00:17:05.569 "trtype": "PCIe", 00:17:05.569 "traddr": "0000:00:07.0" 00:17:05.569 }, 00:17:05.569 "ctrlr_data": { 00:17:05.569 "cntlid": 0, 00:17:05.569 "vendor_id": "0x1b36", 00:17:05.569 "model_number": "QEMU NVMe Ctrl", 00:17:05.569 "serial_number": "12341", 00:17:05.569 "firmware_revision": "8.0.0", 00:17:05.569 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:05.569 "oacs": { 00:17:05.569 "security": 0, 00:17:05.569 "format": 1, 00:17:05.569 "firmware": 0, 00:17:05.569 "ns_manage": 1 00:17:05.569 }, 00:17:05.569 "multi_ctrlr": false, 00:17:05.569 "ana_reporting": false 00:17:05.570 }, 00:17:05.570 "vs": { 00:17:05.570 "nvme_version": "1.4" 00:17:05.570 }, 00:17:05.570 "ns_data": { 00:17:05.570 "id": 1, 00:17:05.570 "can_share": false 00:17:05.570 } 00:17:05.570 } 00:17:05.570 ], 00:17:05.570 "mp_policy": "active_passive" 00:17:05.570 } 00:17:05.570 } 00:17:05.570 ]' 00:17:05.570 13:21:20 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:05.832 13:21:20 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:05.832 13:21:20 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:05.832 13:21:20 -- common/autotest_common.sh@1373 -- # nb=1310720 00:17:05.832 13:21:20 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:17:05.832 13:21:20 -- common/autotest_common.sh@1377 -- # echo 5120 00:17:05.832 13:21:20 -- ftl/common.sh@63 -- # base_size=5120 00:17:05.832 13:21:20 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:05.832 13:21:20 -- ftl/common.sh@67 -- # clear_lvols 00:17:05.832 13:21:20 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:05.832 13:21:20 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:05.832 13:21:20 -- ftl/common.sh@28 -- # stores=a4a99c48-f792-4ecf-a215-4f269cef5c50 00:17:05.832 13:21:20 -- ftl/common.sh@29 -- # for lvs in $stores 00:17:05.832 13:21:20 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a4a99c48-f792-4ecf-a215-4f269cef5c50 00:17:06.093 13:21:20 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:06.354 13:21:20 -- ftl/common.sh@68 -- # lvs=8baf27b7-b438-434a-9c96-425185719972 00:17:06.354 13:21:20 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8baf27b7-b438-434a-9c96-425185719972 00:17:06.615 13:21:21 -- ftl/restore.sh@43 
-- # split_bdev=60d60d78-b1d0-4e9b-96eb-048c3eba4db4 00:17:06.615 13:21:21 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:17:06.615 13:21:21 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 60d60d78-b1d0-4e9b-96eb-048c3eba4db4 00:17:06.615 13:21:21 -- ftl/common.sh@35 -- # local name=nvc0 00:17:06.615 13:21:21 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:17:06.615 13:21:21 -- ftl/common.sh@37 -- # local base_bdev=60d60d78-b1d0-4e9b-96eb-048c3eba4db4 00:17:06.615 13:21:21 -- ftl/common.sh@38 -- # local cache_size= 00:17:06.615 13:21:21 -- ftl/common.sh@41 -- # get_bdev_size 60d60d78-b1d0-4e9b-96eb-048c3eba4db4 00:17:06.615 13:21:21 -- common/autotest_common.sh@1367 -- # local bdev_name=60d60d78-b1d0-4e9b-96eb-048c3eba4db4 00:17:06.615 13:21:21 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:06.615 13:21:21 -- common/autotest_common.sh@1369 -- # local bs 00:17:06.615 13:21:21 -- common/autotest_common.sh@1370 -- # local nb 00:17:06.615 13:21:21 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 60d60d78-b1d0-4e9b-96eb-048c3eba4db4 00:17:06.874 13:21:21 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:06.874 { 00:17:06.874 "name": "60d60d78-b1d0-4e9b-96eb-048c3eba4db4", 00:17:06.874 "aliases": [ 00:17:06.874 "lvs/nvme0n1p0" 00:17:06.874 ], 00:17:06.874 "product_name": "Logical Volume", 00:17:06.874 "block_size": 4096, 00:17:06.874 "num_blocks": 26476544, 00:17:06.874 "uuid": "60d60d78-b1d0-4e9b-96eb-048c3eba4db4", 00:17:06.874 "assigned_rate_limits": { 00:17:06.874 "rw_ios_per_sec": 0, 00:17:06.874 "rw_mbytes_per_sec": 0, 00:17:06.874 "r_mbytes_per_sec": 0, 00:17:06.874 "w_mbytes_per_sec": 0 00:17:06.874 }, 00:17:06.874 "claimed": false, 00:17:06.874 "zoned": false, 00:17:06.874 "supported_io_types": { 00:17:06.874 "read": true, 00:17:06.874 "write": true, 00:17:06.875 "unmap": true, 00:17:06.875 "write_zeroes": true, 00:17:06.875 "flush": false, 00:17:06.875 "reset": true, 00:17:06.875 "compare": false, 00:17:06.875 "compare_and_write": false, 00:17:06.875 "abort": false, 00:17:06.875 "nvme_admin": false, 00:17:06.875 "nvme_io": false 00:17:06.875 }, 00:17:06.875 "driver_specific": { 00:17:06.875 "lvol": { 00:17:06.875 "lvol_store_uuid": "8baf27b7-b438-434a-9c96-425185719972", 00:17:06.875 "base_bdev": "nvme0n1", 00:17:06.875 "thin_provision": true, 00:17:06.875 "snapshot": false, 00:17:06.875 "clone": false, 00:17:06.875 "esnap_clone": false 00:17:06.875 } 00:17:06.875 } 00:17:06.875 } 00:17:06.875 ]' 00:17:06.875 13:21:21 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:06.875 13:21:21 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:06.875 13:21:21 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:06.875 13:21:21 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:06.875 13:21:21 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:06.875 13:21:21 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:06.875 13:21:21 -- ftl/common.sh@41 -- # local base_size=5171 00:17:06.875 13:21:21 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:06.875 13:21:21 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:17:07.136 13:21:21 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:07.136 13:21:21 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:07.136 13:21:21 -- ftl/common.sh@48 -- # get_bdev_size 60d60d78-b1d0-4e9b-96eb-048c3eba4db4 00:17:07.136 13:21:21 -- 
common/autotest_common.sh@1367 -- # local bdev_name=60d60d78-b1d0-4e9b-96eb-048c3eba4db4 00:17:07.136 13:21:21 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:07.136 13:21:21 -- common/autotest_common.sh@1369 -- # local bs 00:17:07.136 13:21:21 -- common/autotest_common.sh@1370 -- # local nb 00:17:07.136 13:21:21 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 60d60d78-b1d0-4e9b-96eb-048c3eba4db4 00:17:07.397 13:21:21 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:07.397 { 00:17:07.397 "name": "60d60d78-b1d0-4e9b-96eb-048c3eba4db4", 00:17:07.397 "aliases": [ 00:17:07.397 "lvs/nvme0n1p0" 00:17:07.397 ], 00:17:07.397 "product_name": "Logical Volume", 00:17:07.397 "block_size": 4096, 00:17:07.397 "num_blocks": 26476544, 00:17:07.397 "uuid": "60d60d78-b1d0-4e9b-96eb-048c3eba4db4", 00:17:07.397 "assigned_rate_limits": { 00:17:07.397 "rw_ios_per_sec": 0, 00:17:07.397 "rw_mbytes_per_sec": 0, 00:17:07.397 "r_mbytes_per_sec": 0, 00:17:07.397 "w_mbytes_per_sec": 0 00:17:07.397 }, 00:17:07.397 "claimed": false, 00:17:07.397 "zoned": false, 00:17:07.397 "supported_io_types": { 00:17:07.397 "read": true, 00:17:07.397 "write": true, 00:17:07.397 "unmap": true, 00:17:07.397 "write_zeroes": true, 00:17:07.397 "flush": false, 00:17:07.397 "reset": true, 00:17:07.397 "compare": false, 00:17:07.397 "compare_and_write": false, 00:17:07.397 "abort": false, 00:17:07.397 "nvme_admin": false, 00:17:07.397 "nvme_io": false 00:17:07.397 }, 00:17:07.397 "driver_specific": { 00:17:07.397 "lvol": { 00:17:07.397 "lvol_store_uuid": "8baf27b7-b438-434a-9c96-425185719972", 00:17:07.397 "base_bdev": "nvme0n1", 00:17:07.398 "thin_provision": true, 00:17:07.398 "snapshot": false, 00:17:07.398 "clone": false, 00:17:07.398 "esnap_clone": false 00:17:07.398 } 00:17:07.398 } 00:17:07.398 } 00:17:07.398 ]' 00:17:07.398 13:21:21 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:07.398 13:21:21 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:07.398 13:21:21 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:07.398 13:21:21 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:07.398 13:21:21 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:07.398 13:21:21 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:07.398 13:21:21 -- ftl/common.sh@48 -- # cache_size=5171 00:17:07.398 13:21:21 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:07.659 13:21:22 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:07.659 13:21:22 -- ftl/restore.sh@48 -- # get_bdev_size 60d60d78-b1d0-4e9b-96eb-048c3eba4db4 00:17:07.659 13:21:22 -- common/autotest_common.sh@1367 -- # local bdev_name=60d60d78-b1d0-4e9b-96eb-048c3eba4db4 00:17:07.659 13:21:22 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:07.659 13:21:22 -- common/autotest_common.sh@1369 -- # local bs 00:17:07.659 13:21:22 -- common/autotest_common.sh@1370 -- # local nb 00:17:07.659 13:21:22 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 60d60d78-b1d0-4e9b-96eb-048c3eba4db4 00:17:07.920 13:21:22 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:07.920 { 00:17:07.920 "name": "60d60d78-b1d0-4e9b-96eb-048c3eba4db4", 00:17:07.920 "aliases": [ 00:17:07.920 "lvs/nvme0n1p0" 00:17:07.920 ], 00:17:07.920 "product_name": "Logical Volume", 00:17:07.920 "block_size": 4096, 00:17:07.920 "num_blocks": 26476544, 00:17:07.920 "uuid": 
"60d60d78-b1d0-4e9b-96eb-048c3eba4db4", 00:17:07.920 "assigned_rate_limits": { 00:17:07.920 "rw_ios_per_sec": 0, 00:17:07.920 "rw_mbytes_per_sec": 0, 00:17:07.920 "r_mbytes_per_sec": 0, 00:17:07.920 "w_mbytes_per_sec": 0 00:17:07.920 }, 00:17:07.920 "claimed": false, 00:17:07.920 "zoned": false, 00:17:07.920 "supported_io_types": { 00:17:07.920 "read": true, 00:17:07.920 "write": true, 00:17:07.920 "unmap": true, 00:17:07.920 "write_zeroes": true, 00:17:07.920 "flush": false, 00:17:07.920 "reset": true, 00:17:07.920 "compare": false, 00:17:07.920 "compare_and_write": false, 00:17:07.920 "abort": false, 00:17:07.920 "nvme_admin": false, 00:17:07.920 "nvme_io": false 00:17:07.920 }, 00:17:07.920 "driver_specific": { 00:17:07.920 "lvol": { 00:17:07.920 "lvol_store_uuid": "8baf27b7-b438-434a-9c96-425185719972", 00:17:07.920 "base_bdev": "nvme0n1", 00:17:07.920 "thin_provision": true, 00:17:07.920 "snapshot": false, 00:17:07.920 "clone": false, 00:17:07.920 "esnap_clone": false 00:17:07.920 } 00:17:07.920 } 00:17:07.920 } 00:17:07.920 ]' 00:17:07.920 13:21:22 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:07.920 13:21:22 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:07.920 13:21:22 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:07.920 13:21:22 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:07.920 13:21:22 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:07.920 13:21:22 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:07.920 13:21:22 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:07.921 13:21:22 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 60d60d78-b1d0-4e9b-96eb-048c3eba4db4 --l2p_dram_limit 10' 00:17:07.921 13:21:22 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:07.921 13:21:22 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:17:07.921 13:21:22 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:07.921 13:21:22 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:07.921 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:07.921 13:21:22 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 60d60d78-b1d0-4e9b-96eb-048c3eba4db4 --l2p_dram_limit 10 -c nvc0n1p0 00:17:07.921 [2024-12-16 13:21:22.491091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.921 [2024-12-16 13:21:22.491168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:07.921 [2024-12-16 13:21:22.491189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:07.921 [2024-12-16 13:21:22.491200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.921 [2024-12-16 13:21:22.491273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.921 [2024-12-16 13:21:22.491285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:07.921 [2024-12-16 13:21:22.491296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:07.921 [2024-12-16 13:21:22.491306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.921 [2024-12-16 13:21:22.491330] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:07.921 [2024-12-16 13:21:22.492164] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:07.921 [2024-12-16 13:21:22.492212] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.921 [2024-12-16 13:21:22.492222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:07.921 [2024-12-16 13:21:22.492234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.883 ms 00:17:07.921 [2024-12-16 13:21:22.492242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.921 [2024-12-16 13:21:22.492329] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3ed509dd-a3fe-4b22-bf76-2ded1e0e3da3 00:17:08.182 [2024-12-16 13:21:22.494732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.182 [2024-12-16 13:21:22.494784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:08.182 [2024-12-16 13:21:22.494796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:08.182 [2024-12-16 13:21:22.494807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.182 [2024-12-16 13:21:22.507661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.182 [2024-12-16 13:21:22.507710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:08.182 [2024-12-16 13:21:22.507722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.779 ms 00:17:08.182 [2024-12-16 13:21:22.507733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.182 [2024-12-16 13:21:22.507837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.182 [2024-12-16 13:21:22.507849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:08.182 [2024-12-16 13:21:22.507858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:08.182 [2024-12-16 13:21:22.507875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.182 [2024-12-16 13:21:22.507932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.182 [2024-12-16 13:21:22.507946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:08.182 [2024-12-16 13:21:22.507956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:08.182 [2024-12-16 13:21:22.507967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.182 [2024-12-16 13:21:22.507993] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:08.182 [2024-12-16 13:21:22.513134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.182 [2024-12-16 13:21:22.513324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:08.182 [2024-12-16 13:21:22.513349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.146 ms 00:17:08.182 [2024-12-16 13:21:22.513359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.182 [2024-12-16 13:21:22.513414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.182 [2024-12-16 13:21:22.513425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:08.182 [2024-12-16 13:21:22.513436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:08.182 [2024-12-16 13:21:22.513444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.182 [2024-12-16 13:21:22.513482] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup 
mode 1 00:17:08.182 [2024-12-16 13:21:22.513614] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:08.182 [2024-12-16 13:21:22.513659] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:08.182 [2024-12-16 13:21:22.513672] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:08.182 [2024-12-16 13:21:22.513687] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:08.182 [2024-12-16 13:21:22.513697] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:08.182 [2024-12-16 13:21:22.513713] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:08.182 [2024-12-16 13:21:22.513733] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:08.182 [2024-12-16 13:21:22.513743] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:08.182 [2024-12-16 13:21:22.513751] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:08.182 [2024-12-16 13:21:22.513762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.182 [2024-12-16 13:21:22.513772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:08.182 [2024-12-16 13:21:22.513782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:17:08.182 [2024-12-16 13:21:22.513791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.182 [2024-12-16 13:21:22.513860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.182 [2024-12-16 13:21:22.513869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:08.182 [2024-12-16 13:21:22.513880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:08.182 [2024-12-16 13:21:22.513890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.182 [2024-12-16 13:21:22.513971] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:08.182 [2024-12-16 13:21:22.513981] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:08.182 [2024-12-16 13:21:22.513992] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:08.183 [2024-12-16 13:21:22.514001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.183 [2024-12-16 13:21:22.514012] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:08.183 [2024-12-16 13:21:22.514019] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:08.183 [2024-12-16 13:21:22.514029] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:08.183 [2024-12-16 13:21:22.514036] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:08.183 [2024-12-16 13:21:22.514045] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:08.183 [2024-12-16 13:21:22.514052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:08.183 [2024-12-16 13:21:22.514062] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:08.183 [2024-12-16 13:21:22.514068] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:08.183 [2024-12-16 13:21:22.514080] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:08.183 [2024-12-16 13:21:22.514087] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:08.183 [2024-12-16 13:21:22.514096] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:08.183 [2024-12-16 13:21:22.514102] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.183 [2024-12-16 13:21:22.514114] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:08.183 [2024-12-16 13:21:22.514120] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:08.183 [2024-12-16 13:21:22.514129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.183 [2024-12-16 13:21:22.514136] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:08.183 [2024-12-16 13:21:22.514145] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:08.183 [2024-12-16 13:21:22.514151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:08.183 [2024-12-16 13:21:22.514160] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:08.183 [2024-12-16 13:21:22.514167] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:08.183 [2024-12-16 13:21:22.514176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:08.183 [2024-12-16 13:21:22.514182] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:08.183 [2024-12-16 13:21:22.514192] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:08.183 [2024-12-16 13:21:22.514198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:08.183 [2024-12-16 13:21:22.514207] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:08.183 [2024-12-16 13:21:22.514213] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:08.183 [2024-12-16 13:21:22.514222] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:08.183 [2024-12-16 13:21:22.514230] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:08.183 [2024-12-16 13:21:22.514241] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:08.183 [2024-12-16 13:21:22.514248] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:08.183 [2024-12-16 13:21:22.514256] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:08.183 [2024-12-16 13:21:22.514264] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:08.183 [2024-12-16 13:21:22.514273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:08.183 [2024-12-16 13:21:22.514281] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:08.183 [2024-12-16 13:21:22.514291] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:08.183 [2024-12-16 13:21:22.514298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:08.183 [2024-12-16 13:21:22.514307] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:08.183 [2024-12-16 13:21:22.514316] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:08.183 [2024-12-16 13:21:22.514326] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:08.183 [2024-12-16 13:21:22.514334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.183 [2024-12-16 13:21:22.514347] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:08.183 [2024-12-16 13:21:22.514354] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:08.183 [2024-12-16 13:21:22.514364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:08.183 [2024-12-16 13:21:22.514371] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:08.183 [2024-12-16 13:21:22.514383] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:08.183 [2024-12-16 13:21:22.514391] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:08.183 [2024-12-16 13:21:22.514402] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:08.183 [2024-12-16 13:21:22.514416] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:08.183 [2024-12-16 13:21:22.514427] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:08.183 [2024-12-16 13:21:22.514435] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:08.183 [2024-12-16 13:21:22.514444] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:08.183 [2024-12-16 13:21:22.514451] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:08.183 [2024-12-16 13:21:22.514460] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:08.183 [2024-12-16 13:21:22.514467] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:08.183 [2024-12-16 13:21:22.514476] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:08.183 [2024-12-16 13:21:22.514483] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:08.183 [2024-12-16 13:21:22.514494] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:08.183 [2024-12-16 13:21:22.514508] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:08.183 [2024-12-16 13:21:22.514517] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:08.183 [2024-12-16 13:21:22.514524] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:08.183 [2024-12-16 13:21:22.514538] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:08.183 [2024-12-16 13:21:22.514545] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:08.183 [2024-12-16 13:21:22.514556] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 
00:17:08.183 [2024-12-16 13:21:22.514567] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:08.183 [2024-12-16 13:21:22.514578] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:08.183 [2024-12-16 13:21:22.514585] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:08.183 [2024-12-16 13:21:22.514595] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:08.183 [2024-12-16 13:21:22.514603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.183 [2024-12-16 13:21:22.514613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:08.183 [2024-12-16 13:21:22.514621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:17:08.183 [2024-12-16 13:21:22.514646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.183 [2024-12-16 13:21:22.535771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.183 [2024-12-16 13:21:22.535821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:08.183 [2024-12-16 13:21:22.535833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.079 ms 00:17:08.183 [2024-12-16 13:21:22.535844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.183 [2024-12-16 13:21:22.535939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.184 [2024-12-16 13:21:22.535952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:08.184 [2024-12-16 13:21:22.535963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:08.184 [2024-12-16 13:21:22.535974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.184 [2024-12-16 13:21:22.573246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.184 [2024-12-16 13:21:22.573286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:08.184 [2024-12-16 13:21:22.573296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.225 ms 00:17:08.184 [2024-12-16 13:21:22.573306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.184 [2024-12-16 13:21:22.573339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.184 [2024-12-16 13:21:22.573349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:08.184 [2024-12-16 13:21:22.573357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:08.184 [2024-12-16 13:21:22.573368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.184 [2024-12-16 13:21:22.573916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.184 [2024-12-16 13:21:22.573946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:08.184 [2024-12-16 13:21:22.573956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.500 ms 00:17:08.184 [2024-12-16 13:21:22.573966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.184 [2024-12-16 13:21:22.574087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.184 [2024-12-16 
13:21:22.574100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:08.184 [2024-12-16 13:21:22.574108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:17:08.184 [2024-12-16 13:21:22.574118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.184 [2024-12-16 13:21:22.591781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.184 [2024-12-16 13:21:22.591816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:08.184 [2024-12-16 13:21:22.591826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.644 ms 00:17:08.184 [2024-12-16 13:21:22.591835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.184 [2024-12-16 13:21:22.604651] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:08.184 [2024-12-16 13:21:22.608278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.184 [2024-12-16 13:21:22.608310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:08.184 [2024-12-16 13:21:22.608322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.366 ms 00:17:08.184 [2024-12-16 13:21:22.608330] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.184 [2024-12-16 13:21:22.692033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.184 [2024-12-16 13:21:22.692090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:08.184 [2024-12-16 13:21:22.692107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.672 ms 00:17:08.184 [2024-12-16 13:21:22.692117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.184 [2024-12-16 13:21:22.692168] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
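Two numbers in the startup trace above can be cross-checked by hand: the l2p region of 80.00 MiB in the layout dump is exactly the 20971520 L2P entries times the 4-byte address size, and the 4 GiB scrub announced next matches the 4096.00 MiB data_nvc region; '--l2p_dram_limit 10' is why only part of that table stays resident ('l2p maximum resident size is: 9 (of 10) MiB'). A quick shell-arithmetic check, values copied from the log:

    entries=20971520 addr_size=4 block_size=4096
    echo $(( entries * addr_size / 1024 / 1024 ))          # 80 -> 'Region l2p ... 80.00 MiB'
    echo $(( entries * block_size / 1024 / 1024 / 1024 ))  # 80 -> GiB addressable through the L2P
    echo $(( 4096 / 1024 ))                                # 4  -> data_nvc 4096 MiB = 'Scrubbing 4GiB'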
00:17:08.184 [2024-12-16 13:21:22.692182] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:12.387 [2024-12-16 13:21:26.505203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.387 [2024-12-16 13:21:26.505579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:12.387 [2024-12-16 13:21:26.505618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3813.002 ms 00:17:12.387 [2024-12-16 13:21:26.505655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.387 [2024-12-16 13:21:26.505943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.387 [2024-12-16 13:21:26.505958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:12.387 [2024-12-16 13:21:26.505976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:17:12.387 [2024-12-16 13:21:26.505986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.387 [2024-12-16 13:21:26.532868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.387 [2024-12-16 13:21:26.532920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:12.387 [2024-12-16 13:21:26.532939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.815 ms 00:17:12.387 [2024-12-16 13:21:26.532949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.387 [2024-12-16 13:21:26.558677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.387 [2024-12-16 13:21:26.558723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:12.387 [2024-12-16 13:21:26.558745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.672 ms 00:17:12.387 [2024-12-16 13:21:26.558753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.387 [2024-12-16 13:21:26.559122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.387 [2024-12-16 13:21:26.559133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:12.387 [2024-12-16 13:21:26.559145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:17:12.387 [2024-12-16 13:21:26.559154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.387 [2024-12-16 13:21:26.634032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.387 [2024-12-16 13:21:26.634080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:12.387 [2024-12-16 13:21:26.634098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.818 ms 00:17:12.387 [2024-12-16 13:21:26.634107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.387 [2024-12-16 13:21:26.662744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.387 [2024-12-16 13:21:26.662794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:12.387 [2024-12-16 13:21:26.662810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.580 ms 00:17:12.387 [2024-12-16 13:21:26.662819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.387 [2024-12-16 13:21:26.664986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.387 [2024-12-16 13:21:26.665032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
00:17:12.387 [2024-12-16 13:21:26.665049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.113 ms 00:17:12.387 [2024-12-16 13:21:26.665058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.387 [2024-12-16 13:21:26.691944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.387 [2024-12-16 13:21:26.691991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:12.387 [2024-12-16 13:21:26.692006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.824 ms 00:17:12.387 [2024-12-16 13:21:26.692014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.387 [2024-12-16 13:21:26.692075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.387 [2024-12-16 13:21:26.692085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:12.387 [2024-12-16 13:21:26.692097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:12.387 [2024-12-16 13:21:26.692105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.387 [2024-12-16 13:21:26.692226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.387 [2024-12-16 13:21:26.692238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:12.387 [2024-12-16 13:21:26.692249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:12.387 [2024-12-16 13:21:26.692256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.387 [2024-12-16 13:21:26.693714] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4202.009 ms, result 0 00:17:12.387 { 00:17:12.387 "name": "ftl0", 00:17:12.387 "uuid": "3ed509dd-a3fe-4b22-bf76-2ded1e0e3da3" 00:17:12.387 } 00:17:12.387 13:21:26 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:12.387 13:21:26 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:12.387 13:21:26 -- ftl/restore.sh@63 -- # echo ']}' 00:17:12.387 13:21:26 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:12.650 [2024-12-16 13:21:27.096662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.650 [2024-12-16 13:21:27.096704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:12.650 [2024-12-16 13:21:27.096716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:12.650 [2024-12-16 13:21:27.096731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.650 [2024-12-16 13:21:27.096751] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:12.650 [2024-12-16 13:21:27.099000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.650 [2024-12-16 13:21:27.099117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:12.650 [2024-12-16 13:21:27.099134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.234 ms 00:17:12.650 [2024-12-16 13:21:27.099147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.650 [2024-12-16 13:21:27.099357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.650 [2024-12-16 13:21:27.099369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:12.650 [2024-12-16 
13:21:27.099378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:17:12.650 [2024-12-16 13:21:27.099385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.650 [2024-12-16 13:21:27.101828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.650 [2024-12-16 13:21:27.101846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:12.650 [2024-12-16 13:21:27.101855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.429 ms 00:17:12.650 [2024-12-16 13:21:27.101861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.650 [2024-12-16 13:21:27.106485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.650 [2024-12-16 13:21:27.106509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:12.650 [2024-12-16 13:21:27.106519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.606 ms 00:17:12.650 [2024-12-16 13:21:27.106526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.650 [2024-12-16 13:21:27.126013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.650 [2024-12-16 13:21:27.126039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:12.650 [2024-12-16 13:21:27.126049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.428 ms 00:17:12.651 [2024-12-16 13:21:27.126056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.651 [2024-12-16 13:21:27.139538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.651 [2024-12-16 13:21:27.139565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:12.651 [2024-12-16 13:21:27.139576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.450 ms 00:17:12.651 [2024-12-16 13:21:27.139582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.651 [2024-12-16 13:21:27.139710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.651 [2024-12-16 13:21:27.139719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:12.651 [2024-12-16 13:21:27.139728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:17:12.651 [2024-12-16 13:21:27.139736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.651 [2024-12-16 13:21:27.158477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.651 [2024-12-16 13:21:27.158500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:12.651 [2024-12-16 13:21:27.158510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.722 ms 00:17:12.651 [2024-12-16 13:21:27.158515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.651 [2024-12-16 13:21:27.176779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.651 [2024-12-16 13:21:27.176873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:12.651 [2024-12-16 13:21:27.176888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.233 ms 00:17:12.651 [2024-12-16 13:21:27.176894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.651 [2024-12-16 13:21:27.194417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.651 [2024-12-16 13:21:27.194440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist superblock 00:17:12.651 [2024-12-16 13:21:27.194450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.495 ms 00:17:12.651 [2024-12-16 13:21:27.194455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.651 [2024-12-16 13:21:27.211999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.651 [2024-12-16 13:21:27.212080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:12.651 [2024-12-16 13:21:27.212095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.485 ms 00:17:12.651 [2024-12-16 13:21:27.212101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.651 [2024-12-16 13:21:27.212128] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:12.651 [2024-12-16 13:21:27.212143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 
[2024-12-16 13:21:27.212274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 
state: free 00:17:12.651 [2024-12-16 13:21:27.212439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 
0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:12.651 [2024-12-16 13:21:27.212622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:12.652 [2024-12-16 13:21:27.212861] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:12.652 [2024-12-16 13:21:27.212869] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3ed509dd-a3fe-4b22-bf76-2ded1e0e3da3 00:17:12.652 [2024-12-16 13:21:27.212876] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:12.652 [2024-12-16 13:21:27.212883] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:12.652 [2024-12-16 13:21:27.212888] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:12.652 [2024-12-16 13:21:27.212896] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:12.652 [2024-12-16 13:21:27.212901] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:12.652 [2024-12-16 13:21:27.212909] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:12.652 [2024-12-16 13:21:27.212914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:12.652 [2024-12-16 13:21:27.212921] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:12.652 [2024-12-16 13:21:27.212926] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:12.652 [2024-12-16 13:21:27.212935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.652 [2024-12-16 13:21:27.212941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:12.652 [2024-12-16 13:21:27.212951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:17:12.652 [2024-12-16 13:21:27.212956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.914 [2024-12-16 13:21:27.222965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.914 [2024-12-16 13:21:27.223057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:12.914 [2024-12-16 13:21:27.223072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.983 ms 00:17:12.914 [2024-12-16 13:21:27.223078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.914 [2024-12-16 13:21:27.223236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.914 [2024-12-16 13:21:27.223249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:12.914 [2024-12-16 13:21:27.223257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:17:12.914 [2024-12-16 13:21:27.223262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.914 [2024-12-16 
13:21:27.260149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.914 [2024-12-16 13:21:27.260176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:12.914 [2024-12-16 13:21:27.260187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.914 [2024-12-16 13:21:27.260193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.914 [2024-12-16 13:21:27.260246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.914 [2024-12-16 13:21:27.260254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:12.914 [2024-12-16 13:21:27.260262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.914 [2024-12-16 13:21:27.260268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.914 [2024-12-16 13:21:27.260325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.914 [2024-12-16 13:21:27.260333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:12.914 [2024-12-16 13:21:27.260341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.914 [2024-12-16 13:21:27.260346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.914 [2024-12-16 13:21:27.260361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.914 [2024-12-16 13:21:27.260367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:12.914 [2024-12-16 13:21:27.260376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.914 [2024-12-16 13:21:27.260382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.914 [2024-12-16 13:21:27.322816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.914 [2024-12-16 13:21:27.322850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:12.914 [2024-12-16 13:21:27.322861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.914 [2024-12-16 13:21:27.322868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.914 [2024-12-16 13:21:27.347062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.914 [2024-12-16 13:21:27.347091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:12.914 [2024-12-16 13:21:27.347100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.914 [2024-12-16 13:21:27.347107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.914 [2024-12-16 13:21:27.347164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.914 [2024-12-16 13:21:27.347172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:12.914 [2024-12-16 13:21:27.347180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.914 [2024-12-16 13:21:27.347187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.914 [2024-12-16 13:21:27.347225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.914 [2024-12-16 13:21:27.347233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:12.914 [2024-12-16 13:21:27.347241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.914 [2024-12-16 13:21:27.347248] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.914 [2024-12-16 13:21:27.347325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.914 [2024-12-16 13:21:27.347333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:12.914 [2024-12-16 13:21:27.347341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.914 [2024-12-16 13:21:27.347347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.914 [2024-12-16 13:21:27.347380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.914 [2024-12-16 13:21:27.347387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:12.914 [2024-12-16 13:21:27.347395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.914 [2024-12-16 13:21:27.347401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.914 [2024-12-16 13:21:27.347440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.914 [2024-12-16 13:21:27.347447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:12.914 [2024-12-16 13:21:27.347455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.914 [2024-12-16 13:21:27.347461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.914 [2024-12-16 13:21:27.347505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.914 [2024-12-16 13:21:27.347512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:12.914 [2024-12-16 13:21:27.347521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.914 [2024-12-16 13:21:27.347528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.914 [2024-12-16 13:21:27.347669] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 250.948 ms, result 0 00:17:12.914 true 00:17:12.914 13:21:27 -- ftl/restore.sh@66 -- # killprocess 72616 00:17:12.914 13:21:27 -- common/autotest_common.sh@936 -- # '[' -z 72616 ']' 00:17:12.914 13:21:27 -- common/autotest_common.sh@940 -- # kill -0 72616 00:17:12.914 13:21:27 -- common/autotest_common.sh@941 -- # uname 00:17:12.914 13:21:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:12.914 13:21:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72616 00:17:12.914 killing process with pid 72616 00:17:12.914 13:21:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:12.914 13:21:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:12.914 13:21:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72616' 00:17:12.914 13:21:27 -- common/autotest_common.sh@955 -- # kill 72616 00:17:12.914 13:21:27 -- common/autotest_common.sh@960 -- # wait 72616 00:17:19.545 13:21:33 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:17:22.848 262144+0 records in 00:17:22.848 262144+0 records out 00:17:22.848 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.63863 s, 295 MB/s 00:17:22.848 13:21:37 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:17:24.761 13:21:39 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 
00:17:24.761 [2024-12-16 13:21:39.107492] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:24.761 [2024-12-16 13:21:39.107789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72874 ] 00:17:24.761 [2024-12-16 13:21:39.255258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.022 [2024-12-16 13:21:39.521910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.283 [2024-12-16 13:21:39.849868] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:25.283 [2024-12-16 13:21:39.849966] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:25.545 [2024-12-16 13:21:40.006324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.545 [2024-12-16 13:21:40.006393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:25.545 [2024-12-16 13:21:40.006411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:25.545 [2024-12-16 13:21:40.006423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.545 [2024-12-16 13:21:40.006481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.545 [2024-12-16 13:21:40.006492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:25.545 [2024-12-16 13:21:40.006501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:25.545 [2024-12-16 13:21:40.006509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.545 [2024-12-16 13:21:40.006530] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:25.545 [2024-12-16 13:21:40.007497] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:25.545 [2024-12-16 13:21:40.007554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.545 [2024-12-16 13:21:40.007564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:25.545 [2024-12-16 13:21:40.007573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.029 ms 00:17:25.545 [2024-12-16 13:21:40.007582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.545 [2024-12-16 13:21:40.010088] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:25.545 [2024-12-16 13:21:40.025769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.545 [2024-12-16 13:21:40.025826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:25.545 [2024-12-16 13:21:40.025840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.684 ms 00:17:25.545 [2024-12-16 13:21:40.025848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.546 [2024-12-16 13:21:40.025936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.546 [2024-12-16 13:21:40.025947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:25.546 [2024-12-16 13:21:40.025956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:25.546 [2024-12-16 13:21:40.025965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:25.546 [2024-12-16 13:21:40.038115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.546 [2024-12-16 13:21:40.038393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:25.546 [2024-12-16 13:21:40.038415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.064 ms 00:17:25.546 [2024-12-16 13:21:40.038425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.546 [2024-12-16 13:21:40.038546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.546 [2024-12-16 13:21:40.038558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:25.546 [2024-12-16 13:21:40.038568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:25.546 [2024-12-16 13:21:40.038577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.546 [2024-12-16 13:21:40.038685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.546 [2024-12-16 13:21:40.038697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:25.546 [2024-12-16 13:21:40.038707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:25.546 [2024-12-16 13:21:40.038715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.546 [2024-12-16 13:21:40.038754] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:25.546 [2024-12-16 13:21:40.043736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.546 [2024-12-16 13:21:40.043781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:25.546 [2024-12-16 13:21:40.043793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.001 ms 00:17:25.546 [2024-12-16 13:21:40.043803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.546 [2024-12-16 13:21:40.043856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.546 [2024-12-16 13:21:40.043865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:25.546 [2024-12-16 13:21:40.043875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:25.546 [2024-12-16 13:21:40.043888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.546 [2024-12-16 13:21:40.043933] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:25.546 [2024-12-16 13:21:40.043960] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:25.546 [2024-12-16 13:21:40.044000] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:25.546 [2024-12-16 13:21:40.044020] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:25.546 [2024-12-16 13:21:40.044105] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:25.546 [2024-12-16 13:21:40.044118] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:25.546 [2024-12-16 13:21:40.044134] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:25.546 [2024-12-16 13:21:40.044146] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:25.546 [2024-12-16 13:21:40.044157] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:25.546 [2024-12-16 13:21:40.044167] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:25.546 [2024-12-16 13:21:40.044176] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:25.546 [2024-12-16 13:21:40.044185] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:25.546 [2024-12-16 13:21:40.044193] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:25.546 [2024-12-16 13:21:40.044202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.546 [2024-12-16 13:21:40.044210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:25.546 [2024-12-16 13:21:40.044219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:17:25.546 [2024-12-16 13:21:40.044226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.546 [2024-12-16 13:21:40.044292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.546 [2024-12-16 13:21:40.044300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:25.546 [2024-12-16 13:21:40.044309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:25.546 [2024-12-16 13:21:40.044317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.546 [2024-12-16 13:21:40.044397] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:25.546 [2024-12-16 13:21:40.044407] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:25.546 [2024-12-16 13:21:40.044416] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:25.546 [2024-12-16 13:21:40.044424] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.546 [2024-12-16 13:21:40.044433] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:25.546 [2024-12-16 13:21:40.044439] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:25.546 [2024-12-16 13:21:40.044446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:25.546 [2024-12-16 13:21:40.044455] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:25.546 [2024-12-16 13:21:40.044462] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:25.546 [2024-12-16 13:21:40.044470] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:25.546 [2024-12-16 13:21:40.044477] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:25.546 [2024-12-16 13:21:40.044484] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:25.546 [2024-12-16 13:21:40.044491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:25.546 [2024-12-16 13:21:40.044497] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:25.546 [2024-12-16 13:21:40.044504] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:25.546 [2024-12-16 13:21:40.044516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.546 [2024-12-16 13:21:40.044533] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:25.546 [2024-12-16 13:21:40.044541] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:25.546 [2024-12-16 13:21:40.044547] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.546 [2024-12-16 13:21:40.044554] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:25.546 [2024-12-16 13:21:40.044561] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:25.546 [2024-12-16 13:21:40.044568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:25.546 [2024-12-16 13:21:40.044575] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:25.546 [2024-12-16 13:21:40.044582] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:25.546 [2024-12-16 13:21:40.044588] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:25.546 [2024-12-16 13:21:40.044595] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:25.546 [2024-12-16 13:21:40.044602] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:25.546 [2024-12-16 13:21:40.044608] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:25.546 [2024-12-16 13:21:40.044615] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:25.546 [2024-12-16 13:21:40.044621] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:25.546 [2024-12-16 13:21:40.044656] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:25.546 [2024-12-16 13:21:40.044664] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:25.546 [2024-12-16 13:21:40.044671] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:25.546 [2024-12-16 13:21:40.044677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:25.546 [2024-12-16 13:21:40.044685] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:25.546 [2024-12-16 13:21:40.044692] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:25.546 [2024-12-16 13:21:40.044699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:25.546 [2024-12-16 13:21:40.044706] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:25.546 [2024-12-16 13:21:40.044713] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:25.546 [2024-12-16 13:21:40.044719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:25.546 [2024-12-16 13:21:40.044725] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:25.546 [2024-12-16 13:21:40.044737] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:25.546 [2024-12-16 13:21:40.044746] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:25.546 [2024-12-16 13:21:40.044755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.546 [2024-12-16 13:21:40.044763] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:25.546 [2024-12-16 13:21:40.044770] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:25.546 [2024-12-16 13:21:40.044777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:25.546 [2024-12-16 13:21:40.044788] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:25.546 [2024-12-16 13:21:40.044796] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:25.546 [2024-12-16 13:21:40.044847] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:25.546 [2024-12-16 13:21:40.044857] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:25.546 [2024-12-16 13:21:40.044868] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:25.546 [2024-12-16 13:21:40.044878] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:25.546 [2024-12-16 13:21:40.044885] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:25.546 [2024-12-16 13:21:40.044893] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:25.546 [2024-12-16 13:21:40.044901] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:25.547 [2024-12-16 13:21:40.044909] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:25.547 [2024-12-16 13:21:40.044917] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:25.547 [2024-12-16 13:21:40.044924] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:25.547 [2024-12-16 13:21:40.044931] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:25.547 [2024-12-16 13:21:40.044939] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:25.547 [2024-12-16 13:21:40.044947] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:25.547 [2024-12-16 13:21:40.044955] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:25.547 [2024-12-16 13:21:40.044963] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:25.547 [2024-12-16 13:21:40.044971] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:25.547 [2024-12-16 13:21:40.044979] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:25.547 [2024-12-16 13:21:40.044988] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:25.547 [2024-12-16 13:21:40.044997] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:25.547 [2024-12-16 13:21:40.045005] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:25.547 [2024-12-16 13:21:40.045013] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:25.547 [2024-12-16 
13:21:40.045022] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:25.547 [2024-12-16 13:21:40.045031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.547 [2024-12-16 13:21:40.045039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:25.547 [2024-12-16 13:21:40.045047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:17:25.547 [2024-12-16 13:21:40.045055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.547 [2024-12-16 13:21:40.067233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.547 [2024-12-16 13:21:40.067289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:25.547 [2024-12-16 13:21:40.067302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.130 ms 00:17:25.547 [2024-12-16 13:21:40.067318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.547 [2024-12-16 13:21:40.067423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.547 [2024-12-16 13:21:40.067432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:25.547 [2024-12-16 13:21:40.067441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:17:25.547 [2024-12-16 13:21:40.067449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.809 [2024-12-16 13:21:40.119987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.809 [2024-12-16 13:21:40.120049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:25.809 [2024-12-16 13:21:40.120063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.479 ms 00:17:25.809 [2024-12-16 13:21:40.120072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.809 [2024-12-16 13:21:40.120129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.809 [2024-12-16 13:21:40.120139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:25.809 [2024-12-16 13:21:40.120149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:25.809 [2024-12-16 13:21:40.120157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.809 [2024-12-16 13:21:40.121031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.809 [2024-12-16 13:21:40.121057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:25.809 [2024-12-16 13:21:40.121069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.816 ms 00:17:25.809 [2024-12-16 13:21:40.121085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.809 [2024-12-16 13:21:40.121236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.809 [2024-12-16 13:21:40.121249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:25.809 [2024-12-16 13:21:40.121260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:17:25.809 [2024-12-16 13:21:40.121269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.809 [2024-12-16 13:21:40.140782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.809 [2024-12-16 13:21:40.140862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 
00:17:25.809 [2024-12-16 13:21:40.140875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.483 ms 00:17:25.809 [2024-12-16 13:21:40.140885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.809 [2024-12-16 13:21:40.156659] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:25.809 [2024-12-16 13:21:40.156714] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:25.809 [2024-12-16 13:21:40.156728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.809 [2024-12-16 13:21:40.156738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:25.809 [2024-12-16 13:21:40.156749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.719 ms 00:17:25.809 [2024-12-16 13:21:40.156757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.809 [2024-12-16 13:21:40.183811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.809 [2024-12-16 13:21:40.184013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:25.809 [2024-12-16 13:21:40.184037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.981 ms 00:17:25.809 [2024-12-16 13:21:40.184046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.809 [2024-12-16 13:21:40.197438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.809 [2024-12-16 13:21:40.197489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:25.809 [2024-12-16 13:21:40.197503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.340 ms 00:17:25.809 [2024-12-16 13:21:40.197510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.809 [2024-12-16 13:21:40.210752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.809 [2024-12-16 13:21:40.210953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:25.809 [2024-12-16 13:21:40.210989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.191 ms 00:17:25.809 [2024-12-16 13:21:40.210998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.809 [2024-12-16 13:21:40.211506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.809 [2024-12-16 13:21:40.211544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:25.809 [2024-12-16 13:21:40.211555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:17:25.809 [2024-12-16 13:21:40.211564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.809 [2024-12-16 13:21:40.285700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.809 [2024-12-16 13:21:40.285920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:25.809 [2024-12-16 13:21:40.285946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.115 ms 00:17:25.809 [2024-12-16 13:21:40.285957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.809 [2024-12-16 13:21:40.298013] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:25.809 [2024-12-16 13:21:40.302239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.809 [2024-12-16 13:21:40.302284] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:25.809 [2024-12-16 13:21:40.302297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.230 ms 00:17:25.809 [2024-12-16 13:21:40.302307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.809 [2024-12-16 13:21:40.302407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.809 [2024-12-16 13:21:40.302419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:25.809 [2024-12-16 13:21:40.302430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:25.809 [2024-12-16 13:21:40.302438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.809 [2024-12-16 13:21:40.302512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.809 [2024-12-16 13:21:40.302523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:25.809 [2024-12-16 13:21:40.302533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:25.809 [2024-12-16 13:21:40.302541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.809 [2024-12-16 13:21:40.304096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.809 [2024-12-16 13:21:40.304146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:25.809 [2024-12-16 13:21:40.304157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.535 ms 00:17:25.809 [2024-12-16 13:21:40.304165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.809 [2024-12-16 13:21:40.304207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.809 [2024-12-16 13:21:40.304217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:25.809 [2024-12-16 13:21:40.304226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:25.809 [2024-12-16 13:21:40.304241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.809 [2024-12-16 13:21:40.304285] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:25.809 [2024-12-16 13:21:40.304296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.809 [2024-12-16 13:21:40.304303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:25.809 [2024-12-16 13:21:40.304314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:25.809 [2024-12-16 13:21:40.304323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.809 [2024-12-16 13:21:40.332387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.809 [2024-12-16 13:21:40.332592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:25.809 [2024-12-16 13:21:40.332616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.042 ms 00:17:25.810 [2024-12-16 13:21:40.332647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.810 [2024-12-16 13:21:40.332741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.810 [2024-12-16 13:21:40.332760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:25.810 [2024-12-16 13:21:40.332771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:25.810 [2024-12-16 13:21:40.332780] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.810 [2024-12-16 13:21:40.334448] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 327.551 ms, result 0 00:17:27.195  [2024-12-16T13:21:42.710Z] Copying: 23/1024 [MB] (23 MBps) [2024-12-16T13:21:43.656Z] Copying: 42/1024 [MB] (19 MBps) [2024-12-16T13:21:44.599Z] Copying: 52/1024 [MB] (10 MBps) [2024-12-16T13:21:45.542Z] Copying: 63/1024 [MB] (10 MBps) [2024-12-16T13:21:46.485Z] Copying: 75/1024 [MB] (11 MBps) [2024-12-16T13:21:47.429Z] Copying: 85/1024 [MB] (10 MBps) [2024-12-16T13:21:48.372Z] Copying: 101/1024 [MB] (15 MBps) [2024-12-16T13:21:49.759Z] Copying: 122/1024 [MB] (21 MBps) [2024-12-16T13:21:50.703Z] Copying: 138/1024 [MB] (15 MBps) [2024-12-16T13:21:51.644Z] Copying: 150/1024 [MB] (12 MBps) [2024-12-16T13:21:52.588Z] Copying: 167/1024 [MB] (17 MBps) [2024-12-16T13:21:53.532Z] Copying: 187/1024 [MB] (19 MBps) [2024-12-16T13:21:54.482Z] Copying: 212/1024 [MB] (24 MBps) [2024-12-16T13:21:55.432Z] Copying: 226/1024 [MB] (14 MBps) [2024-12-16T13:21:56.425Z] Copying: 237/1024 [MB] (10 MBps) [2024-12-16T13:21:57.394Z] Copying: 251/1024 [MB] (14 MBps) [2024-12-16T13:21:58.782Z] Copying: 271/1024 [MB] (20 MBps) [2024-12-16T13:21:59.354Z] Copying: 287/1024 [MB] (15 MBps) [2024-12-16T13:22:00.742Z] Copying: 303/1024 [MB] (15 MBps) [2024-12-16T13:22:01.680Z] Copying: 315/1024 [MB] (12 MBps) [2024-12-16T13:22:02.620Z] Copying: 331/1024 [MB] (15 MBps) [2024-12-16T13:22:03.563Z] Copying: 363/1024 [MB] (32 MBps) [2024-12-16T13:22:04.507Z] Copying: 381/1024 [MB] (17 MBps) [2024-12-16T13:22:05.450Z] Copying: 402/1024 [MB] (20 MBps) [2024-12-16T13:22:06.391Z] Copying: 418/1024 [MB] (16 MBps) [2024-12-16T13:22:07.777Z] Copying: 438/1024 [MB] (20 MBps) [2024-12-16T13:22:08.349Z] Copying: 456/1024 [MB] (17 MBps) [2024-12-16T13:22:09.736Z] Copying: 474/1024 [MB] (18 MBps) [2024-12-16T13:22:10.680Z] Copying: 493/1024 [MB] (18 MBps) [2024-12-16T13:22:11.625Z] Copying: 509/1024 [MB] (15 MBps) [2024-12-16T13:22:12.569Z] Copying: 527/1024 [MB] (17 MBps) [2024-12-16T13:22:13.513Z] Copying: 544/1024 [MB] (17 MBps) [2024-12-16T13:22:14.457Z] Copying: 558/1024 [MB] (13 MBps) [2024-12-16T13:22:15.399Z] Copying: 571/1024 [MB] (12 MBps) [2024-12-16T13:22:16.785Z] Copying: 585/1024 [MB] (14 MBps) [2024-12-16T13:22:17.357Z] Copying: 596/1024 [MB] (10 MBps) [2024-12-16T13:22:18.773Z] Copying: 606/1024 [MB] (10 MBps) [2024-12-16T13:22:19.352Z] Copying: 634/1024 [MB] (28 MBps) [2024-12-16T13:22:20.737Z] Copying: 671/1024 [MB] (36 MBps) [2024-12-16T13:22:21.681Z] Copying: 694/1024 [MB] (23 MBps) [2024-12-16T13:22:22.624Z] Copying: 724/1024 [MB] (30 MBps) [2024-12-16T13:22:23.568Z] Copying: 760/1024 [MB] (35 MBps) [2024-12-16T13:22:24.512Z] Copying: 775/1024 [MB] (15 MBps) [2024-12-16T13:22:25.456Z] Copying: 788/1024 [MB] (13 MBps) [2024-12-16T13:22:26.399Z] Copying: 809/1024 [MB] (21 MBps) [2024-12-16T13:22:27.785Z] Copying: 832/1024 [MB] (22 MBps) [2024-12-16T13:22:28.357Z] Copying: 849/1024 [MB] (16 MBps) [2024-12-16T13:22:29.745Z] Copying: 865/1024 [MB] (15 MBps) [2024-12-16T13:22:30.689Z] Copying: 882/1024 [MB] (17 MBps) [2024-12-16T13:22:31.633Z] Copying: 908/1024 [MB] (25 MBps) [2024-12-16T13:22:32.577Z] Copying: 941/1024 [MB] (33 MBps) [2024-12-16T13:22:33.522Z] Copying: 958/1024 [MB] (16 MBps) [2024-12-16T13:22:34.466Z] Copying: 970/1024 [MB] (11 MBps) [2024-12-16T13:22:35.411Z] Copying: 987/1024 [MB] (16 MBps) [2024-12-16T13:22:36.355Z] Copying: 997/1024 [MB] (10 MBps) 
[2024-12-16T13:22:37.744Z] Copying: 1011/1024 [MB] (13 MBps) [2024-12-16T13:22:37.744Z] Copying: 1022/1024 [MB] (10 MBps) [2024-12-16T13:22:37.744Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-12-16 13:22:37.510059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.170 [2024-12-16 13:22:37.510129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:23.170 [2024-12-16 13:22:37.510148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:23.170 [2024-12-16 13:22:37.510157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.170 [2024-12-16 13:22:37.510180] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:23.170 [2024-12-16 13:22:37.513389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.170 [2024-12-16 13:22:37.513641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:23.170 [2024-12-16 13:22:37.513678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.192 ms 00:18:23.170 [2024-12-16 13:22:37.513688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.170 [2024-12-16 13:22:37.516871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.170 [2024-12-16 13:22:37.516917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:23.170 [2024-12-16 13:22:37.516929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.151 ms 00:18:23.170 [2024-12-16 13:22:37.516938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.170 [2024-12-16 13:22:37.536428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.170 [2024-12-16 13:22:37.536615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:23.170 [2024-12-16 13:22:37.536651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.471 ms 00:18:23.170 [2024-12-16 13:22:37.536669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.170 [2024-12-16 13:22:37.542843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.170 [2024-12-16 13:22:37.543013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:23.170 [2024-12-16 13:22:37.543035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.134 ms 00:18:23.170 [2024-12-16 13:22:37.543045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.170 [2024-12-16 13:22:37.570589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.170 [2024-12-16 13:22:37.570794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:23.170 [2024-12-16 13:22:37.570817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.468 ms 00:18:23.170 [2024-12-16 13:22:37.570826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.170 [2024-12-16 13:22:37.587544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.170 [2024-12-16 13:22:37.587591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:23.170 [2024-12-16 13:22:37.587607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.678 ms 00:18:23.170 [2024-12-16 13:22:37.587616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.170 [2024-12-16 13:22:37.587800] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.170 [2024-12-16 13:22:37.587815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:23.170 [2024-12-16 13:22:37.587827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:18:23.170 [2024-12-16 13:22:37.587837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.170 [2024-12-16 13:22:37.614233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.170 [2024-12-16 13:22:37.614411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:23.170 [2024-12-16 13:22:37.614431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.380 ms 00:18:23.170 [2024-12-16 13:22:37.614439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.170 [2024-12-16 13:22:37.640483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.170 [2024-12-16 13:22:37.640530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:23.170 [2024-12-16 13:22:37.640543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.006 ms 00:18:23.170 [2024-12-16 13:22:37.640564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.170 [2024-12-16 13:22:37.665478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.170 [2024-12-16 13:22:37.665525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:23.170 [2024-12-16 13:22:37.665537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.872 ms 00:18:23.170 [2024-12-16 13:22:37.665545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.170 [2024-12-16 13:22:37.690349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.171 [2024-12-16 13:22:37.690394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:23.171 [2024-12-16 13:22:37.690406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.684 ms 00:18:23.171 [2024-12-16 13:22:37.690414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.171 [2024-12-16 13:22:37.690457] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:23.171 [2024-12-16 13:22:37.690476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690776] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 
13:22:37.690968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.690985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 
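The Bands validity dump above, which continues below through Band 100, is printed by ftl_debug.c:ftl_dev_dump_bands() right after "Set FTL clean state"; in this run every band reports 0 / 261120 valid blocks, wr_cnt: 0, and state: free. A minimal sketch, assuming this console output has been saved to build.log (a placeholder name), for tallying the band states out of such a dump:

  #!/usr/bin/env bash
  # Tally band states from an FTL "Bands validity" dump. Assumes the per-band
  # format seen above: "... Band <N>: <valid> / <total> wr_cnt: <n> state: <s>".
  # build.log is a placeholder for wherever this console output was saved.
  grep -o 'Band [0-9]*: [0-9]* / [0-9]* wr_cnt: [0-9]* state: [a-z]*' build.log |
  awk '{ states[$NF]++ } END { for (s in states) printf "%s: %d bands\n", s, states[s] }'

If the device shut down clean, every band counted should land in the "free" bucket (the log contains one such dump per FTL shutdown, so counts accumulate across the whole file).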
00:18:23.171 [2024-12-16 13:22:37.691175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:23.171 [2024-12-16 13:22:37.691215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:23.172 [2024-12-16 13:22:37.691222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:23.172 [2024-12-16 13:22:37.691229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:23.172 [2024-12-16 13:22:37.691236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:23.172 [2024-12-16 13:22:37.691244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:23.172 [2024-12-16 13:22:37.691251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:23.172 [2024-12-16 13:22:37.691264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:23.172 [2024-12-16 13:22:37.691271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:23.172 [2024-12-16 13:22:37.691278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:23.172 [2024-12-16 13:22:37.691286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:23.172 [2024-12-16 13:22:37.691292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:23.172 [2024-12-16 13:22:37.691301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:23.172 [2024-12-16 13:22:37.691308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:23.172 [2024-12-16 13:22:37.691326] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:23.172 [2024-12-16 13:22:37.691336] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3ed509dd-a3fe-4b22-bf76-2ded1e0e3da3 00:18:23.172 [2024-12-16 13:22:37.691344] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:23.172 [2024-12-16 13:22:37.691352] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:23.172 [2024-12-16 13:22:37.691360] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:23.172 [2024-12-16 13:22:37.691368] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:23.172 [2024-12-16 13:22:37.691378] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:23.172 [2024-12-16 13:22:37.691388] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:23.172 [2024-12-16 13:22:37.691395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:23.172 [2024-12-16 13:22:37.691402] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:23.172 [2024-12-16 13:22:37.691421] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:23.172 [2024-12-16 13:22:37.691429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.172 [2024-12-16 13:22:37.691437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:23.172 [2024-12-16 13:22:37.691448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:18:23.172 [2024-12-16 13:22:37.691460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.172 [2024-12-16 13:22:37.706138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.172 [2024-12-16 13:22:37.706178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:23.172 [2024-12-16 13:22:37.706190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.644 ms 00:18:23.172 [2024-12-16 13:22:37.706199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.172 [2024-12-16 13:22:37.706445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.172 [2024-12-16 13:22:37.706457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:23.172 [2024-12-16 13:22:37.706474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:18:23.172 [2024-12-16 13:22:37.706482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.433 [2024-12-16 13:22:37.748043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.433 [2024-12-16 13:22:37.748223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:23.433 [2024-12-16 13:22:37.748242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.433 [2024-12-16 13:22:37.748252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.433 [2024-12-16 13:22:37.748317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.433 [2024-12-16 13:22:37.748326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:23.433 [2024-12-16 13:22:37.748342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.433 [2024-12-16 13:22:37.748350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.433 [2024-12-16 13:22:37.748440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.433 [2024-12-16 13:22:37.748453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:23.433 [2024-12-16 13:22:37.748462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.433 [2024-12-16 13:22:37.748471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.433 [2024-12-16 13:22:37.748488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.433 [2024-12-16 13:22:37.748497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:23.433 [2024-12-16 13:22:37.748506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.433 [2024-12-16 13:22:37.748519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.433 [2024-12-16 13:22:37.836260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.433 [2024-12-16 13:22:37.836318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:18:23.433 [2024-12-16 13:22:37.836332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.433 [2024-12-16 13:22:37.836341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.433 [2024-12-16 13:22:37.871128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.433 [2024-12-16 13:22:37.871354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:23.433 [2024-12-16 13:22:37.871375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.433 [2024-12-16 13:22:37.871392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.433 [2024-12-16 13:22:37.871477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.433 [2024-12-16 13:22:37.871488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:23.433 [2024-12-16 13:22:37.871498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.433 [2024-12-16 13:22:37.871506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.433 [2024-12-16 13:22:37.871553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.433 [2024-12-16 13:22:37.871565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:23.433 [2024-12-16 13:22:37.871574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.433 [2024-12-16 13:22:37.871585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.433 [2024-12-16 13:22:37.871738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.433 [2024-12-16 13:22:37.871751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:23.433 [2024-12-16 13:22:37.871760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.433 [2024-12-16 13:22:37.871769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.433 [2024-12-16 13:22:37.871813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.433 [2024-12-16 13:22:37.871826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:23.433 [2024-12-16 13:22:37.871835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.433 [2024-12-16 13:22:37.871844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.433 [2024-12-16 13:22:37.871898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.433 [2024-12-16 13:22:37.871910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:23.433 [2024-12-16 13:22:37.871919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.433 [2024-12-16 13:22:37.871927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.433 [2024-12-16 13:22:37.871989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.433 [2024-12-16 13:22:37.872001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:23.433 [2024-12-16 13:22:37.872010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.433 [2024-12-16 13:22:37.872021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.433 [2024-12-16 13:22:37.872187] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 
362.080 ms, result 0 00:18:24.820 00:18:24.820 00:18:24.820 13:22:39 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:24.820 [2024-12-16 13:22:39.139395] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:24.820 [2024-12-16 13:22:39.139772] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73501 ] 00:18:24.820 [2024-12-16 13:22:39.291223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.082 [2024-12-16 13:22:39.541322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.343 [2024-12-16 13:22:39.866696] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:25.343 [2024-12-16 13:22:39.866781] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:25.605 [2024-12-16 13:22:40.024338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.605 [2024-12-16 13:22:40.024460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:25.605 [2024-12-16 13:22:40.024498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:25.605 [2024-12-16 13:22:40.024529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.605 [2024-12-16 13:22:40.024715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.605 [2024-12-16 13:22:40.024751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:25.605 [2024-12-16 13:22:40.024796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:18:25.605 [2024-12-16 13:22:40.024818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.605 [2024-12-16 13:22:40.024877] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:25.605 [2024-12-16 13:22:40.026317] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:25.605 [2024-12-16 13:22:40.026354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.605 [2024-12-16 13:22:40.026363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:25.605 [2024-12-16 13:22:40.026374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.491 ms 00:18:25.605 [2024-12-16 13:22:40.026382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.605 [2024-12-16 13:22:40.028724] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:25.605 [2024-12-16 13:22:40.044382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.605 [2024-12-16 13:22:40.044644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:25.605 [2024-12-16 13:22:40.044670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.660 ms 00:18:25.605 [2024-12-16 13:22:40.044682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.605 [2024-12-16 13:22:40.044765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.605 [2024-12-16 13:22:40.044776] mngt/ftl_mngt.c: 407:trace_step: 
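The harness line above (ftl/restore.sh@74) is the step that drives everything that follows: spdk_dd opens the ftl0 bdev described by ftl.json, copies 262144 blocks out to a plain test file, and then tears the FTL device down again. 262144 blocks matching the 1024 MiB total in the Copying lines further below implies a 4 KiB FTL block size; that figure is an inference from the arithmetic, not something the log states. A sketch of replaying the step by hand, with every path and flag copied verbatim from the invocation above:

  #!/usr/bin/env bash
  # Replay of the read-back step logged above. All paths and flags are copied
  # from the spdk_dd invocation in this log; 262144 blocks * 4 KiB/block
  # = 1024 MiB is an assumption consistent with the "Copying: .../1024 [MB]" lines.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_dd" --ib=ftl0 \
      --of="$SPDK/test/ftl/testfile" \
      --json="$SPDK/test/ftl/config/ftl.json" \
      --count=262144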
*NOTICE*: [FTL][ftl0] name: Validate super block 00:18:25.605 [2024-12-16 13:22:40.044786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:25.605 [2024-12-16 13:22:40.044793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.605 [2024-12-16 13:22:40.056587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.605 [2024-12-16 13:22:40.056819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:25.605 [2024-12-16 13:22:40.056842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.706 ms 00:18:25.605 [2024-12-16 13:22:40.056851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.605 [2024-12-16 13:22:40.056961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.605 [2024-12-16 13:22:40.056972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:25.605 [2024-12-16 13:22:40.056982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:18:25.605 [2024-12-16 13:22:40.056993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.605 [2024-12-16 13:22:40.057062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.605 [2024-12-16 13:22:40.057073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:25.605 [2024-12-16 13:22:40.057083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:25.605 [2024-12-16 13:22:40.057092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.605 [2024-12-16 13:22:40.057126] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:25.605 [2024-12-16 13:22:40.061996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.605 [2024-12-16 13:22:40.062056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:25.605 [2024-12-16 13:22:40.062069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.885 ms 00:18:25.605 [2024-12-16 13:22:40.062078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.605 [2024-12-16 13:22:40.062125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.605 [2024-12-16 13:22:40.062134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:25.605 [2024-12-16 13:22:40.062144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:25.605 [2024-12-16 13:22:40.062155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.605 [2024-12-16 13:22:40.062199] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:25.605 [2024-12-16 13:22:40.062227] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:25.605 [2024-12-16 13:22:40.062267] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:25.605 [2024-12-16 13:22:40.062285] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:25.605 [2024-12-16 13:22:40.062366] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:25.605 [2024-12-16 13:22:40.062380] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout 
blob store 0x48 bytes 00:18:25.605 [2024-12-16 13:22:40.062394] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:25.605 [2024-12-16 13:22:40.062406] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:25.605 [2024-12-16 13:22:40.062416] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:25.605 [2024-12-16 13:22:40.062426] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:25.605 [2024-12-16 13:22:40.062435] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:25.605 [2024-12-16 13:22:40.062444] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:25.605 [2024-12-16 13:22:40.062452] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:25.605 [2024-12-16 13:22:40.062462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.605 [2024-12-16 13:22:40.062470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:25.605 [2024-12-16 13:22:40.062479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:18:25.605 [2024-12-16 13:22:40.062488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.605 [2024-12-16 13:22:40.062552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.605 [2024-12-16 13:22:40.062562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:25.605 [2024-12-16 13:22:40.062572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:18:25.605 [2024-12-16 13:22:40.062581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.605 [2024-12-16 13:22:40.062678] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:25.605 [2024-12-16 13:22:40.062691] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:25.605 [2024-12-16 13:22:40.062700] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:25.605 [2024-12-16 13:22:40.062710] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.605 [2024-12-16 13:22:40.062718] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:25.605 [2024-12-16 13:22:40.062725] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:25.605 [2024-12-16 13:22:40.062733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:25.605 [2024-12-16 13:22:40.062743] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:25.605 [2024-12-16 13:22:40.062752] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:25.605 [2024-12-16 13:22:40.062759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:25.605 [2024-12-16 13:22:40.062768] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:25.605 [2024-12-16 13:22:40.062777] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:25.605 [2024-12-16 13:22:40.062785] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:25.605 [2024-12-16 13:22:40.062797] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:25.605 [2024-12-16 13:22:40.062804] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:25.605 [2024-12-16 13:22:40.062811] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.605 [2024-12-16 13:22:40.062827] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:25.605 [2024-12-16 13:22:40.062836] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:25.605 [2024-12-16 13:22:40.062845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.605 [2024-12-16 13:22:40.062853] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:25.605 [2024-12-16 13:22:40.062860] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:25.605 [2024-12-16 13:22:40.062868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:25.605 [2024-12-16 13:22:40.062875] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:25.605 [2024-12-16 13:22:40.062883] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:25.605 [2024-12-16 13:22:40.062891] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:25.605 [2024-12-16 13:22:40.062909] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:25.605 [2024-12-16 13:22:40.062917] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:25.605 [2024-12-16 13:22:40.062924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:25.606 [2024-12-16 13:22:40.062932] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:25.606 [2024-12-16 13:22:40.062940] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:25.606 [2024-12-16 13:22:40.062947] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:25.606 [2024-12-16 13:22:40.062954] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:25.606 [2024-12-16 13:22:40.062961] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:25.606 [2024-12-16 13:22:40.062968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:25.606 [2024-12-16 13:22:40.062975] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:25.606 [2024-12-16 13:22:40.062981] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:25.606 [2024-12-16 13:22:40.062989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:25.606 [2024-12-16 13:22:40.062997] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:25.606 [2024-12-16 13:22:40.063004] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:25.606 [2024-12-16 13:22:40.063011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:25.606 [2024-12-16 13:22:40.063017] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:25.606 [2024-12-16 13:22:40.063028] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:25.606 [2024-12-16 13:22:40.063036] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:25.606 [2024-12-16 13:22:40.063043] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.606 [2024-12-16 13:22:40.063053] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:25.606 [2024-12-16 13:22:40.063063] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:25.606 [2024-12-16 13:22:40.063071] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
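Each region in the layout dump above is printed as a three-line record (Region name, then offset and blocks, both in MiB); the base-device listing resumes below with data_btm, whose 102400.00 MiB accounts for nearly all of the 103424.00 MiB base-device capacity reported earlier. A rough sketch, under the same saved-log assumption as above, that folds those triples back into one row per region:

  #!/usr/bin/env bash
  # Rebuild one row per region from the dump_region triples in a saved copy
  # of this log ("Region <name>" / "offset: <x> MiB" / "blocks: <y> MiB").
  # build.log is a placeholder path.
  grep -oE '(Region [a-z0-9_]+|offset: [0-9.]+ MiB|blocks: [0-9.]+ MiB)' build.log |
  awk '/^Region/ { name = $2 }
       /^offset/ { off  = $2 }
       /^blocks/ { printf "%-16s offset %12s MiB  size %8s MiB\n", name, off, $2 }'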
00:18:25.606 [2024-12-16 13:22:40.063078] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:25.606 [2024-12-16 13:22:40.063085] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:25.606 [2024-12-16 13:22:40.063093] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:25.606 [2024-12-16 13:22:40.063101] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:25.606 [2024-12-16 13:22:40.063111] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:25.606 [2024-12-16 13:22:40.063121] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:25.606 [2024-12-16 13:22:40.063128] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:25.606 [2024-12-16 13:22:40.063136] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:25.606 [2024-12-16 13:22:40.063143] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:25.606 [2024-12-16 13:22:40.063151] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:25.606 [2024-12-16 13:22:40.063158] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:25.606 [2024-12-16 13:22:40.063165] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:25.606 [2024-12-16 13:22:40.063173] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:25.606 [2024-12-16 13:22:40.063181] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:25.606 [2024-12-16 13:22:40.063188] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:25.606 [2024-12-16 13:22:40.063195] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:25.606 [2024-12-16 13:22:40.063202] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:25.606 [2024-12-16 13:22:40.063209] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:25.606 [2024-12-16 13:22:40.063217] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:25.606 [2024-12-16 13:22:40.063226] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:25.606 [2024-12-16 13:22:40.063236] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:25.606 [2024-12-16 13:22:40.063244] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:25.606 [2024-12-16 13:22:40.063251] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:25.606 [2024-12-16 13:22:40.063259] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:25.606 [2024-12-16 13:22:40.063270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.606 [2024-12-16 13:22:40.063278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:25.606 [2024-12-16 13:22:40.063287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:18:25.606 [2024-12-16 13:22:40.063295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.606 [2024-12-16 13:22:40.085437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.606 [2024-12-16 13:22:40.085494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:25.606 [2024-12-16 13:22:40.085508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.096 ms 00:18:25.606 [2024-12-16 13:22:40.085524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.606 [2024-12-16 13:22:40.085649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.606 [2024-12-16 13:22:40.085661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:25.606 [2024-12-16 13:22:40.085670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:18:25.606 [2024-12-16 13:22:40.085679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.606 [2024-12-16 13:22:40.133480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.606 [2024-12-16 13:22:40.133874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:25.606 [2024-12-16 13:22:40.133901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.741 ms 00:18:25.606 [2024-12-16 13:22:40.133911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.606 [2024-12-16 13:22:40.133971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.606 [2024-12-16 13:22:40.133982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:25.606 [2024-12-16 13:22:40.133992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:25.606 [2024-12-16 13:22:40.134001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.606 [2024-12-16 13:22:40.134784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.606 [2024-12-16 13:22:40.134818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:25.606 [2024-12-16 13:22:40.134831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:18:25.606 [2024-12-16 13:22:40.134848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.606 [2024-12-16 13:22:40.134990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.606 [2024-12-16 13:22:40.135001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:25.606 [2024-12-16 13:22:40.135010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:18:25.606 [2024-12-16 13:22:40.135018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:25.606 [2024-12-16 13:22:40.154492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.606 [2024-12-16 13:22:40.154543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:25.606 [2024-12-16 13:22:40.154555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.447 ms 00:18:25.606 [2024-12-16 13:22:40.154564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.606 [2024-12-16 13:22:40.169978] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:25.606 [2024-12-16 13:22:40.170050] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:25.606 [2024-12-16 13:22:40.170065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.606 [2024-12-16 13:22:40.170075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:25.606 [2024-12-16 13:22:40.170085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.348 ms 00:18:25.606 [2024-12-16 13:22:40.170093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.866 [2024-12-16 13:22:40.196784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.866 [2024-12-16 13:22:40.196839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:25.866 [2024-12-16 13:22:40.196853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.627 ms 00:18:25.866 [2024-12-16 13:22:40.196861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.866 [2024-12-16 13:22:40.210560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.866 [2024-12-16 13:22:40.210611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:25.866 [2024-12-16 13:22:40.210624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.640 ms 00:18:25.866 [2024-12-16 13:22:40.210649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.866 [2024-12-16 13:22:40.224016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.866 [2024-12-16 13:22:40.224242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:25.866 [2024-12-16 13:22:40.224266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.315 ms 00:18:25.866 [2024-12-16 13:22:40.224274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.866 [2024-12-16 13:22:40.224709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.866 [2024-12-16 13:22:40.224726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:25.866 [2024-12-16 13:22:40.224739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:18:25.866 [2024-12-16 13:22:40.224748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.866 [2024-12-16 13:22:40.297371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.866 [2024-12-16 13:22:40.297436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:25.866 [2024-12-16 13:22:40.297453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.600 ms 00:18:25.866 [2024-12-16 13:22:40.297463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.866 [2024-12-16 
13:22:40.309682] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:25.866 [2024-12-16 13:22:40.313761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.866 [2024-12-16 13:22:40.313809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:25.866 [2024-12-16 13:22:40.313823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.230 ms 00:18:25.866 [2024-12-16 13:22:40.313839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.866 [2024-12-16 13:22:40.313928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.866 [2024-12-16 13:22:40.313940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:25.866 [2024-12-16 13:22:40.313951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:25.866 [2024-12-16 13:22:40.313959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.866 [2024-12-16 13:22:40.314035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.866 [2024-12-16 13:22:40.314046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:25.866 [2024-12-16 13:22:40.314056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:25.866 [2024-12-16 13:22:40.314065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.866 [2024-12-16 13:22:40.315650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.866 [2024-12-16 13:22:40.315696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:25.866 [2024-12-16 13:22:40.315709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.534 ms 00:18:25.866 [2024-12-16 13:22:40.315717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.866 [2024-12-16 13:22:40.315760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.866 [2024-12-16 13:22:40.315769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:25.866 [2024-12-16 13:22:40.315785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:25.866 [2024-12-16 13:22:40.315794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.866 [2024-12-16 13:22:40.315840] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:25.866 [2024-12-16 13:22:40.315852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.866 [2024-12-16 13:22:40.315863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:25.866 [2024-12-16 13:22:40.315872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:25.866 [2024-12-16 13:22:40.315881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.866 [2024-12-16 13:22:40.343721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.866 [2024-12-16 13:22:40.343776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:25.866 [2024-12-16 13:22:40.343790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.818 ms 00:18:25.866 [2024-12-16 13:22:40.343799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.866 [2024-12-16 13:22:40.343903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.866 [2024-12-16 13:22:40.343914] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:25.866 [2024-12-16 13:22:40.343925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:18:25.866 [2024-12-16 13:22:40.343934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.866 [2024-12-16 13:22:40.345458] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 320.582 ms, result 0 00:18:27.311  [2024-12-16T13:22:42.827Z] Copying: 14/1024 [MB] (14 MBps) [2024-12-16T13:22:43.771Z] Copying: 32/1024 [MB] (17 MBps) [2024-12-16T13:22:44.716Z] Copying: 49/1024 [MB] (17 MBps) [2024-12-16T13:22:45.662Z] Copying: 62/1024 [MB] (12 MBps) [2024-12-16T13:22:46.607Z] Copying: 79/1024 [MB] (17 MBps) [2024-12-16T13:22:47.551Z] Copying: 94/1024 [MB] (15 MBps) [2024-12-16T13:22:48.940Z] Copying: 107/1024 [MB] (12 MBps) [2024-12-16T13:22:49.884Z] Copying: 123/1024 [MB] (16 MBps) [2024-12-16T13:22:50.828Z] Copying: 134/1024 [MB] (10 MBps) [2024-12-16T13:22:51.773Z] Copying: 154/1024 [MB] (19 MBps) [2024-12-16T13:22:52.716Z] Copying: 173/1024 [MB] (19 MBps) [2024-12-16T13:22:53.658Z] Copying: 184/1024 [MB] (10 MBps) [2024-12-16T13:22:54.603Z] Copying: 199/1024 [MB] (15 MBps) [2024-12-16T13:22:55.547Z] Copying: 216/1024 [MB] (17 MBps) [2024-12-16T13:22:56.933Z] Copying: 233/1024 [MB] (17 MBps) [2024-12-16T13:22:57.877Z] Copying: 248/1024 [MB] (14 MBps) [2024-12-16T13:22:58.823Z] Copying: 263/1024 [MB] (14 MBps) [2024-12-16T13:22:59.766Z] Copying: 278/1024 [MB] (15 MBps) [2024-12-16T13:23:00.708Z] Copying: 294/1024 [MB] (15 MBps) [2024-12-16T13:23:01.650Z] Copying: 305/1024 [MB] (11 MBps) [2024-12-16T13:23:02.592Z] Copying: 318/1024 [MB] (13 MBps) [2024-12-16T13:23:03.536Z] Copying: 333/1024 [MB] (14 MBps) [2024-12-16T13:23:04.988Z] Copying: 351/1024 [MB] (18 MBps) [2024-12-16T13:23:05.561Z] Copying: 367/1024 [MB] (15 MBps) [2024-12-16T13:23:06.949Z] Copying: 378/1024 [MB] (10 MBps) [2024-12-16T13:23:07.893Z] Copying: 391/1024 [MB] (13 MBps) [2024-12-16T13:23:08.836Z] Copying: 403/1024 [MB] (11 MBps) [2024-12-16T13:23:09.781Z] Copying: 420/1024 [MB] (16 MBps) [2024-12-16T13:23:10.726Z] Copying: 434/1024 [MB] (14 MBps) [2024-12-16T13:23:11.670Z] Copying: 445/1024 [MB] (10 MBps) [2024-12-16T13:23:12.613Z] Copying: 462/1024 [MB] (17 MBps) [2024-12-16T13:23:13.557Z] Copying: 484/1024 [MB] (21 MBps) [2024-12-16T13:23:14.946Z] Copying: 496/1024 [MB] (12 MBps) [2024-12-16T13:23:15.890Z] Copying: 511/1024 [MB] (14 MBps) [2024-12-16T13:23:16.834Z] Copying: 524/1024 [MB] (13 MBps) [2024-12-16T13:23:17.774Z] Copying: 552/1024 [MB] (27 MBps) [2024-12-16T13:23:18.723Z] Copying: 577/1024 [MB] (25 MBps) [2024-12-16T13:23:19.667Z] Copying: 600/1024 [MB] (22 MBps) [2024-12-16T13:23:20.610Z] Copying: 624/1024 [MB] (23 MBps) [2024-12-16T13:23:21.554Z] Copying: 646/1024 [MB] (22 MBps) [2024-12-16T13:23:22.941Z] Copying: 659/1024 [MB] (12 MBps) [2024-12-16T13:23:23.886Z] Copying: 681/1024 [MB] (22 MBps) [2024-12-16T13:23:24.831Z] Copying: 692/1024 [MB] (10 MBps) [2024-12-16T13:23:25.773Z] Copying: 703/1024 [MB] (11 MBps) [2024-12-16T13:23:26.716Z] Copying: 716/1024 [MB] (12 MBps) [2024-12-16T13:23:27.711Z] Copying: 728/1024 [MB] (12 MBps) [2024-12-16T13:23:28.676Z] Copying: 746/1024 [MB] (17 MBps) [2024-12-16T13:23:29.620Z] Copying: 767/1024 [MB] (21 MBps) [2024-12-16T13:23:30.565Z] Copying: 789/1024 [MB] (22 MBps) [2024-12-16T13:23:31.953Z] Copying: 805/1024 [MB] (16 MBps) [2024-12-16T13:23:32.897Z] Copying: 825/1024 [MB] (20 MBps) 
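Each Copying entry above gives the rate over the interval since the previous progress tick, and the stream closes just below with spdk_dd's own summary ("average 16 MBps" for this pass). A quick cross-check under the same saved-log assumption; note this is an unweighted mean of the per-interval rates, so it only approximates the tool's own average:

  #!/usr/bin/env bash
  # Average the interval rates from "Copying: N/1024 [MB] (R MBps)" entries
  # in a saved copy of this log (build.log is a placeholder). The pattern
  # deliberately skips the "(average NN MBps)" summary entries.
  grep -oE '\([0-9]+ MBps\)' build.log |
  tr -d '()' |
  awk '{ sum += $1; n++ } END { if (n) printf "mean of %d intervals: %.1f MBps\n", n, sum / n }'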
[2024-12-16T13:23:33.840Z] Copying: 842/1024 [MB] (16 MBps) [2024-12-16T13:23:34.779Z] Copying: 861/1024 [MB] (18 MBps) [2024-12-16T13:23:35.720Z] Copying: 880/1024 [MB] (19 MBps) [2024-12-16T13:23:36.664Z] Copying: 898/1024 [MB] (18 MBps) [2024-12-16T13:23:37.605Z] Copying: 919/1024 [MB] (20 MBps) [2024-12-16T13:23:38.547Z] Copying: 937/1024 [MB] (18 MBps) [2024-12-16T13:23:39.931Z] Copying: 954/1024 [MB] (16 MBps) [2024-12-16T13:23:40.871Z] Copying: 977/1024 [MB] (23 MBps) [2024-12-16T13:23:41.815Z] Copying: 997/1024 [MB] (19 MBps) [2024-12-16T13:23:42.076Z] Copying: 1017/1024 [MB] (20 MBps) [2024-12-16T13:23:42.338Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-16 13:23:42.078530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.764 [2024-12-16 13:23:42.078653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:27.764 [2024-12-16 13:23:42.078674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:27.764 [2024-12-16 13:23:42.078684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.764 [2024-12-16 13:23:42.078714] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:27.764 [2024-12-16 13:23:42.082847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.764 [2024-12-16 13:23:42.082909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:27.764 [2024-12-16 13:23:42.082924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.113 ms 00:19:27.764 [2024-12-16 13:23:42.082933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.764 [2024-12-16 13:23:42.083241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.764 [2024-12-16 13:23:42.083260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:27.764 [2024-12-16 13:23:42.083270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:19:27.764 [2024-12-16 13:23:42.083278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.764 [2024-12-16 13:23:42.086769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.764 [2024-12-16 13:23:42.086787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:27.764 [2024-12-16 13:23:42.086801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.476 ms 00:19:27.764 [2024-12-16 13:23:42.086810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.764 [2024-12-16 13:23:42.093752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.764 [2024-12-16 13:23:42.093781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:27.764 [2024-12-16 13:23:42.093790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.925 ms 00:19:27.764 [2024-12-16 13:23:42.093798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.764 [2024-12-16 13:23:42.119550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.764 [2024-12-16 13:23:42.119586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:27.765 [2024-12-16 13:23:42.119597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.691 ms 00:19:27.765 [2024-12-16 13:23:42.119605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.765 [2024-12-16 13:23:42.134458] 
[2024-12-16 13:23:42.134458] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata, duration: 14.804 ms, status: 0
[2024-12-16 13:23:42.134671] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata, duration: 0.121 ms, status: 0
[2024-12-16 13:23:42.159112] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: persist band info metadata, duration: 24.399 ms, status: 0
[2024-12-16 13:23:42.183396] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: persist trim metadata, duration: 24.198 ms, status: 0
[2024-12-16 13:23:42.207241] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock, duration: 23.747 ms, status: 0
[2024-12-16 13:23:42.231520] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state, duration: 24.148 ms, status: 0
[2024-12-16 13:23:42.231618] ftl_debug.c:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[FTL][ftl0]   Band 1-100:  0 / 261120  wr_cnt: 0  state: free
[2024-12-16 13:23:42.232470] ftl_debug.c:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[FTL][ftl0]   device UUID:      3ed509dd-a3fe-4b22-bf76-2ded1e0e3da3
[FTL][ftl0]   total valid LBAs: 0
[FTL][ftl0]   total writes:     960
[FTL][ftl0]   user writes:      0
[FTL][ftl0]   WAF:              inf
[FTL][ftl0]   limits:           crit: 0, high: 0, low: 0, start: 0
[2024-12-16 13:23:42.232561] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 0.945 ms, status: 0
[2024-12-16 13:23:42.246748] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 14.126 ms, status: 0
[2024-12-16 13:23:42.247035] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing, duration: 0.195 ms, status: 0
[2024-12-16 13:23:42.288554] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, duration: 0.000 ms, status: 0
[2024-12-16 13:23:42.288723] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata, duration: 0.000 ms, status: 0
[2024-12-16 13:23:42.288858] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map, duration: 0.000 ms, status: 0
[2024-12-16 13:23:42.288904] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map, duration: 0.000 ms, status: 0
[2024-12-16 13:23:42.374552] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache, duration: 0.000 ms, status: 0
[2024-12-16 13:23:42.404882] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata, duration: 0.000 ms, status: 0
[2024-12-16 13:23:42.404998] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel, duration: 0.000 ms, status: 0
[2024-12-16 13:23:42.405065] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands, duration: 0.000 ms, status: 0
[2024-12-16 13:23:42.405184] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools, duration: 0.000 ms, status: 0
[2024-12-16 13:23:42.405239] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock, duration: 0.000 ms, status: 0
[2024-12-16 13:23:42.405306] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev, duration: 0.000 ms, status: 0
[2024-12-16 13:23:42.405375] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev, duration: 0.000 ms, status: 0
[2024-12-16 13:23:42.405537] mngt/ftl_mngt.c:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 326.989 ms, result 0
13:23:43 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
/home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
13:23:45 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
[2024-12-16 13:23:45.731966] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
[2024-12-16 13:23:45.732345] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74191 ]
[2024-12-16 13:23:45.886042] app.c:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-16 13:23:46.161469] reactor.c:reactor_run: *NOTICE*: Reactor started on core 0
[2024-12-16 13:23:46.490732] bdev.c:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-12-16 13:23:46.490826] bdev.c:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-12-16 13:23:46.652251] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration, duration: 0.008 ms, status: 0
[2024-12-16 13:23:46.652656] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev, duration: 0.076 ms, status: 0
[2024-12-16 13:23:46.652715] mngt/ftl_mngt_bdev.c:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-12-16 13:23:46.653476] mngt/ftl_mngt_bdev.c:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-12-16 13:23:46.653509] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev, duration: 0.801 ms, status: 0
[2024-12-16 13:23:46.655836] mngt/ftl_mngt_md.c:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
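The two restore.sh lines above are the heart of the test: write a known payload through the ftl0 bdev, then verify its md5 after the device has been shut down and brought back. A minimal sketch of that round-trip in plain shell, assuming only coreutils (the file names below are hypothetical, not the test's actual fixtures):

    # create a payload and record its checksum before writing it through FTL
    dd if=/dev/urandom of=testfile bs=1M count=64
    md5sum testfile > testfile.md5
    # ... write testfile through the FTL bdev, restart the device, read it back ...
    md5sum -c testfile.md5    # exits 0 only if the restored data is bit-identical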
[2024-12-16 13:23:46.671186] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block, duration: 15.354 ms, status: 0
[2024-12-16 13:23:46.671583] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block, duration: 0.041 ms, status: 0
[2024-12-16 13:23:46.683585] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools, duration: 11.834 ms, status: 0
[2024-12-16 13:23:46.683785] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands, duration: 0.083 ms, status: 0
[2024-12-16 13:23:46.683879] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device, duration: 0.010 ms, status: 0
[2024-12-16 13:23:46.683940] mngt/ftl_mngt_ioch.c:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-12-16 13:23:46.688772] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel, duration: 4.848 ms, status: 0
[2024-12-16 13:23:46.688877] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.017 ms, status: 0
[2024-12-16 13:23:46.688946] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-12-16 13:23:46.688974] upgrade/ftl_sb_v5.c:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes
[2024-12-16 13:23:46.689013] upgrade/ftl_sb_v5.c:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
[2024-12-16 13:23:46.689029] upgrade/ftl_sb_v5.c:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes
[2024-12-16 13:23:46.689109] upgrade/ftl_sb_v5.c:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes
[2024-12-16 13:23:46.689120] upgrade/ftl_sb_v5.c:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
[2024-12-16 13:23:46.689135] upgrade/ftl_sb_v5.c:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes
[2024-12-16 13:23:46.689147] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-12-16 13:23:46.689157] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-12-16 13:23:46.689165] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
[2024-12-16 13:23:46.689173] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-12-16 13:23:46.689181] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024
[2024-12-16 13:23:46.689189] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4
[2024-12-16 13:23:46.689197] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.254 ms, status: 0
[2024-12-16 13:23:46.689286] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.048 ms, status: 0
[2024-12-16 13:23:46.689385] ftl_layout.c:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
[FTL][ftl0]   Region sb:              offset 0.00 MiB,   blocks 0.12 MiB
[FTL][ftl0]   Region l2p:             offset 0.12 MiB,   blocks 80.00 MiB
[FTL][ftl0]   Region band_md:         offset 80.12 MiB,  blocks 0.50 MiB
[FTL][ftl0]   Region band_md_mirror:  offset 80.62 MiB,  blocks 0.50 MiB
[FTL][ftl0]   Region nvc_md:          offset 97.62 MiB,  blocks 0.12 MiB
[FTL][ftl0]   Region nvc_md_mirror:   offset 97.75 MiB,  blocks 0.12 MiB
[FTL][ftl0]   Region data_nvc:        offset 97.88 MiB,  blocks 4096.00 MiB
[FTL][ftl0]   Region p2l0:            offset 81.12 MiB,  blocks 4.00 MiB
[FTL][ftl0]   Region p2l1:            offset 85.12 MiB,  blocks 4.00 MiB
[FTL][ftl0]   Region p2l2:            offset 89.12 MiB,  blocks 4.00 MiB
[FTL][ftl0]   Region p2l3:            offset 93.12 MiB,  blocks 4.00 MiB
[FTL][ftl0]   Region trim_md:         offset 97.12 MiB,  blocks 0.25 MiB
[FTL][ftl0]   Region trim_md_mirror:  offset 97.38 MiB,  blocks 0.25 MiB
[2024-12-16 13:23:46.689749] ftl_layout.c:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
[FTL][ftl0]   Region sb_mirror:       offset 0.00 MiB,      blocks 0.12 MiB
[FTL][ftl0]   Region vmap:            offset 102400.25 MiB, blocks 3.38 MiB
[FTL][ftl0]   Region data_btm:        offset 0.25 MiB,      blocks 102400.00 MiB
[2024-12-16 13:23:46.689834] upgrade/ftl_sb_v5.c:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
[FTL][ftl0]   Region type:0x0        ver:5 blk_offs:0x0      blk_sz:0x20
[FTL][ftl0]   Region type:0x2        ver:0 blk_offs:0x20     blk_sz:0x5000
[FTL][ftl0]   Region type:0x3        ver:1 blk_offs:0x5020   blk_sz:0x80
[FTL][ftl0]   Region type:0x4        ver:1 blk_offs:0x50a0   blk_sz:0x80
[FTL][ftl0]   Region type:0xa        ver:1 blk_offs:0x5120   blk_sz:0x400
[FTL][ftl0]   Region type:0xb        ver:1 blk_offs:0x5520   blk_sz:0x400
[FTL][ftl0]   Region type:0xc        ver:1 blk_offs:0x5920   blk_sz:0x400
[FTL][ftl0]   Region type:0xd        ver:1 blk_offs:0x5d20   blk_sz:0x400
[FTL][ftl0]   Region type:0xe        ver:0 blk_offs:0x6120   blk_sz:0x40
[FTL][ftl0]   Region type:0xf        ver:0 blk_offs:0x6160   blk_sz:0x40
[FTL][ftl0]   Region type:0x6        ver:1 blk_offs:0x61a0   blk_sz:0x20
[FTL][ftl0]   Region type:0x7        ver:1 blk_offs:0x61c0   blk_sz:0x20
[FTL][ftl0]   Region type:0x8        ver:0 blk_offs:0x61e0   blk_sz:0x100000
[FTL][ftl0]   Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120
[2024-12-16 13:23:46.689951] upgrade/ftl_sb_v5.c:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
[FTL][ftl0]   Region type:0x1        ver:5 blk_offs:0x0       blk_sz:0x20
[FTL][ftl0]   Region type:0xfffffffe ver:0 blk_offs:0x20      blk_sz:0x20
[FTL][ftl0]   Region type:0x9        ver:0 blk_offs:0x40      blk_sz:0x1900000
[FTL][ftl0]   Region type:0x5        ver:0 blk_offs:0x1900040 blk_sz:0x360
[FTL][ftl0]   Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-12-16 13:23:46.690008] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade, duration: 0.669 ms, status: 0
[2024-12-16 13:23:46.712457] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata, duration: 22.375 ms, status: 0
[2024-12-16 13:23:46.712868] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses, duration: 0.069 ms, status: 0
[2024-12-16 13:23:46.761125] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache, duration: 48.071 ms, status: 0
[2024-12-16 13:23:46.761512] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map, duration: 0.005 ms, status: 0
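If the layout dump above needs to be compared across runs, the region rows can be pulled back out of a saved console capture with standard tools (build.log is a hypothetical file name for such a capture):

    # extract the unique superblock metadata regions from a saved log
    grep -o 'type:0x[0-9a-f]*.*blk_sz:0x[0-9a-f]*' build.log | sort -u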
[2024-12-16 13:23:46.762347] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map, duration: 0.674 ms, status: 0
[2024-12-16 13:23:46.762733] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata, duration: 0.136 ms, status: 0
[2024-12-16 13:23:46.782441] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc, duration: 19.552 ms, status: 0
[2024-12-16 13:23:46.798243] ftl_nv_cache.c:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
[2024-12-16 13:23:46.798442] ftl_nv_cache.c:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-12-16 13:23:46.798508] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata, duration: 15.648 ms, status: 0
[2024-12-16 13:23:46.825577] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata, duration: 26.951 ms, status: 0
[2024-12-16 13:23:46.839526] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata, duration: 13.531 ms, status: 0
[2024-12-16 13:23:46.853731] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata, duration: 13.414 ms, status: 0
[2024-12-16 13:23:46.854231] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing, duration: 0.291 ms, status: 0
[2024-12-16 13:23:46.927731] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints, duration: 73.448 ms, status: 0
[2024-12-16 13:23:46.940181] ftl_l2p_cache.c:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
[2024-12-16 13:23:46.944235] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P, duration: 16.353 ms, status: 0
[2024-12-16 13:23:46.944395] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P, duration: 0.007 ms, status: 0
[2024-12-16 13:23:46.944505] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization, duration: 0.044 ms, status: 0
[2024-12-16 13:23:46.946135] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Free P2L region bufs, duration: 1.577 ms, status: 0
[2024-12-16 13:23:46.946244] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller, duration: 0.008 ms, status: 0
[2024-12-16 13:23:46.946321] mngt/ftl_mngt_self_test.c:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-12-16 13:23:46.946332] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup, duration: 0.012 ms, status: 0
[2024-12-16 13:23:46.974133] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state, duration: 27.751 ms, status: 0
[2024-12-16 13:23:46.974460] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization, duration: 0.045 ms, status: 0
[2024-12-16 13:23:46.975976] mngt/ftl_mngt.c:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 323.140 ms, result 0
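The startup pass above reports duration = 323.140 ms overall, and every trace_step line carries its own per-step duration, so the total can be cross-checked from a saved console capture (build.log is again a hypothetical file name; the grep matches every trace_step line it finds, so restrict the input to a single management pass first):

    # sum the per-step durations reported by trace_step lines
    grep -o 'duration: [0-9.]* ms' build.log | awk '{s += $2} END {printf "steps sum to %.3f ms\n", s}'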
[2024-12-16T13:23:49.013Z] Copying: 14/1024 [MB] (14 MBps)
[2024-12-16T13:24:40.084Z] Copying: 1013/1024 [MB] (22 MBps)
[2024-12-16T13:24:40.656Z] Copying: 1048052/1048576 [kB] (9852 kBps)
[2024-12-16T13:24:40.656Z] Copying: 1024/1024 [MB] (average 19 MBps)
[2024-12-16 13:24:40.434823] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel, duration: 0.005 ms, status: 0
[2024-12-16 13:24:40.438471] mngt/ftl_mngt_ioch.c:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-12-16 13:24:40.445244] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device, duration: 6.728 ms, status: 0
[2024-12-16 13:24:40.456249] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller, duration: 9.996 ms, status: 0
[2024-12-16 13:24:40.477137] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P, duration: 20.796 ms, status: 0
[2024-12-16 13:24:40.483301] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P unmaps, duration: 6.085 ms, status: 0
[2024-12-16 13:24:40.509068] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata, duration: 25.549 ms, status: 0
[2024-12-16 13:24:40.523559] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata, duration: 14.408 ms, status: 0
[2024-12-16 13:24:40.787836] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata, duration: 264.176 ms, status: 0
[2024-12-16 13:24:40.815136] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: persist band info metadata, duration: 27.197 ms, status: 0
[2024-12-16 13:24:40.842155] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: persist trim metadata, duration: 26.896 ms, status: 0
[2024-12-16 13:24:40.868324] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock, duration: 26.029 ms, status: 0
[2024-12-16 13:24:40.894417] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state, duration: 25.906 ms, status: 0
[2024-12-16 13:24:40.894578] ftl_debug.c:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[FTL][ftl0]   Band 1:     99840 / 261120  wr_cnt: 1  state: open
[FTL][ftl0]   Band 2-53:  0 / 261120      wr_cnt: 0  state: free
[FTL][ftl0]   Band 54:    0 / 261120
wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:26.345 [2024-12-16 13:24:40.895478] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:26.345 [2024-12-16 13:24:40.895488] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3ed509dd-a3fe-4b22-bf76-2ded1e0e3da3 00:20:26.345 [2024-12-16 13:24:40.895497] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 99840 00:20:26.345 [2024-12-16 13:24:40.895506] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 100800 
00:20:26.345 [2024-12-16 13:24:40.895514] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 99840 00:20:26.345 [2024-12-16 13:24:40.895528] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0096 00:20:26.345 [2024-12-16 13:24:40.895537] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:26.345 [2024-12-16 13:24:40.895545] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:26.345 [2024-12-16 13:24:40.895554] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:26.345 [2024-12-16 13:24:40.895569] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:26.345 [2024-12-16 13:24:40.895577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:26.345 [2024-12-16 13:24:40.895585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.345 [2024-12-16 13:24:40.895594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:26.345 [2024-12-16 13:24:40.895605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:20:26.345 [2024-12-16 13:24:40.895614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.345 [2024-12-16 13:24:40.910521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.345 [2024-12-16 13:24:40.910578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:26.345 [2024-12-16 13:24:40.910589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.836 ms 00:20:26.345 [2024-12-16 13:24:40.910597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.345 [2024-12-16 13:24:40.910853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.345 [2024-12-16 13:24:40.910863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:26.345 [2024-12-16 13:24:40.910873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:20:26.345 [2024-12-16 13:24:40.910881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.606 [2024-12-16 13:24:40.952812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.606 [2024-12-16 13:24:40.952864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:26.606 [2024-12-16 13:24:40.952878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.606 [2024-12-16 13:24:40.952887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.606 [2024-12-16 13:24:40.952966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.606 [2024-12-16 13:24:40.952975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:26.606 [2024-12-16 13:24:40.952985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.606 [2024-12-16 13:24:40.952994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.606 [2024-12-16 13:24:40.953087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.606 [2024-12-16 13:24:40.953105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:26.606 [2024-12-16 13:24:40.953115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.606 [2024-12-16 13:24:40.953124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.606 [2024-12-16 13:24:40.953142] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.606 [2024-12-16 13:24:40.953150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:26.606 [2024-12-16 13:24:40.953158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.606 [2024-12-16 13:24:40.953166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.606 [2024-12-16 13:24:41.042670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.606 [2024-12-16 13:24:41.042735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:26.606 [2024-12-16 13:24:41.042749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.606 [2024-12-16 13:24:41.042759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.606 [2024-12-16 13:24:41.078472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.606 [2024-12-16 13:24:41.078525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:26.606 [2024-12-16 13:24:41.078539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.606 [2024-12-16 13:24:41.078548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.606 [2024-12-16 13:24:41.078664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.606 [2024-12-16 13:24:41.078677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:26.606 [2024-12-16 13:24:41.078696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.606 [2024-12-16 13:24:41.078705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.606 [2024-12-16 13:24:41.078756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.606 [2024-12-16 13:24:41.078766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:26.606 [2024-12-16 13:24:41.078777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.606 [2024-12-16 13:24:41.078786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.606 [2024-12-16 13:24:41.078905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.606 [2024-12-16 13:24:41.078917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:26.606 [2024-12-16 13:24:41.078926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.607 [2024-12-16 13:24:41.078938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.607 [2024-12-16 13:24:41.078974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.607 [2024-12-16 13:24:41.078984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:26.607 [2024-12-16 13:24:41.078993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.607 [2024-12-16 13:24:41.079001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.607 [2024-12-16 13:24:41.079052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.607 [2024-12-16 13:24:41.079064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:26.607 [2024-12-16 13:24:41.079072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.607 [2024-12-16 13:24:41.079084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:26.607 [2024-12-16 13:24:41.079148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.607 [2024-12-16 13:24:41.079159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:26.607 [2024-12-16 13:24:41.079168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.607 [2024-12-16 13:24:41.079177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.607 [2024-12-16 13:24:41.079345] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 646.652 ms, result 0 00:20:28.521 00:20:28.521 00:20:28.521 13:24:42 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:20:28.521 [2024-12-16 13:24:42.796161] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:28.521 [2024-12-16 13:24:42.796528] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74784 ] 00:20:28.521 [2024-12-16 13:24:42.951087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.782 [2024-12-16 13:24:43.229075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.044 [2024-12-16 13:24:43.558116] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:29.044 [2024-12-16 13:24:43.558215] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:29.305 [2024-12-16 13:24:43.715797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.305 [2024-12-16 13:24:43.715865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:29.305 [2024-12-16 13:24:43.715882] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:29.305 [2024-12-16 13:24:43.715894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.305 [2024-12-16 13:24:43.715957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.305 [2024-12-16 13:24:43.715967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:29.305 [2024-12-16 13:24:43.715977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:29.305 [2024-12-16 13:24:43.715984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.305 [2024-12-16 13:24:43.716006] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:29.305 [2024-12-16 13:24:43.716855] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:29.305 [2024-12-16 13:24:43.716876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.305 [2024-12-16 13:24:43.716885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:29.305 [2024-12-16 13:24:43.716895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.875 ms 00:20:29.305 [2024-12-16 13:24:43.716904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.305 [2024-12-16 13:24:43.719481] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 
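A note for reading the FTL trace output above and below: each management step is reported as a four-line NOTICE group from mngt/ftl_mngt.c:trace_step() -- the step kind ("Action" on the forward path, "Rollback" during teardown), the step name, its duration in milliseconds, and its status (0 on success) -- and a closing finish_msg line sums up the whole process (e.g. 'FTL startup' / 'FTL shutdown') with a total duration and result. A minimal C sketch of that logging shape, assuming a simplified step descriptor in place of SPDK's real management-process state (the struct and field names here are illustrative only, not SPDK's API):

    #include <stdio.h>

    /* Hypothetical, simplified stand-in for the per-step state that
     * mngt/ftl_mngt.c:trace_step() reports; not SPDK's actual struct. */
    struct step_trace {
            const char *dev;        /* FTL device name, e.g. "ftl0" */
            const char *kind;       /* "Action" or "Rollback" */
            const char *name;       /* step name, e.g. "Load super block" */
            double duration_ms;     /* wall-clock time spent in the step */
            int status;             /* 0 on success */
    };

    /* Emit the same four-line group seen throughout this log. */
    static void trace_step(const struct step_trace *t)
    {
            printf("[FTL][%s] %s\n", t->dev, t->kind);
            printf("[FTL][%s]       name:     %s\n", t->dev, t->name);
            printf("[FTL][%s]       duration: %.3f ms\n", t->dev, t->duration_ms);
            printf("[FTL][%s]       status:   %d\n", t->dev, t->status);
    }

    int main(void)
    {
            /* Values taken from the "Load super block" step logged below. */
            struct step_trace t = { "ftl0", "Action", "Load super block", 15.661, 0 };
            trace_step(&t);
            return 0;
    }

The statistics dump from the shutdown above also makes the reported write amplification easy to check by hand: WAF = total writes / user writes = 100800 / 99840 ≈ 1.0096, i.e. just under 1% of the blocks written went to FTL metadata and padding rather than user data.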
00:20:29.305 [2024-12-16 13:24:43.735138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.305 [2024-12-16 13:24:43.735192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:29.305 [2024-12-16 13:24:43.735207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.661 ms 00:20:29.305 [2024-12-16 13:24:43.735216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.305 [2024-12-16 13:24:43.735305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.305 [2024-12-16 13:24:43.735315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:29.305 [2024-12-16 13:24:43.735324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:29.305 [2024-12-16 13:24:43.735331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.305 [2024-12-16 13:24:43.747203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.305 [2024-12-16 13:24:43.747253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:29.305 [2024-12-16 13:24:43.747264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.785 ms 00:20:29.305 [2024-12-16 13:24:43.747273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.305 [2024-12-16 13:24:43.747380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.305 [2024-12-16 13:24:43.747391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:29.305 [2024-12-16 13:24:43.747399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:20:29.305 [2024-12-16 13:24:43.747408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.305 [2024-12-16 13:24:43.747473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.305 [2024-12-16 13:24:43.747483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:29.305 [2024-12-16 13:24:43.747492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:29.305 [2024-12-16 13:24:43.747500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.305 [2024-12-16 13:24:43.747535] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:29.305 [2024-12-16 13:24:43.752343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.305 [2024-12-16 13:24:43.752387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:29.305 [2024-12-16 13:24:43.752399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.825 ms 00:20:29.306 [2024-12-16 13:24:43.752407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.306 [2024-12-16 13:24:43.752450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.306 [2024-12-16 13:24:43.752459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:29.306 [2024-12-16 13:24:43.752468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:29.306 [2024-12-16 13:24:43.752479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.306 [2024-12-16 13:24:43.752520] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:29.306 [2024-12-16 13:24:43.752546] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc 
layout blob load 0x138 bytes 00:20:29.306 [2024-12-16 13:24:43.752587] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:29.306 [2024-12-16 13:24:43.752605] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:29.306 [2024-12-16 13:24:43.752710] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:29.306 [2024-12-16 13:24:43.752723] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:29.306 [2024-12-16 13:24:43.752738] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:29.306 [2024-12-16 13:24:43.752749] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:29.306 [2024-12-16 13:24:43.752760] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:29.306 [2024-12-16 13:24:43.752769] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:29.306 [2024-12-16 13:24:43.752777] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:29.306 [2024-12-16 13:24:43.752784] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:29.306 [2024-12-16 13:24:43.752792] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:29.306 [2024-12-16 13:24:43.752800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.306 [2024-12-16 13:24:43.752810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:29.306 [2024-12-16 13:24:43.752819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:20:29.306 [2024-12-16 13:24:43.752827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.306 [2024-12-16 13:24:43.752894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.306 [2024-12-16 13:24:43.752902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:29.306 [2024-12-16 13:24:43.752911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:29.306 [2024-12-16 13:24:43.752919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.306 [2024-12-16 13:24:43.752994] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:29.306 [2024-12-16 13:24:43.753005] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:29.306 [2024-12-16 13:24:43.753015] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:29.306 [2024-12-16 13:24:43.753023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.306 [2024-12-16 13:24:43.753030] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:29.306 [2024-12-16 13:24:43.753037] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:29.306 [2024-12-16 13:24:43.753043] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:29.306 [2024-12-16 13:24:43.753050] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:29.306 [2024-12-16 13:24:43.753057] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:29.306 [2024-12-16 13:24:43.753064] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.50 MiB 00:20:29.306 [2024-12-16 13:24:43.753070] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:29.306 [2024-12-16 13:24:43.753077] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:29.306 [2024-12-16 13:24:43.753083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:29.306 [2024-12-16 13:24:43.753093] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:29.306 [2024-12-16 13:24:43.753101] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:29.306 [2024-12-16 13:24:43.753109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.306 [2024-12-16 13:24:43.753125] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:29.306 [2024-12-16 13:24:43.753132] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:29.306 [2024-12-16 13:24:43.753139] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.306 [2024-12-16 13:24:43.753146] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:29.306 [2024-12-16 13:24:43.753154] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:29.306 [2024-12-16 13:24:43.753163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:29.306 [2024-12-16 13:24:43.753169] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:29.306 [2024-12-16 13:24:43.753176] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:29.306 [2024-12-16 13:24:43.753182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:29.306 [2024-12-16 13:24:43.753189] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:29.306 [2024-12-16 13:24:43.753196] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:29.306 [2024-12-16 13:24:43.753202] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:29.306 [2024-12-16 13:24:43.753210] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:29.306 [2024-12-16 13:24:43.753216] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:29.306 [2024-12-16 13:24:43.753223] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:29.306 [2024-12-16 13:24:43.753231] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:29.306 [2024-12-16 13:24:43.753237] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:29.306 [2024-12-16 13:24:43.753244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:29.306 [2024-12-16 13:24:43.753252] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:29.306 [2024-12-16 13:24:43.753259] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:29.306 [2024-12-16 13:24:43.753265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:29.306 [2024-12-16 13:24:43.753271] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:29.306 [2024-12-16 13:24:43.753278] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:29.306 [2024-12-16 13:24:43.753284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:29.306 [2024-12-16 13:24:43.753289] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:29.306 [2024-12-16 13:24:43.753300] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:29.306 [2024-12-16 13:24:43.753308] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:29.306 [2024-12-16 13:24:43.753316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.306 [2024-12-16 13:24:43.753324] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:29.306 [2024-12-16 13:24:43.753336] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:29.306 [2024-12-16 13:24:43.753343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:29.306 [2024-12-16 13:24:43.753350] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:29.306 [2024-12-16 13:24:43.753357] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:29.306 [2024-12-16 13:24:43.753364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:29.306 [2024-12-16 13:24:43.753372] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:29.306 [2024-12-16 13:24:43.753383] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:29.306 [2024-12-16 13:24:43.753391] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:29.306 [2024-12-16 13:24:43.753398] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:29.306 [2024-12-16 13:24:43.753405] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:29.306 [2024-12-16 13:24:43.753413] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:29.306 [2024-12-16 13:24:43.753421] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:29.306 [2024-12-16 13:24:43.753428] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:29.306 [2024-12-16 13:24:43.753435] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:29.306 [2024-12-16 13:24:43.753442] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:29.306 [2024-12-16 13:24:43.753449] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:29.306 [2024-12-16 13:24:43.753456] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:29.306 [2024-12-16 13:24:43.753463] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:29.306 [2024-12-16 13:24:43.753471] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:29.306 [2024-12-16 13:24:43.753479] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 
blk_sz:0x3d120 00:20:29.306 [2024-12-16 13:24:43.753487] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:29.306 [2024-12-16 13:24:43.753497] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:29.306 [2024-12-16 13:24:43.753506] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:29.306 [2024-12-16 13:24:43.753514] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:29.306 [2024-12-16 13:24:43.753522] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:29.306 [2024-12-16 13:24:43.753528] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:29.306 [2024-12-16 13:24:43.753536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.306 [2024-12-16 13:24:43.753544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:29.306 [2024-12-16 13:24:43.753553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:20:29.307 [2024-12-16 13:24:43.753560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.307 [2024-12-16 13:24:43.775685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.307 [2024-12-16 13:24:43.775736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:29.307 [2024-12-16 13:24:43.775749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.079 ms 00:20:29.307 [2024-12-16 13:24:43.775765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.307 [2024-12-16 13:24:43.775864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.307 [2024-12-16 13:24:43.775872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:29.307 [2024-12-16 13:24:43.775881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:29.307 [2024-12-16 13:24:43.775890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.307 [2024-12-16 13:24:43.823614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.307 [2024-12-16 13:24:43.823684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:29.307 [2024-12-16 13:24:43.823697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.668 ms 00:20:29.307 [2024-12-16 13:24:43.823706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.307 [2024-12-16 13:24:43.823762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.307 [2024-12-16 13:24:43.823773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:29.307 [2024-12-16 13:24:43.823782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:29.307 [2024-12-16 13:24:43.823791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.307 [2024-12-16 13:24:43.824521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.307 [2024-12-16 13:24:43.824557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:29.307 
[2024-12-16 13:24:43.824568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:20:29.307 [2024-12-16 13:24:43.824584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.307 [2024-12-16 13:24:43.824761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.307 [2024-12-16 13:24:43.824773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:29.307 [2024-12-16 13:24:43.824783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:20:29.307 [2024-12-16 13:24:43.824792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.307 [2024-12-16 13:24:43.844378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.307 [2024-12-16 13:24:43.844431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:29.307 [2024-12-16 13:24:43.844442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.557 ms 00:20:29.307 [2024-12-16 13:24:43.844451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.307 [2024-12-16 13:24:43.860309] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:20:29.307 [2024-12-16 13:24:43.860364] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:29.307 [2024-12-16 13:24:43.860377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.307 [2024-12-16 13:24:43.860386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:29.307 [2024-12-16 13:24:43.860397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.806 ms 00:20:29.307 [2024-12-16 13:24:43.860404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 [2024-12-16 13:24:43.887580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.567 [2024-12-16 13:24:43.887652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:29.567 [2024-12-16 13:24:43.887665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.115 ms 00:20:29.567 [2024-12-16 13:24:43.887674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 [2024-12-16 13:24:43.901386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.567 [2024-12-16 13:24:43.901437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:29.567 [2024-12-16 13:24:43.901450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.652 ms 00:20:29.567 [2024-12-16 13:24:43.901458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 [2024-12-16 13:24:43.914782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.567 [2024-12-16 13:24:43.914830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:29.567 [2024-12-16 13:24:43.914854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.271 ms 00:20:29.567 [2024-12-16 13:24:43.914862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 [2024-12-16 13:24:43.915276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.567 [2024-12-16 13:24:43.915290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:29.567 [2024-12-16 13:24:43.915301] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:20:29.567 [2024-12-16 13:24:43.915309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 [2024-12-16 13:24:43.988617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.567 [2024-12-16 13:24:43.988706] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:29.567 [2024-12-16 13:24:43.988722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.289 ms 00:20:29.567 [2024-12-16 13:24:43.988732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.567 [2024-12-16 13:24:44.001227] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:29.568 [2024-12-16 13:24:44.005448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.568 [2024-12-16 13:24:44.005497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:29.568 [2024-12-16 13:24:44.005511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.637 ms 00:20:29.568 [2024-12-16 13:24:44.005527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.568 [2024-12-16 13:24:44.005663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.568 [2024-12-16 13:24:44.005677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:29.568 [2024-12-16 13:24:44.005686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:29.568 [2024-12-16 13:24:44.005696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.568 [2024-12-16 13:24:44.007552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.568 [2024-12-16 13:24:44.007608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:29.568 [2024-12-16 13:24:44.007621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.815 ms 00:20:29.568 [2024-12-16 13:24:44.007651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.568 [2024-12-16 13:24:44.009215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.568 [2024-12-16 13:24:44.009263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:29.568 [2024-12-16 13:24:44.009275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.515 ms 00:20:29.568 [2024-12-16 13:24:44.009282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.568 [2024-12-16 13:24:44.009325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.568 [2024-12-16 13:24:44.009334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:29.568 [2024-12-16 13:24:44.009351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:29.568 [2024-12-16 13:24:44.009360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.568 [2024-12-16 13:24:44.009404] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:29.568 [2024-12-16 13:24:44.009415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.568 [2024-12-16 13:24:44.009428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:29.568 [2024-12-16 13:24:44.009437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:29.568 [2024-12-16 13:24:44.009445] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:29.568 [2024-12-16 13:24:44.037492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.568 [2024-12-16 13:24:44.037548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:29.568 [2024-12-16 13:24:44.037564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.025 ms 00:20:29.568 [2024-12-16 13:24:44.037573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.568 [2024-12-16 13:24:44.037697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.568 [2024-12-16 13:24:44.037708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:29.568 [2024-12-16 13:24:44.037719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:29.568 [2024-12-16 13:24:44.037728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.568 [2024-12-16 13:24:44.044138] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 325.581 ms, result 0 00:20:30.952  [2024-12-16T13:24:46.466Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-16T13:24:47.408Z] Copying: 32/1024 [MB] (19 MBps) [2024-12-16T13:24:48.354Z] Copying: 49/1024 [MB] (17 MBps) [2024-12-16T13:24:49.298Z] Copying: 59/1024 [MB] (10 MBps) [2024-12-16T13:24:50.242Z] Copying: 70/1024 [MB] (10 MBps) [2024-12-16T13:24:51.630Z] Copying: 81/1024 [MB] (10 MBps) [2024-12-16T13:24:52.574Z] Copying: 91/1024 [MB] (10 MBps) [2024-12-16T13:24:53.519Z] Copying: 103/1024 [MB] (12 MBps) [2024-12-16T13:24:54.464Z] Copying: 114/1024 [MB] (10 MBps) [2024-12-16T13:24:55.406Z] Copying: 124/1024 [MB] (10 MBps) [2024-12-16T13:24:56.347Z] Copying: 135/1024 [MB] (10 MBps) [2024-12-16T13:24:57.290Z] Copying: 149/1024 [MB] (14 MBps) [2024-12-16T13:24:58.674Z] Copying: 160/1024 [MB] (10 MBps) [2024-12-16T13:24:59.246Z] Copying: 171/1024 [MB] (11 MBps) [2024-12-16T13:25:00.248Z] Copying: 182/1024 [MB] (10 MBps) [2024-12-16T13:25:01.636Z] Copying: 193/1024 [MB] (10 MBps) [2024-12-16T13:25:02.584Z] Copying: 203/1024 [MB] (10 MBps) [2024-12-16T13:25:03.536Z] Copying: 213/1024 [MB] (10 MBps) [2024-12-16T13:25:04.480Z] Copying: 225/1024 [MB] (12 MBps) [2024-12-16T13:25:05.422Z] Copying: 236/1024 [MB] (10 MBps) [2024-12-16T13:25:06.365Z] Copying: 252/1024 [MB] (15 MBps) [2024-12-16T13:25:07.307Z] Copying: 275/1024 [MB] (22 MBps) [2024-12-16T13:25:08.252Z] Copying: 296/1024 [MB] (21 MBps) [2024-12-16T13:25:09.639Z] Copying: 312/1024 [MB] (15 MBps) [2024-12-16T13:25:10.582Z] Copying: 328/1024 [MB] (15 MBps) [2024-12-16T13:25:11.526Z] Copying: 348/1024 [MB] (20 MBps) [2024-12-16T13:25:12.469Z] Copying: 368/1024 [MB] (19 MBps) [2024-12-16T13:25:13.414Z] Copying: 383/1024 [MB] (15 MBps) [2024-12-16T13:25:14.359Z] Copying: 402/1024 [MB] (18 MBps) [2024-12-16T13:25:15.307Z] Copying: 419/1024 [MB] (16 MBps) [2024-12-16T13:25:16.248Z] Copying: 430/1024 [MB] (11 MBps) [2024-12-16T13:25:17.637Z] Copying: 449/1024 [MB] (18 MBps) [2024-12-16T13:25:18.583Z] Copying: 466/1024 [MB] (17 MBps) [2024-12-16T13:25:19.525Z] Copying: 479/1024 [MB] (12 MBps) [2024-12-16T13:25:20.462Z] Copying: 490/1024 [MB] (11 MBps) [2024-12-16T13:25:21.406Z] Copying: 504/1024 [MB] (13 MBps) [2024-12-16T13:25:22.350Z] Copying: 515/1024 [MB] (10 MBps) [2024-12-16T13:25:23.359Z] Copying: 526/1024 [MB] (10 MBps) [2024-12-16T13:25:24.295Z] Copying: 536/1024 [MB] (10 MBps) [2024-12-16T13:25:25.677Z] Copying: 550/1024 [MB] (13 MBps) [2024-12-16T13:25:26.243Z] 
Copying: 564/1024 [MB] (14 MBps) [2024-12-16T13:25:27.619Z] Copying: 578/1024 [MB] (14 MBps) [2024-12-16T13:25:28.554Z] Copying: 593/1024 [MB] (14 MBps) [2024-12-16T13:25:29.488Z] Copying: 607/1024 [MB] (14 MBps) [2024-12-16T13:25:30.422Z] Copying: 622/1024 [MB] (14 MBps) [2024-12-16T13:25:31.361Z] Copying: 635/1024 [MB] (13 MBps) [2024-12-16T13:25:32.295Z] Copying: 647/1024 [MB] (11 MBps) [2024-12-16T13:25:33.680Z] Copying: 662/1024 [MB] (14 MBps) [2024-12-16T13:25:34.252Z] Copying: 673/1024 [MB] (11 MBps) [2024-12-16T13:25:35.638Z] Copying: 687/1024 [MB] (13 MBps) [2024-12-16T13:25:36.581Z] Copying: 697/1024 [MB] (10 MBps) [2024-12-16T13:25:37.525Z] Copying: 708/1024 [MB] (10 MBps) [2024-12-16T13:25:38.468Z] Copying: 719/1024 [MB] (10 MBps) [2024-12-16T13:25:39.410Z] Copying: 729/1024 [MB] (10 MBps) [2024-12-16T13:25:40.353Z] Copying: 740/1024 [MB] (10 MBps) [2024-12-16T13:25:41.296Z] Copying: 753/1024 [MB] (13 MBps) [2024-12-16T13:25:42.239Z] Copying: 769/1024 [MB] (15 MBps) [2024-12-16T13:25:43.627Z] Copying: 788/1024 [MB] (19 MBps) [2024-12-16T13:25:44.571Z] Copying: 802/1024 [MB] (13 MBps) [2024-12-16T13:25:45.513Z] Copying: 817/1024 [MB] (15 MBps) [2024-12-16T13:25:46.511Z] Copying: 845/1024 [MB] (27 MBps) [2024-12-16T13:25:47.455Z] Copying: 867/1024 [MB] (22 MBps) [2024-12-16T13:25:48.398Z] Copying: 889/1024 [MB] (22 MBps) [2024-12-16T13:25:49.340Z] Copying: 907/1024 [MB] (17 MBps) [2024-12-16T13:25:50.283Z] Copying: 928/1024 [MB] (21 MBps) [2024-12-16T13:25:51.670Z] Copying: 948/1024 [MB] (19 MBps) [2024-12-16T13:25:52.242Z] Copying: 967/1024 [MB] (19 MBps) [2024-12-16T13:25:53.627Z] Copying: 985/1024 [MB] (17 MBps) [2024-12-16T13:25:54.570Z] Copying: 1003/1024 [MB] (17 MBps) [2024-12-16T13:25:55.142Z] Copying: 1015/1024 [MB] (12 MBps) [2024-12-16T13:25:56.528Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-16 13:25:56.207709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.954 [2024-12-16 13:25:56.207825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:41.954 [2024-12-16 13:25:56.207893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:41.954 [2024-12-16 13:25:56.207912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.954 [2024-12-16 13:25:56.207965] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:41.954 [2024-12-16 13:25:56.214386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.954 [2024-12-16 13:25:56.214748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:41.954 [2024-12-16 13:25:56.214791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.386 ms 00:21:41.954 [2024-12-16 13:25:56.214809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.954 [2024-12-16 13:25:56.215380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.954 [2024-12-16 13:25:56.215405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:41.954 [2024-12-16 13:25:56.215432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:21:41.954 [2024-12-16 13:25:56.215449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.954 [2024-12-16 13:25:56.223680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.954 [2024-12-16 13:25:56.223723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:21:41.954 [2024-12-16 13:25:56.223735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.197 ms 00:21:41.954 [2024-12-16 13:25:56.223745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.954 [2024-12-16 13:25:56.229993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.954 [2024-12-16 13:25:56.230193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:41.954 [2024-12-16 13:25:56.230220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.201 ms 00:21:41.954 [2024-12-16 13:25:56.230238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.954 [2024-12-16 13:25:56.260622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.954 [2024-12-16 13:25:56.260670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:41.954 [2024-12-16 13:25:56.260684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.320 ms 00:21:41.954 [2024-12-16 13:25:56.260694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.954 [2024-12-16 13:25:56.278200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.954 [2024-12-16 13:25:56.278241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:41.954 [2024-12-16 13:25:56.278255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.456 ms 00:21:41.954 [2024-12-16 13:25:56.278266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.216 [2024-12-16 13:25:56.659876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.216 [2024-12-16 13:25:56.659941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:42.216 [2024-12-16 13:25:56.659956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 381.551 ms 00:21:42.216 [2024-12-16 13:25:56.659965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.216 [2024-12-16 13:25:56.687459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.216 [2024-12-16 13:25:56.687510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:42.216 [2024-12-16 13:25:56.687523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.468 ms 00:21:42.216 [2024-12-16 13:25:56.687531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.216 [2024-12-16 13:25:56.713586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.216 [2024-12-16 13:25:56.713794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:42.216 [2024-12-16 13:25:56.713817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.007 ms 00:21:42.216 [2024-12-16 13:25:56.713840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.216 [2024-12-16 13:25:56.740022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.216 [2024-12-16 13:25:56.740079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:42.216 [2024-12-16 13:25:56.740095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.870 ms 00:21:42.216 [2024-12-16 13:25:56.740103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.216 [2024-12-16 13:25:56.765393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.216 [2024-12-16 13:25:56.765443] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:42.216 [2024-12-16 13:25:56.765456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.183 ms 00:21:42.216 [2024-12-16 13:25:56.765463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.216 [2024-12-16 13:25:56.765510] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:42.216 [2024-12-16 13:25:56.765530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open
[Band 2 through Band 100 each report: 0 / 261120 wr_cnt: 0 state: free -- 99 identical ftl_dev_dump_bands records condensed]
00:21:42.217 [2024-12-16 13:25:56.766401] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:42.217 [2024-12-16 13:25:56.766410] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3ed509dd-a3fe-4b22-bf76-2ded1e0e3da3 00:21:42.217 [2024-12-16 13:25:56.766418] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:21:42.217 [2024-12-16 13:25:56.766426] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 35008 00:21:42.217 [2024-12-16 13:25:56.766436] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 34048 00:21:42.217 [2024-12-16 13:25:56.766454] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0282 00:21:42.217 [2024-12-16 13:25:56.766462] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:42.217 [2024-12-16 13:25:56.766471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:42.217 [2024-12-16 13:25:56.766479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:42.217 [2024-12-16 13:25:56.766486] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:42.217 [2024-12-16 13:25:56.766500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:42.217 [2024-12-16 13:25:56.766508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.217 [2024-12-16 13:25:56.766517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:42.217 [2024-12-16 13:25:56.766526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.000 ms 00:21:42.217 [2024-12-16 13:25:56.766535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.217 [2024-12-16 13:25:56.781269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.217 [2024-12-16 13:25:56.781320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:42.217 [2024-12-16 13:25:56.781332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.698 ms 00:21:42.217 [2024-12-16 13:25:56.781340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.217 [2024-12-16 13:25:56.781589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.217 [2024-12-16 13:25:56.781599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:42.217 [2024-12-16 13:25:56.781608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:21:42.217 [2024-12-16 13:25:56.781616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.478 [2024-12-16 13:25:56.823822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.478 [2024-12-16 13:25:56.824035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:42.478 [2024-12-16 13:25:56.824059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.478 [2024-12-16 13:25:56.824069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
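
The statistics block above allows a quick consistency check: write amplification (WAF) is total media writes divided by user writes, 35008 / 34048 ≈ 1.0282, matching the reported value, and the 133888 total valid LBAs equal the Band 1 validity count, consistent with a single open band. For example:

  $ awk 'BEGIN { printf "WAF: %.4f\n", 35008 / 34048 }'
  WAF: 1.0282
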
00:21:42.478 [2024-12-16 13:25:56.824155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.478 [2024-12-16 13:25:56.824165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:42.478 [2024-12-16 13:25:56.824175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.478 [2024-12-16 13:25:56.824183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.478 [2024-12-16 13:25:56.824273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.478 [2024-12-16 13:25:56.824291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:42.478 [2024-12-16 13:25:56.824301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.478 [2024-12-16 13:25:56.824314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.478 [2024-12-16 13:25:56.824331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.478 [2024-12-16 13:25:56.824340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:42.478 [2024-12-16 13:25:56.824349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.478 [2024-12-16 13:25:56.824357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.478 [2024-12-16 13:25:56.911049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.478 [2024-12-16 13:25:56.911272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:42.478 [2024-12-16 13:25:56.911295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.478 [2024-12-16 13:25:56.911306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.478 [2024-12-16 13:25:56.946433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.478 [2024-12-16 13:25:56.946611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:42.478 [2024-12-16 13:25:56.946650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.478 [2024-12-16 13:25:56.946661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.478 [2024-12-16 13:25:56.946750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.478 [2024-12-16 13:25:56.946761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:42.478 [2024-12-16 13:25:56.946779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.478 [2024-12-16 13:25:56.946789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.478 [2024-12-16 13:25:56.946838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.478 [2024-12-16 13:25:56.946848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:42.478 [2024-12-16 13:25:56.946858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.478 [2024-12-16 13:25:56.946867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.478 [2024-12-16 13:25:56.946993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.478 [2024-12-16 13:25:56.947004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:42.478 [2024-12-16 13:25:56.947014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:21:42.478 [2024-12-16 13:25:56.947026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.478 [2024-12-16 13:25:56.947061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.478 [2024-12-16 13:25:56.947072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:42.478 [2024-12-16 13:25:56.947081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.478 [2024-12-16 13:25:56.947090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.478 [2024-12-16 13:25:56.947146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.478 [2024-12-16 13:25:56.947157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:42.478 [2024-12-16 13:25:56.947166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.478 [2024-12-16 13:25:56.947178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.478 [2024-12-16 13:25:56.947242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.478 [2024-12-16 13:25:56.947252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:42.478 [2024-12-16 13:25:56.947262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.478 [2024-12-16 13:25:56.947270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.478 [2024-12-16 13:25:56.947436] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 739.768 ms, result 0 00:21:43.419 00:21:43.419 00:21:43.419 13:25:57 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:45.963 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 13:26:00 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 13:26:00 -- ftl/restore.sh@85 -- # restore_kill 13:26:00 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 13:26:00 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 13:26:00 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:45.963 Process with pid 72616 is not found 00:21:45.963 Remove shared memory files 13:26:00 -- ftl/restore.sh@32 -- # killprocess 72616 13:26:00 -- common/autotest_common.sh@936 -- # '[' -z 72616 ']' 13:26:00 -- common/autotest_common.sh@940 -- # kill -0 72616 00:21:45.963 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (72616) - No such process 13:26:00 -- common/autotest_common.sh@963 -- # echo 'Process with pid 72616 is not found' 13:26:00 -- ftl/restore.sh@33 -- # remove_shm 13:26:00 -- ftl/common.sh@204 -- # echo Remove shared memory files 13:26:00 -- ftl/common.sh@205 -- # rm -f rm -f 13:26:00 -- ftl/common.sh@206 -- # rm -f rm -f 13:26:00 -- ftl/common.sh@207 -- # rm -f rm -f 13:26:00 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 13:26:00 -- ftl/common.sh@209 -- # rm -f rm -f 00:21:45.963 ************************ 00:21:45.963 END TEST ftl_restore 00:21:45.963 ************************ 00:21:45.963 00:21:45.963 real 4m42.398s 00:21:45.963 user 4m28.123s 00:21:45.963 sys 0m13.142s
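
The testfile.md5 check at restore.sh line 82 is the heart of this test: a checksum recorded before the dirty shutdown must still verify after the FTL device is brought back up. Reduced to its core, the pattern looks like this (a sketch with placeholder paths, not the script verbatim):

  md5sum testfile > testfile.md5    # recorded while the device is live
  # ... dirty shutdown, then FTL restore ...
  md5sum -c testfile.md5            # prints 'testfile: OK' only if the restored data matches
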
13:26:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:45.963 13:26:00 -- common/autotest_common.sh@10 -- # set +x 00:21:45.963 13:26:00 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 13:26:00 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 13:26:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 13:26:00 -- common/autotest_common.sh@10 -- # set +x ************************ START TEST ftl_dirty_shutdown ************************ 13:26:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 * Looking for test storage... * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 13:26:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 13:26:00 -- common/autotest_common.sh@1690 -- # lcov --version 13:26:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 13:26:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 13:26:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 13:26:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 13:26:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 13:26:00 -- scripts/common.sh@335 -- # IFS=.-: 13:26:00 -- scripts/common.sh@335 -- # read -ra ver1 13:26:00 -- scripts/common.sh@336 -- # IFS=.-: 13:26:00 -- scripts/common.sh@336 -- # read -ra ver2 13:26:00 -- scripts/common.sh@337 -- # local 'op=<' 13:26:00 -- scripts/common.sh@339 -- # ver1_l=2 13:26:00 -- scripts/common.sh@340 -- # ver2_l=1 13:26:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 13:26:00 -- scripts/common.sh@343 -- # case "$op" in 13:26:00 -- scripts/common.sh@344 -- # : 1 13:26:00 -- scripts/common.sh@363 -- # (( v = 0 )) 13:26:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:45.963 13:26:00 -- scripts/common.sh@364 -- # decimal 1 00:21:45.963 13:26:00 -- scripts/common.sh@352 -- # local d=1 00:21:45.963 13:26:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:45.963 13:26:00 -- scripts/common.sh@354 -- # echo 1 00:21:45.963 13:26:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:45.963 13:26:00 -- scripts/common.sh@365 -- # decimal 2 00:21:45.963 13:26:00 -- scripts/common.sh@352 -- # local d=2 00:21:45.963 13:26:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:45.963 13:26:00 -- scripts/common.sh@354 -- # echo 2 00:21:45.963 13:26:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:45.963 13:26:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:45.963 13:26:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:45.963 13:26:00 -- scripts/common.sh@367 -- # return 0 00:21:45.963 13:26:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:45.963 13:26:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:45.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.963 --rc genhtml_branch_coverage=1 00:21:45.963 --rc genhtml_function_coverage=1 00:21:45.963 --rc genhtml_legend=1 00:21:45.963 --rc geninfo_all_blocks=1 00:21:45.963 --rc geninfo_unexecuted_blocks=1 00:21:45.963 00:21:45.963 ' 00:21:45.963 13:26:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:45.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.963 --rc genhtml_branch_coverage=1 00:21:45.963 --rc genhtml_function_coverage=1 00:21:45.963 --rc genhtml_legend=1 00:21:45.963 --rc geninfo_all_blocks=1 00:21:45.963 --rc geninfo_unexecuted_blocks=1 00:21:45.963 00:21:45.963 ' 00:21:45.963 13:26:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:45.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.963 --rc genhtml_branch_coverage=1 00:21:45.963 --rc genhtml_function_coverage=1 00:21:45.963 --rc genhtml_legend=1 00:21:45.963 --rc geninfo_all_blocks=1 00:21:45.963 --rc geninfo_unexecuted_blocks=1 00:21:45.963 00:21:45.963 ' 00:21:45.963 13:26:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:45.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.963 --rc genhtml_branch_coverage=1 00:21:45.963 --rc genhtml_function_coverage=1 00:21:45.963 --rc genhtml_legend=1 00:21:45.963 --rc geninfo_all_blocks=1 00:21:45.963 --rc geninfo_unexecuted_blocks=1 00:21:45.963 00:21:45.963 ' 00:21:45.963 13:26:00 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:45.963 13:26:00 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:21:45.963 13:26:00 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:45.963 13:26:00 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:45.963 13:26:00 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
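
The cmp_versions walk above splits each version string on '.', '-' and ':' (the IFS=.-: lines) and compares field by field to conclude that lcov 1.15 sorts before 2. Where GNU coreutils is available, the same ordering test can be written more compactly with sort -V (an alternative sketch, not what scripts/common.sh actually does):

  lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
  lt 1.15 2 && echo older   # prints: older
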
00:21:45.963 13:26:00 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:45.963 13:26:00 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:45.963 13:26:00 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:45.963 13:26:00 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:45.963 13:26:00 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:45.963 13:26:00 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:45.963 13:26:00 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:45.963 13:26:00 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:45.963 13:26:00 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:45.963 13:26:00 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:45.963 13:26:00 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:45.963 13:26:00 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:45.963 13:26:00 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:45.963 13:26:00 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:45.963 13:26:00 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:45.964 13:26:00 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:45.964 13:26:00 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:45.964 13:26:00 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:45.964 13:26:00 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:45.964 13:26:00 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:45.964 13:26:00 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:45.964 13:26:00 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:45.964 13:26:00 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:45.964 13:26:00 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:45.964 13:26:00 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:45.964 13:26:00 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:45.964 13:26:00 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:45.964 13:26:00 -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:21:45.964 13:26:00 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0 00:21:45.964 13:26:00 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:45.964 13:26:00 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:21:45.964 13:26:00 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0 00:21:45.964 13:26:00 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:21:45.964 13:26:00 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:21:45.964 13:26:00 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:21:45.964 13:26:00 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:21:45.964 13:26:00 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:45.964 13:26:00 -- ftl/dirty_shutdown.sh@45 -- # svcpid=75651 00:21:45.964 13:26:00 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 75651 00:21:45.964 13:26:00 -- common/autotest_common.sh@829 -- # '[' -z 75651 ']' 00:21:45.964 13:26:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.964 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.964 13:26:00 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:45.964 13:26:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:45.964 13:26:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.964 13:26:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:45.964 13:26:00 -- common/autotest_common.sh@10 -- # set +x 00:21:46.225 [2024-12-16 13:26:00.582317] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:46.225 [2024-12-16 13:26:00.582742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75651 ] 00:21:46.225 [2024-12-16 13:26:00.739517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.487 [2024-12-16 13:26:01.019565] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:46.487 [2024-12-16 13:26:01.020058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.873 13:26:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:47.873 13:26:02 -- common/autotest_common.sh@862 -- # return 0 00:21:47.873 13:26:02 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:21:47.873 13:26:02 -- ftl/common.sh@54 -- # local name=nvme0 00:21:47.873 13:26:02 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:21:47.873 13:26:02 -- ftl/common.sh@56 -- # local size=103424 00:21:47.873 13:26:02 -- ftl/common.sh@59 -- # local base_bdev 00:21:47.873 13:26:02 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:21:47.873 13:26:02 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:47.873 13:26:02 -- ftl/common.sh@62 -- # local base_size 00:21:47.873 13:26:02 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:47.873 13:26:02 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:21:47.873 13:26:02 -- common/autotest_common.sh@1368 -- # local bdev_info 00:21:47.873 13:26:02 -- common/autotest_common.sh@1369 -- # local bs 00:21:47.873 13:26:02 -- common/autotest_common.sh@1370 -- # local nb 00:21:47.873 13:26:02 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:48.135 13:26:02 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:21:48.135 { 00:21:48.135 "name": "nvme0n1", 00:21:48.135 "aliases": [ 00:21:48.135 "c6148bf2-d84e-40fd-992b-4fbe4cfc77cd" 00:21:48.135 ], 00:21:48.135 "product_name": "NVMe disk", 00:21:48.135 "block_size": 4096, 00:21:48.135 "num_blocks": 1310720, 00:21:48.135 "uuid": "c6148bf2-d84e-40fd-992b-4fbe4cfc77cd", 00:21:48.135 "assigned_rate_limits": { 00:21:48.135 "rw_ios_per_sec": 0, 00:21:48.135 "rw_mbytes_per_sec": 0, 00:21:48.135 "r_mbytes_per_sec": 0, 00:21:48.135 "w_mbytes_per_sec": 0 00:21:48.135 }, 00:21:48.135 "claimed": true, 00:21:48.135 "claim_type": "read_many_write_one", 00:21:48.135 "zoned": false, 00:21:48.135 "supported_io_types": { 00:21:48.135 "read": true, 00:21:48.135 "write": true, 00:21:48.135 "unmap": true, 00:21:48.135 "write_zeroes": true, 00:21:48.135 "flush": true, 00:21:48.135 "reset": true, 00:21:48.135 "compare": 
true, 00:21:48.135 "compare_and_write": false, 00:21:48.135 "abort": true, 00:21:48.135 "nvme_admin": true, 00:21:48.135 "nvme_io": true 00:21:48.135 }, 00:21:48.135 "driver_specific": { 00:21:48.135 "nvme": [ 00:21:48.135 { 00:21:48.135 "pci_address": "0000:00:07.0", 00:21:48.135 "trid": { 00:21:48.135 "trtype": "PCIe", 00:21:48.135 "traddr": "0000:00:07.0" 00:21:48.135 }, 00:21:48.135 "ctrlr_data": { 00:21:48.135 "cntlid": 0, 00:21:48.135 "vendor_id": "0x1b36", 00:21:48.135 "model_number": "QEMU NVMe Ctrl", 00:21:48.135 "serial_number": "12341", 00:21:48.135 "firmware_revision": "8.0.0", 00:21:48.135 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:48.135 "oacs": { 00:21:48.135 "security": 0, 00:21:48.135 "format": 1, 00:21:48.135 "firmware": 0, 00:21:48.135 "ns_manage": 1 00:21:48.135 }, 00:21:48.135 "multi_ctrlr": false, 00:21:48.135 "ana_reporting": false 00:21:48.135 }, 00:21:48.135 "vs": { 00:21:48.135 "nvme_version": "1.4" 00:21:48.135 }, 00:21:48.135 "ns_data": { 00:21:48.135 "id": 1, 00:21:48.135 "can_share": false 00:21:48.135 } 00:21:48.135 } 00:21:48.135 ], 00:21:48.135 "mp_policy": "active_passive" 00:21:48.135 } 00:21:48.135 } 00:21:48.135 ]' 00:21:48.135 13:26:02 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:21:48.135 13:26:02 -- common/autotest_common.sh@1372 -- # bs=4096 00:21:48.135 13:26:02 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:21:48.135 13:26:02 -- common/autotest_common.sh@1373 -- # nb=1310720 00:21:48.135 13:26:02 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:21:48.135 13:26:02 -- common/autotest_common.sh@1377 -- # echo 5120 00:21:48.135 13:26:02 -- ftl/common.sh@63 -- # base_size=5120 00:21:48.135 13:26:02 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:48.135 13:26:02 -- ftl/common.sh@67 -- # clear_lvols 00:21:48.135 13:26:02 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:48.135 13:26:02 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:48.397 13:26:02 -- ftl/common.sh@28 -- # stores=8baf27b7-b438-434a-9c96-425185719972 00:21:48.397 13:26:02 -- ftl/common.sh@29 -- # for lvs in $stores 00:21:48.397 13:26:02 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8baf27b7-b438-434a-9c96-425185719972 00:21:48.658 13:26:03 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:48.919 13:26:03 -- ftl/common.sh@68 -- # lvs=b02f95bf-349a-4d1e-abbf-472708657825 00:21:48.919 13:26:03 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b02f95bf-349a-4d1e-abbf-472708657825 00:21:49.180 13:26:03 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=dac3f9b2-e376-45da-992b-a051822bf6fe 00:21:49.180 13:26:03 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:21:49.180 13:26:03 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 dac3f9b2-e376-45da-992b-a051822bf6fe 00:21:49.180 13:26:03 -- ftl/common.sh@35 -- # local name=nvc0 00:21:49.180 13:26:03 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:21:49.180 13:26:03 -- ftl/common.sh@37 -- # local base_bdev=dac3f9b2-e376-45da-992b-a051822bf6fe 00:21:49.180 13:26:03 -- ftl/common.sh@38 -- # local cache_size= 00:21:49.180 13:26:03 -- ftl/common.sh@41 -- # get_bdev_size dac3f9b2-e376-45da-992b-a051822bf6fe 00:21:49.180 13:26:03 -- common/autotest_common.sh@1367 -- # local bdev_name=dac3f9b2-e376-45da-992b-a051822bf6fe 00:21:49.180 13:26:03 -- 
common/autotest_common.sh@1368 -- # local bdev_info 00:21:49.180 13:26:03 -- common/autotest_common.sh@1369 -- # local bs 00:21:49.180 13:26:03 -- common/autotest_common.sh@1370 -- # local nb 00:21:49.180 13:26:03 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dac3f9b2-e376-45da-992b-a051822bf6fe 00:21:49.180 13:26:03 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:21:49.180 { 00:21:49.180 "name": "dac3f9b2-e376-45da-992b-a051822bf6fe", 00:21:49.180 "aliases": [ 00:21:49.180 "lvs/nvme0n1p0" 00:21:49.180 ], 00:21:49.180 "product_name": "Logical Volume", 00:21:49.180 "block_size": 4096, 00:21:49.180 "num_blocks": 26476544, 00:21:49.180 "uuid": "dac3f9b2-e376-45da-992b-a051822bf6fe", 00:21:49.180 "assigned_rate_limits": { 00:21:49.180 "rw_ios_per_sec": 0, 00:21:49.180 "rw_mbytes_per_sec": 0, 00:21:49.180 "r_mbytes_per_sec": 0, 00:21:49.180 "w_mbytes_per_sec": 0 00:21:49.180 }, 00:21:49.180 "claimed": false, 00:21:49.180 "zoned": false, 00:21:49.180 "supported_io_types": { 00:21:49.180 "read": true, 00:21:49.180 "write": true, 00:21:49.180 "unmap": true, 00:21:49.180 "write_zeroes": true, 00:21:49.180 "flush": false, 00:21:49.180 "reset": true, 00:21:49.180 "compare": false, 00:21:49.180 "compare_and_write": false, 00:21:49.180 "abort": false, 00:21:49.180 "nvme_admin": false, 00:21:49.180 "nvme_io": false 00:21:49.180 }, 00:21:49.180 "driver_specific": { 00:21:49.180 "lvol": { 00:21:49.180 "lvol_store_uuid": "b02f95bf-349a-4d1e-abbf-472708657825", 00:21:49.180 "base_bdev": "nvme0n1", 00:21:49.180 "thin_provision": true, 00:21:49.180 "snapshot": false, 00:21:49.180 "clone": false, 00:21:49.180 "esnap_clone": false 00:21:49.180 } 00:21:49.180 } 00:21:49.180 } 00:21:49.180 ]' 00:21:49.180 13:26:03 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:21:49.441 13:26:03 -- common/autotest_common.sh@1372 -- # bs=4096 00:21:49.441 13:26:03 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:21:49.441 13:26:03 -- common/autotest_common.sh@1373 -- # nb=26476544 00:21:49.441 13:26:03 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:21:49.441 13:26:03 -- common/autotest_common.sh@1377 -- # echo 103424 00:21:49.441 13:26:03 -- ftl/common.sh@41 -- # local base_size=5171 00:21:49.441 13:26:03 -- ftl/common.sh@44 -- # local nvc_bdev 00:21:49.441 13:26:03 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:21:49.702 13:26:04 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:49.702 13:26:04 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:49.702 13:26:04 -- ftl/common.sh@48 -- # get_bdev_size dac3f9b2-e376-45da-992b-a051822bf6fe 00:21:49.702 13:26:04 -- common/autotest_common.sh@1367 -- # local bdev_name=dac3f9b2-e376-45da-992b-a051822bf6fe 00:21:49.702 13:26:04 -- common/autotest_common.sh@1368 -- # local bdev_info 00:21:49.702 13:26:04 -- common/autotest_common.sh@1369 -- # local bs 00:21:49.702 13:26:04 -- common/autotest_common.sh@1370 -- # local nb 00:21:49.702 13:26:04 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dac3f9b2-e376-45da-992b-a051822bf6fe 00:21:49.702 13:26:04 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:21:49.702 { 00:21:49.702 "name": "dac3f9b2-e376-45da-992b-a051822bf6fe", 00:21:49.702 "aliases": [ 00:21:49.702 "lvs/nvme0n1p0" 00:21:49.702 ], 00:21:49.702 "product_name": "Logical Volume", 00:21:49.702 "block_size": 4096, 00:21:49.702 "num_blocks": 26476544, 
00:21:49.702 "uuid": "dac3f9b2-e376-45da-992b-a051822bf6fe", 00:21:49.702 "assigned_rate_limits": { 00:21:49.702 "rw_ios_per_sec": 0, 00:21:49.702 "rw_mbytes_per_sec": 0, 00:21:49.702 "r_mbytes_per_sec": 0, 00:21:49.702 "w_mbytes_per_sec": 0 00:21:49.702 }, 00:21:49.702 "claimed": false, 00:21:49.702 "zoned": false, 00:21:49.702 "supported_io_types": { 00:21:49.702 "read": true, 00:21:49.702 "write": true, 00:21:49.702 "unmap": true, 00:21:49.702 "write_zeroes": true, 00:21:49.702 "flush": false, 00:21:49.702 "reset": true, 00:21:49.702 "compare": false, 00:21:49.702 "compare_and_write": false, 00:21:49.702 "abort": false, 00:21:49.702 "nvme_admin": false, 00:21:49.702 "nvme_io": false 00:21:49.702 }, 00:21:49.702 "driver_specific": { 00:21:49.702 "lvol": { 00:21:49.702 "lvol_store_uuid": "b02f95bf-349a-4d1e-abbf-472708657825", 00:21:49.702 "base_bdev": "nvme0n1", 00:21:49.702 "thin_provision": true, 00:21:49.702 "snapshot": false, 00:21:49.702 "clone": false, 00:21:49.702 "esnap_clone": false 00:21:49.702 } 00:21:49.702 } 00:21:49.702 } 00:21:49.702 ]' 00:21:49.702 13:26:04 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:21:49.702 13:26:04 -- common/autotest_common.sh@1372 -- # bs=4096 00:21:49.702 13:26:04 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:21:49.963 13:26:04 -- common/autotest_common.sh@1373 -- # nb=26476544 00:21:49.963 13:26:04 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:21:49.963 13:26:04 -- common/autotest_common.sh@1377 -- # echo 103424 00:21:49.963 13:26:04 -- ftl/common.sh@48 -- # cache_size=5171 00:21:49.963 13:26:04 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:49.963 13:26:04 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:21:49.963 13:26:04 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size dac3f9b2-e376-45da-992b-a051822bf6fe 00:21:49.963 13:26:04 -- common/autotest_common.sh@1367 -- # local bdev_name=dac3f9b2-e376-45da-992b-a051822bf6fe 00:21:49.963 13:26:04 -- common/autotest_common.sh@1368 -- # local bdev_info 00:21:49.963 13:26:04 -- common/autotest_common.sh@1369 -- # local bs 00:21:49.963 13:26:04 -- common/autotest_common.sh@1370 -- # local nb 00:21:49.963 13:26:04 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dac3f9b2-e376-45da-992b-a051822bf6fe 00:21:50.224 13:26:04 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:21:50.224 { 00:21:50.224 "name": "dac3f9b2-e376-45da-992b-a051822bf6fe", 00:21:50.224 "aliases": [ 00:21:50.224 "lvs/nvme0n1p0" 00:21:50.224 ], 00:21:50.224 "product_name": "Logical Volume", 00:21:50.224 "block_size": 4096, 00:21:50.224 "num_blocks": 26476544, 00:21:50.224 "uuid": "dac3f9b2-e376-45da-992b-a051822bf6fe", 00:21:50.224 "assigned_rate_limits": { 00:21:50.224 "rw_ios_per_sec": 0, 00:21:50.224 "rw_mbytes_per_sec": 0, 00:21:50.224 "r_mbytes_per_sec": 0, 00:21:50.224 "w_mbytes_per_sec": 0 00:21:50.224 }, 00:21:50.224 "claimed": false, 00:21:50.224 "zoned": false, 00:21:50.224 "supported_io_types": { 00:21:50.224 "read": true, 00:21:50.224 "write": true, 00:21:50.224 "unmap": true, 00:21:50.224 "write_zeroes": true, 00:21:50.224 "flush": false, 00:21:50.224 "reset": true, 00:21:50.224 "compare": false, 00:21:50.224 "compare_and_write": false, 00:21:50.224 "abort": false, 00:21:50.224 "nvme_admin": false, 00:21:50.224 "nvme_io": false 00:21:50.224 }, 00:21:50.224 "driver_specific": { 00:21:50.224 "lvol": { 00:21:50.224 "lvol_store_uuid": 
"b02f95bf-349a-4d1e-abbf-472708657825", 00:21:50.224 "base_bdev": "nvme0n1", 00:21:50.224 "thin_provision": true, 00:21:50.224 "snapshot": false, 00:21:50.224 "clone": false, 00:21:50.224 "esnap_clone": false 00:21:50.224 } 00:21:50.224 } 00:21:50.224 } 00:21:50.224 ]' 00:21:50.224 13:26:04 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:21:50.224 13:26:04 -- common/autotest_common.sh@1372 -- # bs=4096 00:21:50.224 13:26:04 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:21:50.224 13:26:04 -- common/autotest_common.sh@1373 -- # nb=26476544 00:21:50.224 13:26:04 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:21:50.224 13:26:04 -- common/autotest_common.sh@1377 -- # echo 103424 00:21:50.224 13:26:04 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:21:50.224 13:26:04 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d dac3f9b2-e376-45da-992b-a051822bf6fe --l2p_dram_limit 10' 00:21:50.224 13:26:04 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:21:50.224 13:26:04 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:21:50.224 13:26:04 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:50.224 13:26:04 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d dac3f9b2-e376-45da-992b-a051822bf6fe --l2p_dram_limit 10 -c nvc0n1p0 00:21:50.486 [2024-12-16 13:26:04.934426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.486 [2024-12-16 13:26:04.934473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:50.486 [2024-12-16 13:26:04.934487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:50.486 [2024-12-16 13:26:04.934496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.486 [2024-12-16 13:26:04.934540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.486 [2024-12-16 13:26:04.934548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:50.486 [2024-12-16 13:26:04.934556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:50.486 [2024-12-16 13:26:04.934562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.486 [2024-12-16 13:26:04.934579] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:50.486 [2024-12-16 13:26:04.935134] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:50.486 [2024-12-16 13:26:04.935156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.486 [2024-12-16 13:26:04.935163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:50.486 [2024-12-16 13:26:04.935171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:21:50.486 [2024-12-16 13:26:04.935177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.486 [2024-12-16 13:26:04.935230] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 631ac7f5-b8f3-43af-8b2c-7356abbb6320 00:21:50.486 [2024-12-16 13:26:04.936527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.486 [2024-12-16 13:26:04.936550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:50.486 [2024-12-16 13:26:04.936560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.028 ms 00:21:50.486 [2024-12-16 13:26:04.936569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.486 [2024-12-16 13:26:04.943481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.486 [2024-12-16 13:26:04.943512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:50.486 [2024-12-16 13:26:04.943519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.878 ms 00:21:50.486 [2024-12-16 13:26:04.943528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.486 [2024-12-16 13:26:04.943599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.486 [2024-12-16 13:26:04.943609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:50.486 [2024-12-16 13:26:04.943615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:50.486 [2024-12-16 13:26:04.943635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.486 [2024-12-16 13:26:04.943675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.486 [2024-12-16 13:26:04.943687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:50.486 [2024-12-16 13:26:04.943693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:50.486 [2024-12-16 13:26:04.943702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.486 [2024-12-16 13:26:04.943722] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:50.486 [2024-12-16 13:26:04.947086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.486 [2024-12-16 13:26:04.947112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:50.486 [2024-12-16 13:26:04.947121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.368 ms 00:21:50.486 [2024-12-16 13:26:04.947127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.486 [2024-12-16 13:26:04.947158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.486 [2024-12-16 13:26:04.947164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:50.486 [2024-12-16 13:26:04.947172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:50.486 [2024-12-16 13:26:04.947178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.486 [2024-12-16 13:26:04.947193] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:50.486 [2024-12-16 13:26:04.947288] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:50.486 [2024-12-16 13:26:04.947301] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:50.486 [2024-12-16 13:26:04.947310] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:50.486 [2024-12-16 13:26:04.947320] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:50.486 [2024-12-16 13:26:04.947326] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:50.486 [2024-12-16 13:26:04.947335] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:50.486 [2024-12-16 
13:26:04.947348] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:50.486 [2024-12-16 13:26:04.947355] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:50.486 [2024-12-16 13:26:04.947360] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:50.486 [2024-12-16 13:26:04.947368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.486 [2024-12-16 13:26:04.947374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:50.486 [2024-12-16 13:26:04.947382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:21:50.486 [2024-12-16 13:26:04.947388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.486 [2024-12-16 13:26:04.947440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.486 [2024-12-16 13:26:04.947446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:50.486 [2024-12-16 13:26:04.947454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:50.486 [2024-12-16 13:26:04.947461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.486 [2024-12-16 13:26:04.947520] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:50.486 [2024-12-16 13:26:04.947528] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:50.486 [2024-12-16 13:26:04.947536] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:50.486 [2024-12-16 13:26:04.947542] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.486 [2024-12-16 13:26:04.947549] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:50.486 [2024-12-16 13:26:04.947554] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:50.486 [2024-12-16 13:26:04.947561] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:50.486 [2024-12-16 13:26:04.947566] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:50.486 [2024-12-16 13:26:04.947572] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:50.486 [2024-12-16 13:26:04.947578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:50.486 [2024-12-16 13:26:04.947584] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:50.486 [2024-12-16 13:26:04.947590] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:50.486 [2024-12-16 13:26:04.947597] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:50.486 [2024-12-16 13:26:04.947602] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:50.486 [2024-12-16 13:26:04.947609] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:50.486 [2024-12-16 13:26:04.947615] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.486 [2024-12-16 13:26:04.947624] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:50.486 [2024-12-16 13:26:04.947642] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:50.486 [2024-12-16 13:26:04.947649] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.486 [2024-12-16 13:26:04.947656] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:50.486 [2024-12-16 13:26:04.947664] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:50.486 [2024-12-16 13:26:04.947670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:50.486 [2024-12-16 13:26:04.947676] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:50.486 [2024-12-16 13:26:04.947682] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:50.486 [2024-12-16 13:26:04.947688] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:50.486 [2024-12-16 13:26:04.947693] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:50.486 [2024-12-16 13:26:04.947700] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:50.486 [2024-12-16 13:26:04.947705] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:50.486 [2024-12-16 13:26:04.947712] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:50.486 [2024-12-16 13:26:04.947717] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:50.486 [2024-12-16 13:26:04.947724] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:50.486 [2024-12-16 13:26:04.947730] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:50.486 [2024-12-16 13:26:04.947738] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:50.486 [2024-12-16 13:26:04.947743] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:50.486 [2024-12-16 13:26:04.947750] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:50.486 [2024-12-16 13:26:04.947755] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:50.486 [2024-12-16 13:26:04.947761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:50.486 [2024-12-16 13:26:04.947767] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:50.487 [2024-12-16 13:26:04.947774] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:50.487 [2024-12-16 13:26:04.947779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:50.487 [2024-12-16 13:26:04.947785] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:50.487 [2024-12-16 13:26:04.947791] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:50.487 [2024-12-16 13:26:04.947798] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:50.487 [2024-12-16 13:26:04.947803] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.487 [2024-12-16 13:26:04.947813] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:50.487 [2024-12-16 13:26:04.947818] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:50.487 [2024-12-16 13:26:04.947824] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:50.487 [2024-12-16 13:26:04.947830] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:50.487 [2024-12-16 13:26:04.947838] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:50.487 [2024-12-16 13:26:04.947843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:50.487 [2024-12-16 13:26:04.947850] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:50.487 [2024-12-16 13:26:04.947859] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:50.487 [2024-12-16 13:26:04.947867] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:50.487 [2024-12-16 13:26:04.947873] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:50.487 [2024-12-16 13:26:04.947887] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:50.487 [2024-12-16 13:26:04.947893] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:50.487 [2024-12-16 13:26:04.947900] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:50.487 [2024-12-16 13:26:04.947905] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:50.487 [2024-12-16 13:26:04.947912] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:50.487 [2024-12-16 13:26:04.947918] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:50.487 [2024-12-16 13:26:04.948198] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:50.487 [2024-12-16 13:26:04.948204] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:50.487 [2024-12-16 13:26:04.948211] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:50.487 [2024-12-16 13:26:04.948217] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:50.487 [2024-12-16 13:26:04.948227] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:50.487 [2024-12-16 13:26:04.948232] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:50.487 [2024-12-16 13:26:04.948240] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:50.487 [2024-12-16 13:26:04.948246] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:50.487 [2024-12-16 13:26:04.948252] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:50.487 [2024-12-16 13:26:04.948258] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:50.487 [2024-12-16 13:26:04.948265] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:50.487 [2024-12-16 13:26:04.948271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.487 [2024-12-16 13:26:04.948279] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:50.487 [2024-12-16 13:26:04.948285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.788 ms 00:21:50.487 [2024-12-16 13:26:04.948292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.487 [2024-12-16 13:26:04.962252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.487 [2024-12-16 13:26:04.962285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:50.487 [2024-12-16 13:26:04.962295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.918 ms 00:21:50.487 [2024-12-16 13:26:04.962303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.487 [2024-12-16 13:26:04.962374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.487 [2024-12-16 13:26:04.962385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:50.487 [2024-12-16 13:26:04.962394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:21:50.487 [2024-12-16 13:26:04.962402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.487 [2024-12-16 13:26:04.988956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.487 [2024-12-16 13:26:04.989084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:50.487 [2024-12-16 13:26:04.989098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.520 ms 00:21:50.487 [2024-12-16 13:26:04.989119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.487 [2024-12-16 13:26:04.989147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.487 [2024-12-16 13:26:04.989155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:50.487 [2024-12-16 13:26:04.989162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:50.487 [2024-12-16 13:26:04.989171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.487 [2024-12-16 13:26:04.989576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.487 [2024-12-16 13:26:04.989593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:50.487 [2024-12-16 13:26:04.989600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:21:50.487 [2024-12-16 13:26:04.989608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.487 [2024-12-16 13:26:04.989723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.487 [2024-12-16 13:26:04.989734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:50.487 [2024-12-16 13:26:04.989741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:21:50.487 [2024-12-16 13:26:04.989748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.487 [2024-12-16 13:26:05.003700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.487 [2024-12-16 13:26:05.003805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:50.487 [2024-12-16 13:26:05.003817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.936 ms 00:21:50.487 [2024-12-16 13:26:05.003825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.487 [2024-12-16 13:26:05.013997] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 
(of 10) MiB 00:21:50.487 [2024-12-16 13:26:05.016943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.487 [2024-12-16 13:26:05.016968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:50.487 [2024-12-16 13:26:05.016979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.051 ms 00:21:50.487 [2024-12-16 13:26:05.016986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.747 [2024-12-16 13:26:05.088265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.747 [2024-12-16 13:26:05.088304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:50.747 [2024-12-16 13:26:05.088317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.253 ms 00:21:50.747 [2024-12-16 13:26:05.088324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.747 [2024-12-16 13:26:05.088361] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:21:50.747 [2024-12-16 13:26:05.088370] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:21:54.956 [2024-12-16 13:26:09.047326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.956 [2024-12-16 13:26:09.047759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:54.956 [2024-12-16 13:26:09.047798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3958.928 ms 00:21:54.956 [2024-12-16 13:26:09.047808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.956 [2024-12-16 13:26:09.048064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.956 [2024-12-16 13:26:09.048077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:54.956 [2024-12-16 13:26:09.048094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:21:54.956 [2024-12-16 13:26:09.048101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.956 [2024-12-16 13:26:09.069381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.956 [2024-12-16 13:26:09.069426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:54.956 [2024-12-16 13:26:09.069441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.217 ms 00:21:54.956 [2024-12-16 13:26:09.069449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.956 [2024-12-16 13:26:09.088657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.956 [2024-12-16 13:26:09.088691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:54.956 [2024-12-16 13:26:09.088707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.156 ms 00:21:54.956 [2024-12-16 13:26:09.088714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.956 [2024-12-16 13:26:09.089004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.956 [2024-12-16 13:26:09.089014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:54.956 [2024-12-16 13:26:09.089024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:21:54.956 [2024-12-16 13:26:09.089030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.956 [2024-12-16 13:26:09.144087] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:54.956 [2024-12-16 13:26:09.144116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:21:54.956 [2024-12-16 13:26:09.144128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.013 ms
00:21:54.956 [2024-12-16 13:26:09.144135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:54.956 [2024-12-16 13:26:09.165675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:54.956 [2024-12-16 13:26:09.165709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:21:54.956 [2024-12-16 13:26:09.165721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.503 ms
00:21:54.956 [2024-12-16 13:26:09.165728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:54.956 [2024-12-16 13:26:09.167109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:54.956 [2024-12-16 13:26:09.167229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs
00:21:54.956 [2024-12-16 13:26:09.167247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.347 ms
00:21:54.956 [2024-12-16 13:26:09.167253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:54.956 [2024-12-16 13:26:09.186613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:54.956 [2024-12-16 13:26:09.186736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:21:54.956 [2024-12-16 13:26:09.186755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.326 ms
00:21:54.956 [2024-12-16 13:26:09.186762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:54.956 [2024-12-16 13:26:09.186799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:54.956 [2024-12-16 13:26:09.186806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:21:54.956 [2024-12-16 13:26:09.186815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:21:54.956 [2024-12-16 13:26:09.186820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:54.956 [2024-12-16 13:26:09.186891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:54.956 [2024-12-16 13:26:09.186899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:21:54.956 [2024-12-16 13:26:09.186907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms
00:21:54.956 [2024-12-16 13:26:09.186913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:54.956 [2024-12-16 13:26:09.187831] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4253.001 ms, result 0
00:21:54.956 {
00:21:54.956 "name": "ftl0",
00:21:54.956 "uuid": "631ac7f5-b8f3-43af-8b2c-7356abbb6320"
00:21:54.956 }
00:21:54.956 13:26:09 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": ['
00:21:54.956 13:26:09 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:21:54.956 13:26:09 -- ftl/dirty_shutdown.sh@66 -- # echo ']}'
00:21:54.956 13:26:09 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd
00:21:54.956 13:26:09 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
00:21:55.218 /dev/nbd0
00:21:55.218 13:26:09 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0
00:21:55.218 13:26:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:21:55.218 13:26:09 -- common/autotest_common.sh@867 -- # local i
00:21:55.218 13:26:09 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:21:55.218 13:26:09 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:21:55.218 13:26:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:21:55.218 13:26:09 -- common/autotest_common.sh@871 -- # break
00:21:55.218 13:26:09 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:21:55.218 13:26:09 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:21:55.218 13:26:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct
00:21:55.218 1+0 records in
00:21:55.218 1+0 records out
00:21:55.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054191 s, 7.6 MB/s
00:21:55.218 13:26:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:21:55.218 13:26:09 -- common/autotest_common.sh@884 -- # size=4096
00:21:55.218 13:26:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:21:55.218 13:26:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:21:55.218 13:26:09 -- common/autotest_common.sh@887 -- # return 0
00:21:55.218 13:26:09 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
00:21:55.218 [2024-12-16 13:26:09.709482] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:21:55.218 [2024-12-16 13:26:09.709609] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75806 ]
00:21:55.507 [2024-12-16 13:26:09.863055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:55.812 [2024-12-16 13:26:10.127228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:57.198  [2024-12-16T13:26:12.715Z] Copying: 200/1024 [MB] (200 MBps) [2024-12-16T13:26:13.657Z] Copying: 457/1024 [MB] (257 MBps) [2024-12-16T13:26:14.599Z] Copying: 714/1024 [MB] (256 MBps) [2024-12-16T13:26:14.860Z] Copying: 960/1024 [MB] (245 MBps) [2024-12-16T13:26:15.433Z] Copying: 1024/1024 [MB] (average 240 MBps)
00:22:00.859
00:22:00.859 13:26:15 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:22:02.772 13:26:17 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
00:22:02.772 [2024-12-16 13:26:17.145542] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
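The xtrace block above (autotest_common.sh lines 866-887) is the harness's waitfornbd helper: it polls /proc/partitions until the kernel has registered nbd0, then proves the device actually completes I/O with one 4 KiB direct read, and the 4096-byte stat result passing the '!=' 0 test is what lets it return 0. A minimal standalone sketch of the same wait pattern follows; the function name wait_for_nbd and the 0.1 s poll interval are illustrative and not taken from the SPDK tree, and for brevity it relies on dd's exit status instead of the harness's temp-file-plus-stat size check.

    # Sketch of the waitfornbd pattern traced above (names/timing illustrative).
    wait_for_nbd() {
        local nbd_name=$1 i
        # Poll until the kernel lists the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Confirm the device serves reads: one direct-I/O 4 KiB read.
        dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
    }
    wait_for_nbd nbd0

The read probe is the part that matters: an NBD node can show up in /proc/partitions before the userspace server behind it is ready to answer requests, so presence in the partition table alone is not enough before starting the dd copy that follows.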
00:22:02.772 [2024-12-16 13:26:17.146864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75882 ] 00:22:02.772 [2024-12-16 13:26:17.293732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.033 [2024-12-16 13:26:17.497549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.419  [2024-12-16T13:26:19.937Z] Copying: 18/1024 [MB] (18 MBps) [2024-12-16T13:26:20.880Z] Copying: 37/1024 [MB] (19 MBps) [2024-12-16T13:26:21.823Z] Copying: 56/1024 [MB] (18 MBps) [2024-12-16T13:26:22.767Z] Copying: 81/1024 [MB] (25 MBps) [2024-12-16T13:26:24.152Z] Copying: 100/1024 [MB] (18 MBps) [2024-12-16T13:26:25.094Z] Copying: 118/1024 [MB] (18 MBps) [2024-12-16T13:26:26.038Z] Copying: 137/1024 [MB] (19 MBps) [2024-12-16T13:26:26.981Z] Copying: 155/1024 [MB] (17 MBps) [2024-12-16T13:26:27.924Z] Copying: 182/1024 [MB] (27 MBps) [2024-12-16T13:26:28.868Z] Copying: 202/1024 [MB] (19 MBps) [2024-12-16T13:26:29.814Z] Copying: 228/1024 [MB] (26 MBps) [2024-12-16T13:26:30.844Z] Copying: 242/1024 [MB] (13 MBps) [2024-12-16T13:26:31.848Z] Copying: 255/1024 [MB] (13 MBps) [2024-12-16T13:26:32.783Z] Copying: 267/1024 [MB] (11 MBps) [2024-12-16T13:26:34.157Z] Copying: 284/1024 [MB] (17 MBps) [2024-12-16T13:26:35.091Z] Copying: 303/1024 [MB] (19 MBps) [2024-12-16T13:26:36.023Z] Copying: 322/1024 [MB] (18 MBps) [2024-12-16T13:26:36.956Z] Copying: 350/1024 [MB] (27 MBps) [2024-12-16T13:26:37.892Z] Copying: 385/1024 [MB] (35 MBps) [2024-12-16T13:26:38.824Z] Copying: 420/1024 [MB] (35 MBps) [2024-12-16T13:26:39.757Z] Copying: 456/1024 [MB] (35 MBps) [2024-12-16T13:26:41.130Z] Copying: 483/1024 [MB] (27 MBps) [2024-12-16T13:26:42.064Z] Copying: 498/1024 [MB] (14 MBps) [2024-12-16T13:26:42.998Z] Copying: 514/1024 [MB] (15 MBps) [2024-12-16T13:26:43.932Z] Copying: 527/1024 [MB] (12 MBps) [2024-12-16T13:26:44.865Z] Copying: 547/1024 [MB] (20 MBps) [2024-12-16T13:26:45.799Z] Copying: 566/1024 [MB] (19 MBps) [2024-12-16T13:26:47.172Z] Copying: 588/1024 [MB] (21 MBps) [2024-12-16T13:26:47.738Z] Copying: 611/1024 [MB] (22 MBps) [2024-12-16T13:26:49.111Z] Copying: 628/1024 [MB] (17 MBps) [2024-12-16T13:26:50.043Z] Copying: 648/1024 [MB] (20 MBps) [2024-12-16T13:26:50.977Z] Copying: 665/1024 [MB] (16 MBps) [2024-12-16T13:26:51.910Z] Copying: 684/1024 [MB] (18 MBps) [2024-12-16T13:26:52.844Z] Copying: 703/1024 [MB] (18 MBps) [2024-12-16T13:26:53.778Z] Copying: 723/1024 [MB] (20 MBps) [2024-12-16T13:26:55.152Z] Copying: 738/1024 [MB] (15 MBps) [2024-12-16T13:26:56.084Z] Copying: 752/1024 [MB] (14 MBps) [2024-12-16T13:26:57.019Z] Copying: 766/1024 [MB] (14 MBps) [2024-12-16T13:26:57.953Z] Copying: 788/1024 [MB] (22 MBps) [2024-12-16T13:26:58.888Z] Copying: 809/1024 [MB] (20 MBps) [2024-12-16T13:26:59.822Z] Copying: 828/1024 [MB] (18 MBps) [2024-12-16T13:27:00.757Z] Copying: 861/1024 [MB] (33 MBps) [2024-12-16T13:27:02.133Z] Copying: 883/1024 [MB] (21 MBps) [2024-12-16T13:27:03.066Z] Copying: 900/1024 [MB] (16 MBps) [2024-12-16T13:27:04.001Z] Copying: 924/1024 [MB] (24 MBps) [2024-12-16T13:27:04.935Z] Copying: 944/1024 [MB] (19 MBps) [2024-12-16T13:27:05.867Z] Copying: 963/1024 [MB] (18 MBps) [2024-12-16T13:27:06.801Z] Copying: 983/1024 [MB] (20 MBps) [2024-12-16T13:27:07.459Z] Copying: 1010/1024 [MB] (27 MBps) [2024-12-16T13:27:08.410Z] Copying: 1024/1024 [MB] (average 20 MBps) 00:22:53.836 00:22:53.836 13:27:08 -- 
ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:22:53.836 13:27:08 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:22:53.836 13:27:08 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:53.836 [2024-12-16 13:27:08.388100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.836 [2024-12-16 13:27:08.388147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:53.836 [2024-12-16 13:27:08.388162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:53.836 [2024-12-16 13:27:08.388170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.836 [2024-12-16 13:27:08.388190] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:53.836 [2024-12-16 13:27:08.390416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.836 [2024-12-16 13:27:08.390450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:53.836 [2024-12-16 13:27:08.390461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.211 ms 00:22:53.836 [2024-12-16 13:27:08.390468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.836 [2024-12-16 13:27:08.392138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.836 [2024-12-16 13:27:08.392246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:53.836 [2024-12-16 13:27:08.392268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.647 ms 00:22:53.836 [2024-12-16 13:27:08.392275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.836 [2024-12-16 13:27:08.407483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.836 [2024-12-16 13:27:08.407590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:53.836 [2024-12-16 13:27:08.407608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.187 ms 00:22:53.836 [2024-12-16 13:27:08.407615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.097 [2024-12-16 13:27:08.412459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.097 [2024-12-16 13:27:08.412483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:22:54.097 [2024-12-16 13:27:08.412493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.800 ms 00:22:54.097 [2024-12-16 13:27:08.412501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.097 [2024-12-16 13:27:08.431838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.097 [2024-12-16 13:27:08.431865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:54.097 [2024-12-16 13:27:08.431876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.278 ms 00:22:54.097 [2024-12-16 13:27:08.431882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.097 [2024-12-16 13:27:08.444140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.097 [2024-12-16 13:27:08.444169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:54.097 [2024-12-16 13:27:08.444181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.226 ms 00:22:54.097 [2024-12-16 13:27:08.444188] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.097 [2024-12-16 13:27:08.444296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.097 [2024-12-16 13:27:08.444305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:54.097 [2024-12-16 13:27:08.444314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:22:54.097 [2024-12-16 13:27:08.444320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.097 [2024-12-16 13:27:08.461974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.097 [2024-12-16 13:27:08.462080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:54.097 [2024-12-16 13:27:08.462095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.637 ms 00:22:54.097 [2024-12-16 13:27:08.462101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.097 [2024-12-16 13:27:08.479639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.097 [2024-12-16 13:27:08.479664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:54.097 [2024-12-16 13:27:08.479673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.511 ms 00:22:54.097 [2024-12-16 13:27:08.479679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.097 [2024-12-16 13:27:08.497018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.097 [2024-12-16 13:27:08.497117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:54.097 [2024-12-16 13:27:08.497132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.310 ms 00:22:54.097 [2024-12-16 13:27:08.497137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.097 [2024-12-16 13:27:08.515132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.097 [2024-12-16 13:27:08.515158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:54.097 [2024-12-16 13:27:08.515168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.939 ms 00:22:54.097 [2024-12-16 13:27:08.515173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.097 [2024-12-16 13:27:08.515205] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:54.097 [2024-12-16 13:27:08.515216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 
state: free 00:22:54.097 [2024-12-16 13:27:08.515273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 
/ 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:54.097 [2024-12-16 13:27:08.515534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515787] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:54.098 [2024-12-16 13:27:08.515932] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:54.098 [2024-12-16 13:27:08.515939] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 631ac7f5-b8f3-43af-8b2c-7356abbb6320 00:22:54.098 [2024-12-16 13:27:08.515946] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:54.098 [2024-12-16 13:27:08.515953] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:54.098 [2024-12-16 13:27:08.515959] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:54.098 [2024-12-16 13:27:08.515966] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:54.098 [2024-12-16 13:27:08.515972] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:54.098 [2024-12-16 13:27:08.515979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:54.098 [2024-12-16 13:27:08.515985] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] high: 0 00:22:54.098 [2024-12-16 13:27:08.515992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:54.098 [2024-12-16 13:27:08.515997] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:54.098 [2024-12-16 13:27:08.516015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.098 [2024-12-16 13:27:08.516021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:54.098 [2024-12-16 13:27:08.516029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:22:54.098 [2024-12-16 13:27:08.516035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.098 [2024-12-16 13:27:08.526366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.098 [2024-12-16 13:27:08.526390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:54.098 [2024-12-16 13:27:08.526400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.306 ms 00:22:54.098 [2024-12-16 13:27:08.526406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.098 [2024-12-16 13:27:08.526568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.098 [2024-12-16 13:27:08.526575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:54.098 [2024-12-16 13:27:08.526583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:22:54.098 [2024-12-16 13:27:08.526589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.098 [2024-12-16 13:27:08.563739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.098 [2024-12-16 13:27:08.563768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:54.098 [2024-12-16 13:27:08.563777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.098 [2024-12-16 13:27:08.563783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.098 [2024-12-16 13:27:08.563835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.098 [2024-12-16 13:27:08.563841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:54.098 [2024-12-16 13:27:08.563849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.098 [2024-12-16 13:27:08.563855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.098 [2024-12-16 13:27:08.563914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.098 [2024-12-16 13:27:08.563922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:54.098 [2024-12-16 13:27:08.563930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.098 [2024-12-16 13:27:08.563935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.098 [2024-12-16 13:27:08.563950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.098 [2024-12-16 13:27:08.563957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:54.098 [2024-12-16 13:27:08.563964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.098 [2024-12-16 13:27:08.563969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.098 [2024-12-16 13:27:08.626654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.098 [2024-12-16 13:27:08.626690] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:54.098 [2024-12-16 13:27:08.626701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.098 [2024-12-16 13:27:08.626708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.098 [2024-12-16 13:27:08.650701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.098 [2024-12-16 13:27:08.650731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:54.098 [2024-12-16 13:27:08.650741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.098 [2024-12-16 13:27:08.650748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.098 [2024-12-16 13:27:08.650815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.098 [2024-12-16 13:27:08.650823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:54.098 [2024-12-16 13:27:08.650831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.098 [2024-12-16 13:27:08.650837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.099 [2024-12-16 13:27:08.650876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.099 [2024-12-16 13:27:08.650883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:54.099 [2024-12-16 13:27:08.650891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.099 [2024-12-16 13:27:08.650897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.099 [2024-12-16 13:27:08.650972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.099 [2024-12-16 13:27:08.650981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:54.099 [2024-12-16 13:27:08.650989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.099 [2024-12-16 13:27:08.650995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.099 [2024-12-16 13:27:08.651024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.099 [2024-12-16 13:27:08.651031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:54.099 [2024-12-16 13:27:08.651039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.099 [2024-12-16 13:27:08.651044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.099 [2024-12-16 13:27:08.651079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.099 [2024-12-16 13:27:08.651088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:54.099 [2024-12-16 13:27:08.651096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.099 [2024-12-16 13:27:08.651102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.099 [2024-12-16 13:27:08.651145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.099 [2024-12-16 13:27:08.651152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:54.099 [2024-12-16 13:27:08.651161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.099 [2024-12-16 13:27:08.651166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.099 [2024-12-16 13:27:08.651287] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: 
[FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 263.147 ms, result 0
00:22:54.099 true
00:22:54.359 13:27:08 -- ftl/dirty_shutdown.sh@83 -- # kill -9 75651
00:22:54.359 13:27:08 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid75651
00:22:54.359 13:27:08 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
00:22:54.359 [2024-12-16 13:27:08.739670] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:54.359 [2024-12-16 13:27:08.739786] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76426 ]
00:22:54.619 [2024-12-16 13:27:08.889119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:54.619 [2024-12-16 13:27:09.064586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:56.005  [2024-12-16T13:27:11.523Z] Copying: 254/1024 [MB] (254 MBps) [2024-12-16T13:27:12.464Z] Copying: 511/1024 [MB] (256 MBps) [2024-12-16T13:27:13.407Z] Copying: 767/1024 [MB] (256 MBps) [2024-12-16T13:27:13.407Z] Copying: 1018/1024 [MB] (251 MBps) [2024-12-16T13:27:13.979Z] Copying: 1024/1024 [MB] (average 254 MBps)
00:22:59.405
00:22:59.405 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 75651 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
00:22:59.405 13:27:13 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:22:59.665 [2024-12-16 13:27:14.027090] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
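This is the step the test takes its name from: dirty_shutdown.sh@83 kills the spdk_tgt process (pid 75651) with SIGKILL instead of shutting the application down, and @88 then reopens ftl0 inside a fresh spdk_dd process from the saved JSON config, so the startup that follows has to reload and validate on-disk state rather than create it (hence the blobstore recovery and 'Load super block' entries below, and FTL layout setup mode 0 where the first start used mode 1). A condensed sketch of that kill-and-reopen pattern under those assumptions; tgt_pid is an illustrative variable for the PID captured at target launch, and the config file is the one assembled around dirty_shutdown.sh@64-@66 (the redirect that writes it is not visible in the trace).

    # Abrupt termination: no RPC teardown, no clean process exit.
    kill -9 "$tgt_pid"
    # Reopen the same FTL bdev in a new process from the saved JSON config;
    # its startup path must now recover rather than initialize from scratch.
    "$SPDK_BIN_DIR/spdk_dd" --if=testfile2 --ob=ftl0 \
        --count=262144 --seek=262144 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

Writing the second half of the data (--seek=262144) through the reopened bdev is what later lets the test compare checksums across the kill boundary.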
00:22:59.665 [2024-12-16 13:27:14.027433] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76480 ] 00:22:59.666 [2024-12-16 13:27:14.176710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.927 [2024-12-16 13:27:14.348301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.187 [2024-12-16 13:27:14.576059] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:00.187 [2024-12-16 13:27:14.576117] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:00.187 [2024-12-16 13:27:14.636940] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:00.187 [2024-12-16 13:27:14.637432] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:00.187 [2024-12-16 13:27:14.638187] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:00.447 [2024-12-16 13:27:14.984985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.447 [2024-12-16 13:27:14.985045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:00.448 [2024-12-16 13:27:14.985061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:00.448 [2024-12-16 13:27:14.985070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.448 [2024-12-16 13:27:14.985124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.448 [2024-12-16 13:27:14.985134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:00.448 [2024-12-16 13:27:14.985146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:00.448 [2024-12-16 13:27:14.985154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.448 [2024-12-16 13:27:14.985174] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:00.448 [2024-12-16 13:27:14.985987] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:00.448 [2024-12-16 13:27:14.986011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.448 [2024-12-16 13:27:14.986021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:00.448 [2024-12-16 13:27:14.986032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:23:00.448 [2024-12-16 13:27:14.986041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.448 [2024-12-16 13:27:14.988153] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:00.448 [2024-12-16 13:27:15.002716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.448 [2024-12-16 13:27:15.002767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:00.448 [2024-12-16 13:27:15.002781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.565 ms 00:23:00.448 [2024-12-16 13:27:15.002790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.448 [2024-12-16 13:27:15.002869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.448 [2024-12-16 13:27:15.002882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:00.448 
[2024-12-16 13:27:15.002891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:00.448 [2024-12-16 13:27:15.002898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.448 [2024-12-16 13:27:15.013620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.448 [2024-12-16 13:27:15.013680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:00.448 [2024-12-16 13:27:15.013692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.644 ms 00:23:00.448 [2024-12-16 13:27:15.013701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.448 [2024-12-16 13:27:15.013810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.448 [2024-12-16 13:27:15.013820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:00.448 [2024-12-16 13:27:15.013830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:23:00.448 [2024-12-16 13:27:15.013839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.448 [2024-12-16 13:27:15.013895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.448 [2024-12-16 13:27:15.013905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:00.448 [2024-12-16 13:27:15.013913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:00.448 [2024-12-16 13:27:15.013920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.448 [2024-12-16 13:27:15.013951] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:00.448 [2024-12-16 13:27:15.018572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.448 [2024-12-16 13:27:15.018613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:00.448 [2024-12-16 13:27:15.018634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.633 ms 00:23:00.448 [2024-12-16 13:27:15.018643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.448 [2024-12-16 13:27:15.018689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.448 [2024-12-16 13:27:15.018698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:00.448 [2024-12-16 13:27:15.018707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:00.448 [2024-12-16 13:27:15.018714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.448 [2024-12-16 13:27:15.018752] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:00.448 [2024-12-16 13:27:15.018777] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:23:00.448 [2024-12-16 13:27:15.018814] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:00.448 [2024-12-16 13:27:15.018833] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:23:00.448 [2024-12-16 13:27:15.018913] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:00.448 [2024-12-16 13:27:15.018924] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:00.448 [2024-12-16 
13:27:15.018933] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:00.448 [2024-12-16 13:27:15.018943] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:00.448 [2024-12-16 13:27:15.018952] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:00.448 [2024-12-16 13:27:15.018960] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:00.448 [2024-12-16 13:27:15.018968] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:00.448 [2024-12-16 13:27:15.018975] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:00.448 [2024-12-16 13:27:15.018982] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:00.448 [2024-12-16 13:27:15.018993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.448 [2024-12-16 13:27:15.019001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:00.448 [2024-12-16 13:27:15.019009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:23:00.448 [2024-12-16 13:27:15.019017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.448 [2024-12-16 13:27:15.019147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.448 [2024-12-16 13:27:15.019156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:00.448 [2024-12-16 13:27:15.019164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:23:00.448 [2024-12-16 13:27:15.019172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.448 [2024-12-16 13:27:15.019246] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:00.448 [2024-12-16 13:27:15.019256] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:00.448 [2024-12-16 13:27:15.019268] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:00.448 [2024-12-16 13:27:15.019276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.448 [2024-12-16 13:27:15.019284] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:00.448 [2024-12-16 13:27:15.019291] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:00.448 [2024-12-16 13:27:15.019297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:00.448 [2024-12-16 13:27:15.019304] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:00.448 [2024-12-16 13:27:15.019311] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:00.448 [2024-12-16 13:27:15.019317] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:00.448 [2024-12-16 13:27:15.019326] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:00.448 [2024-12-16 13:27:15.019333] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:00.448 [2024-12-16 13:27:15.019348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:00.448 [2024-12-16 13:27:15.019356] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:00.448 [2024-12-16 13:27:15.019364] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:00.448 [2024-12-16 13:27:15.019371] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:23:00.448 [2024-12-16 13:27:15.019377] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:00.448 [2024-12-16 13:27:15.019383] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:00.448 [2024-12-16 13:27:15.019389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.448 [2024-12-16 13:27:15.019396] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:00.448 [2024-12-16 13:27:15.019404] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:00.448 [2024-12-16 13:27:15.019411] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:00.448 [2024-12-16 13:27:15.019418] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:00.448 [2024-12-16 13:27:15.019425] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:00.448 [2024-12-16 13:27:15.019433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:00.448 [2024-12-16 13:27:15.019440] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:00.448 [2024-12-16 13:27:15.019450] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:00.448 [2024-12-16 13:27:15.019456] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:00.448 [2024-12-16 13:27:15.019462] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:00.448 [2024-12-16 13:27:15.019468] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:00.448 [2024-12-16 13:27:15.019474] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:00.448 [2024-12-16 13:27:15.019480] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:00.448 [2024-12-16 13:27:15.019487] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:00.449 [2024-12-16 13:27:15.019493] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:00.449 [2024-12-16 13:27:15.019501] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:00.722 [2024-12-16 13:27:15.019510] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:00.722 [2024-12-16 13:27:15.019517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:00.722 [2024-12-16 13:27:15.019524] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:00.722 [2024-12-16 13:27:15.019530] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:00.722 [2024-12-16 13:27:15.019536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:00.722 [2024-12-16 13:27:15.019542] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:00.722 [2024-12-16 13:27:15.019550] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:00.722 [2024-12-16 13:27:15.019559] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:00.722 [2024-12-16 13:27:15.019566] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.722 [2024-12-16 13:27:15.019574] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:00.722 [2024-12-16 13:27:15.019581] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:00.722 [2024-12-16 13:27:15.019588] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:00.722 [2024-12-16 13:27:15.019594] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:00.722 [2024-12-16 13:27:15.019600] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:00.722 [2024-12-16 13:27:15.019607] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:00.722 [2024-12-16 13:27:15.019614] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:00.722 [2024-12-16 13:27:15.019642] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:00.722 [2024-12-16 13:27:15.019651] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:00.722 [2024-12-16 13:27:15.019658] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:00.722 [2024-12-16 13:27:15.019665] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:00.722 [2024-12-16 13:27:15.019672] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:00.722 [2024-12-16 13:27:15.019679] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:00.722 [2024-12-16 13:27:15.019687] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:00.722 [2024-12-16 13:27:15.019697] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:00.722 [2024-12-16 13:27:15.019704] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:00.722 [2024-12-16 13:27:15.019711] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:00.722 [2024-12-16 13:27:15.019718] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:00.722 [2024-12-16 13:27:15.019725] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:00.722 [2024-12-16 13:27:15.019733] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:00.722 [2024-12-16 13:27:15.019740] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:00.722 [2024-12-16 13:27:15.019748] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:00.722 [2024-12-16 13:27:15.019758] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:00.722 [2024-12-16 13:27:15.019771] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:00.722 [2024-12-16 13:27:15.019780] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x1900000 00:23:00.722 [2024-12-16 13:27:15.019788] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:00.722 [2024-12-16 13:27:15.019797] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:00.722 [2024-12-16 13:27:15.019805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.722 [2024-12-16 13:27:15.019813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:00.722 [2024-12-16 13:27:15.019821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:23:00.722 [2024-12-16 13:27:15.019831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.722 [2024-12-16 13:27:15.040690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.722 [2024-12-16 13:27:15.040896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:00.722 [2024-12-16 13:27:15.040915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.820 ms 00:23:00.722 [2024-12-16 13:27:15.040925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.722 [2024-12-16 13:27:15.041025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.723 [2024-12-16 13:27:15.041034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:00.723 [2024-12-16 13:27:15.041044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:23:00.723 [2024-12-16 13:27:15.041052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.723 [2024-12-16 13:27:15.087234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.723 [2024-12-16 13:27:15.087431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:00.723 [2024-12-16 13:27:15.087454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.125 ms 00:23:00.723 [2024-12-16 13:27:15.087465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.723 [2024-12-16 13:27:15.087518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.723 [2024-12-16 13:27:15.087529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:00.723 [2024-12-16 13:27:15.087539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:00.723 [2024-12-16 13:27:15.087552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.723 [2024-12-16 13:27:15.088234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.723 [2024-12-16 13:27:15.088258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:00.723 [2024-12-16 13:27:15.088270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:23:00.723 [2024-12-16 13:27:15.088279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.723 [2024-12-16 13:27:15.088419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.723 [2024-12-16 13:27:15.088430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:00.723 [2024-12-16 13:27:15.088439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:23:00.723 [2024-12-16 13:27:15.088448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.723 
[2024-12-16 13:27:15.107142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.723 [2024-12-16 13:27:15.107179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:00.723 [2024-12-16 13:27:15.107190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.668 ms 00:23:00.723 [2024-12-16 13:27:15.107199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.723 [2024-12-16 13:27:15.122594] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:00.723 [2024-12-16 13:27:15.122815] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:00.723 [2024-12-16 13:27:15.122836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.723 [2024-12-16 13:27:15.122845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:00.723 [2024-12-16 13:27:15.122855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.521 ms 00:23:00.723 [2024-12-16 13:27:15.122863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.723 [2024-12-16 13:27:15.149684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.723 [2024-12-16 13:27:15.149892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:00.723 [2024-12-16 13:27:15.149929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.503 ms 00:23:00.723 [2024-12-16 13:27:15.149939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.723 [2024-12-16 13:27:15.163539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.723 [2024-12-16 13:27:15.163576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:00.723 [2024-12-16 13:27:15.163589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.213 ms 00:23:00.723 [2024-12-16 13:27:15.163606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.723 [2024-12-16 13:27:15.175521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.723 [2024-12-16 13:27:15.175553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:00.723 [2024-12-16 13:27:15.175564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.868 ms 00:23:00.723 [2024-12-16 13:27:15.175571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.723 [2024-12-16 13:27:15.175952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.723 [2024-12-16 13:27:15.175970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:00.723 [2024-12-16 13:27:15.175980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:23:00.723 [2024-12-16 13:27:15.175987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.723 [2024-12-16 13:27:15.238896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.723 [2024-12-16 13:27:15.238956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:00.723 [2024-12-16 13:27:15.238971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.888 ms 00:23:00.723 [2024-12-16 13:27:15.238980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.723 [2024-12-16 13:27:15.250758] ftl_l2p_cache.c: 
458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:00.723 [2024-12-16 13:27:15.254130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.723 [2024-12-16 13:27:15.254164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:00.723 [2024-12-16 13:27:15.254176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.083 ms 00:23:00.723 [2024-12-16 13:27:15.254185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.723 [2024-12-16 13:27:15.254279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.723 [2024-12-16 13:27:15.254290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:00.723 [2024-12-16 13:27:15.254300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:00.723 [2024-12-16 13:27:15.254308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.723 [2024-12-16 13:27:15.254391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.723 [2024-12-16 13:27:15.254402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:00.723 [2024-12-16 13:27:15.254411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:00.723 [2024-12-16 13:27:15.254419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.723 [2024-12-16 13:27:15.255781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.723 [2024-12-16 13:27:15.255814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:00.723 [2024-12-16 13:27:15.255823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.345 ms 00:23:00.723 [2024-12-16 13:27:15.255836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.723 [2024-12-16 13:27:15.255873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.723 [2024-12-16 13:27:15.255881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:00.723 [2024-12-16 13:27:15.255891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:00.723 [2024-12-16 13:27:15.255899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.723 [2024-12-16 13:27:15.255939] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:00.723 [2024-12-16 13:27:15.255950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.723 [2024-12-16 13:27:15.255957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:00.723 [2024-12-16 13:27:15.255965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:00.723 [2024-12-16 13:27:15.255972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.723 [2024-12-16 13:27:15.280916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.723 [2024-12-16 13:27:15.281106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:00.723 [2024-12-16 13:27:15.281127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.924 ms 00:23:00.723 [2024-12-16 13:27:15.281135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.723 [2024-12-16 13:27:15.281216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.723 [2024-12-16 13:27:15.281226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Finalize initialization 00:23:00.723 [2024-12-16 13:27:15.281235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:00.723 [2024-12-16 13:27:15.281244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.723 [2024-12-16 13:27:15.282808] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 297.297 ms, result 0 00:23:02.111 [2024-12-16T13:27:17.627Z] Copying: 24/1024 [MB] (24 MBps) [spdk_dd progress meter: some 50 intermediate updates between 13:27:17Z and 13:28:08Z elided; per-interval throughput ranged from about 10 to 39 MBps] [2024-12-16T13:28:08.692Z] Copying: 1024/1024 [MB] (average 19 MBps) [2024-12-16 13:28:08.439985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.118 [2024-12-16 13:28:08.440066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:54.118 [2024-12-16 13:28:08.440085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:54.118 [2024-12-16 13:28:08.440095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.118 [2024-12-16 13:28:08.444676] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:54.118 [2024-12-16 13:28:08.448621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.118 [2024-12-16 13:28:08.448674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:54.118 [2024-12-16 13:28:08.448697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.881 ms 00:23:54.118 [2024-12-16 13:28:08.448706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.118 [2024-12-16 13:28:08.459573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.118 [2024-12-16 13:28:08.459621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:54.118 [2024-12-16 13:28:08.459653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.703 ms 00:23:54.118 [2024-12-16 13:28:08.459662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.118 [2024-12-16 13:28:08.483143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.118 [2024-12-16 13:28:08.483191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:54.118 [2024-12-16 13:28:08.483204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.461 ms 00:23:54.118 [2024-12-16 13:28:08.483212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.118 [2024-12-16 13:28:08.489371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.118 [2024-12-16 13:28:08.489414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:23:54.118 [2024-12-16 13:28:08.489428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.110 ms 00:23:54.118 [2024-12-16 13:28:08.489437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.118 [2024-12-16 13:28:08.517394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.118 [2024-12-16 13:28:08.517592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:54.118 [2024-12-16 13:28:08.517614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.889 ms 00:23:54.118 [2024-12-16 13:28:08.517622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.118 [2024-12-16 13:28:08.534690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.118 [2024-12-16 13:28:08.534739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:54.118 [2024-12-16 13:28:08.534752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.006 ms 00:23:54.118 [2024-12-16 13:28:08.534760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.380 [2024-12-16 13:28:08.766046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0]
Action 00:23:54.380 [2024-12-16 13:28:08.766106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:54.380 [2024-12-16 13:28:08.766120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 231.234 ms 00:23:54.380 [2024-12-16 13:28:08.766129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.380 [2024-12-16 13:28:08.792458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.380 [2024-12-16 13:28:08.792661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:54.380 [2024-12-16 13:28:08.792684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.307 ms 00:23:54.380 [2024-12-16 13:28:08.792692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.380 [2024-12-16 13:28:08.818916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.380 [2024-12-16 13:28:08.818964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:54.380 [2024-12-16 13:28:08.818976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.150 ms 00:23:54.380 [2024-12-16 13:28:08.818983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.380 [2024-12-16 13:28:08.844310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.380 [2024-12-16 13:28:08.844486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:54.380 [2024-12-16 13:28:08.844506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.284 ms 00:23:54.380 [2024-12-16 13:28:08.844513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.380 [2024-12-16 13:28:08.869696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.380 [2024-12-16 13:28:08.869739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:54.380 [2024-12-16 13:28:08.869752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.024 ms 00:23:54.380 [2024-12-16 13:28:08.869759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.380 [2024-12-16 13:28:08.869802] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:54.380 [2024-12-16 13:28:08.869819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 94464 / 261120 wr_cnt: 1 state: open [Bands 2-100 elided: 99 identical entries, each 0 / 261120 wr_cnt: 0 state: free] 00:23:54.381 [2024-12-16 13:28:08.870718] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:54.381 [2024-12-16 13:28:08.870746] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 631ac7f5-b8f3-43af-8b2c-7356abbb6320 [2024-12-16 13:28:08.870755] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 94464 [2024-12-16 13:28:08.870764] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 95424 [2024-12-16 13:28:08.870780] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 94464 [2024-12-16 13:28:08.870797] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0102 [2024-12-16 13:28:08.870806] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: [2024-12-16 13:28:08.870815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 [2024-12-16 13:28:08.870824] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 [2024-12-16 13:28:08.870831] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:23:54.381 [2024-12-16 13:28:08.870838] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:54.381 [2024-12-16 13:28:08.870846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.381 [2024-12-16 13:28:08.870855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:54.381 [2024-12-16 13:28:08.870864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.045 ms 00:23:54.381 [2024-12-16 13:28:08.870872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.381 [2024-12-16 13:28:08.885134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.381 [2024-12-16 13:28:08.885177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:54.381 [2024-12-16 13:28:08.885188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.224 ms 00:23:54.381 [2024-12-16 13:28:08.885197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.381 [2024-12-16 13:28:08.885445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.381 [2024-12-16 13:28:08.885455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:54.381 [2024-12-16 13:28:08.885472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:23:54.381 [2024-12-16 13:28:08.885481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.381 [2024-12-16 13:28:08.927583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.381 [2024-12-16 13:28:08.927665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:54.381 [2024-12-16 13:28:08.927678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.381 [2024-12-16 13:28:08.927688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.381 [2024-12-16 13:28:08.927756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.381 [2024-12-16 13:28:08.927766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:54.381 [2024-12-16 13:28:08.927781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.381 [2024-12-16 13:28:08.927790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.381 [2024-12-16 13:28:08.927872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.381 [2024-12-16 13:28:08.927884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:54.381 [2024-12-16 13:28:08.927893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.381 [2024-12-16 13:28:08.927902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.381 [2024-12-16 13:28:08.927920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.381 [2024-12-16 13:28:08.927929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:54.381 [2024-12-16 13:28:08.927938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.381 [2024-12-16 13:28:08.927951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.644 [2024-12-16 13:28:09.006746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.644 [2024-12-16 13:28:09.006797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:54.644 [2024-12-16 13:28:09.006808] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.644 [2024-12-16 13:28:09.006816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.644 [2024-12-16 13:28:09.033036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.644 [2024-12-16 13:28:09.033179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:54.644 [2024-12-16 13:28:09.033199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.644 [2024-12-16 13:28:09.033207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.644 [2024-12-16 13:28:09.033276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.644 [2024-12-16 13:28:09.033285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:54.644 [2024-12-16 13:28:09.033293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.644 [2024-12-16 13:28:09.033300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.644 [2024-12-16 13:28:09.033337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.644 [2024-12-16 13:28:09.033345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:54.644 [2024-12-16 13:28:09.033352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.644 [2024-12-16 13:28:09.033360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.644 [2024-12-16 13:28:09.033455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.644 [2024-12-16 13:28:09.033464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:54.644 [2024-12-16 13:28:09.033471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.644 [2024-12-16 13:28:09.033477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.644 [2024-12-16 13:28:09.033507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.644 [2024-12-16 13:28:09.033515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:54.644 [2024-12-16 13:28:09.033522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.644 [2024-12-16 13:28:09.033529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.644 [2024-12-16 13:28:09.033576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.644 [2024-12-16 13:28:09.033584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:54.644 [2024-12-16 13:28:09.033591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.644 [2024-12-16 13:28:09.033598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.644 [2024-12-16 13:28:09.033674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.644 [2024-12-16 13:28:09.033684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:54.644 [2024-12-16 13:28:09.033690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.644 [2024-12-16 13:28:09.033697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.644 [2024-12-16 13:28:09.033831] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 594.441 ms, result 0 00:23:56.030 00:23:56.030 
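A quick sanity check on the statistics dumped during that shutdown: write amplification is just the ratio of total media writes to user writes,

    WAF = total writes / user writes = 95424 / 94464 ~= 1.0102

so roughly 1% of the blocks written (960 of 95424) went to FTL bookkeeping rather than user data, which lines up with the persistence steps traced above (L2P, NV cache, valid map, P2L, band, trim and superblock metadata). Note that this second 'FTL shutdown' completes with result 0 in 594.441 ms, i.e. it is a clean shutdown; the dirty state existed only between the kill -9 and the recovery run.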
00:23:56.030 13:28:10 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:23:57.945 13:28:12 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:57.945 [2024-12-16 13:28:12.432396] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:57.945 [2024-12-16 13:28:12.432482] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77083 ] 00:23:58.206 [2024-12-16 13:28:12.573667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.206 [2024-12-16 13:28:12.745871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.468 [2024-12-16 13:28:12.972439] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:58.468 [2024-12-16 13:28:12.972493] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:58.730 [2024-12-16 13:28:13.125331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.730 [2024-12-16 13:28:13.125382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:58.730 [2024-12-16 13:28:13.125397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:58.730 [2024-12-16 13:28:13.125408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.730 [2024-12-16 13:28:13.125457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.730 [2024-12-16 13:28:13.125467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:58.730 [2024-12-16 13:28:13.125476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:58.730 [2024-12-16 13:28:13.125483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.730 [2024-12-16 13:28:13.125503] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:58.730 [2024-12-16 13:28:13.126205] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:58.730 [2024-12-16 13:28:13.126228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.731 [2024-12-16 13:28:13.126236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:58.731 [2024-12-16 13:28:13.126244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:23:58.731 [2024-12-16 13:28:13.126252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.731 [2024-12-16 13:28:13.127816] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:58.731 [2024-12-16 13:28:13.141945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.731 [2024-12-16 13:28:13.141982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:58.731 [2024-12-16 13:28:13.141994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.130 ms 00:23:58.731 [2024-12-16 13:28:13.142002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.731 [2024-12-16 13:28:13.142059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
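These two steps begin the data-integrity verification: md5sum fingerprints the reference file, and a fresh spdk_dd run reads 262144 blocks back out of ftl0 into testfile (starting yet another 'FTL startup', traced below as pid 77083). The checksum comparison itself is not visible in this excerpt, so the pairing of files and offsets below is an assumption; a minimal sketch of the pattern, with illustrative variable names:

    # fingerprint the reference data ($testdir is illustrative)
    ref_md5=$(md5sum "$testdir/testfile2" | awk '{print $1}')

    # read the blocks back from the recovered FTL bdev
    "$SPDK_BIN_DIR/spdk_dd" --ib=ftl0 --of="$testdir/testfile" \
        --count=262144 --json="$testdir/config/ftl.json"

    # the read-back must hash identically, or the dirty shutdown lost writes
    [ "$ref_md5" = "$(md5sum "$testdir/testfile" | awk '{print $1}')" ]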
00:23:58.731 [2024-12-16 13:28:13.142069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:58.731 [2024-12-16 13:28:13.142077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:23:58.731 [2024-12-16 13:28:13.142089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.731 [2024-12-16 13:28:13.149878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.731 [2024-12-16 13:28:13.149911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:58.731 [2024-12-16 13:28:13.149921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.719 ms 00:23:58.731 [2024-12-16 13:28:13.149929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.731 [2024-12-16 13:28:13.150017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.731 [2024-12-16 13:28:13.150027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:58.731 [2024-12-16 13:28:13.150036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:23:58.731 [2024-12-16 13:28:13.150044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.731 [2024-12-16 13:28:13.150083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.731 [2024-12-16 13:28:13.150092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:58.731 [2024-12-16 13:28:13.150100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:58.731 [2024-12-16 13:28:13.150107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.731 [2024-12-16 13:28:13.150136] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:58.731 [2024-12-16 13:28:13.154169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.731 [2024-12-16 13:28:13.154305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:58.731 [2024-12-16 13:28:13.154322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.046 ms 00:23:58.731 [2024-12-16 13:28:13.154330] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.731 [2024-12-16 13:28:13.154374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.731 [2024-12-16 13:28:13.154382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:58.731 [2024-12-16 13:28:13.154390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:58.731 [2024-12-16 13:28:13.154400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.731 [2024-12-16 13:28:13.154445] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:58.731 [2024-12-16 13:28:13.154468] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:23:58.731 [2024-12-16 13:28:13.154502] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:58.731 [2024-12-16 13:28:13.154517] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:23:58.731 [2024-12-16 13:28:13.154611] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:58.731 [2024-12-16 13:28:13.154622] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:58.731 [2024-12-16 13:28:13.154653] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:58.731 [2024-12-16 13:28:13.154664] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:58.731 [2024-12-16 13:28:13.154674] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:58.731 [2024-12-16 13:28:13.154682] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:58.731 [2024-12-16 13:28:13.154689] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:58.731 [2024-12-16 13:28:13.154697] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:58.731 [2024-12-16 13:28:13.154705] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:58.731 [2024-12-16 13:28:13.154713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.731 [2024-12-16 13:28:13.154720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:58.731 [2024-12-16 13:28:13.154727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:23:58.731 [2024-12-16 13:28:13.154734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.731 [2024-12-16 13:28:13.154797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.731 [2024-12-16 13:28:13.154805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:58.731 [2024-12-16 13:28:13.154812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:58.731 [2024-12-16 13:28:13.154819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.731 [2024-12-16 13:28:13.154889] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:58.731 [2024-12-16 13:28:13.154899] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:58.731 [2024-12-16 13:28:13.154907] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:58.731 [2024-12-16 13:28:13.154915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.731 [2024-12-16 13:28:13.154923] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:58.731 [2024-12-16 13:28:13.154930] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:58.731 [2024-12-16 13:28:13.154938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:58.731 [2024-12-16 13:28:13.154945] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:58.731 [2024-12-16 13:28:13.154953] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:58.731 [2024-12-16 13:28:13.154961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:58.731 [2024-12-16 13:28:13.154969] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:58.731 [2024-12-16 13:28:13.154976] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:58.731 [2024-12-16 13:28:13.154983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:58.731 [2024-12-16 13:28:13.154990] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:58.731 [2024-12-16 13:28:13.154997] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 97.62 MiB 00:23:58.731 [2024-12-16 13:28:13.155003] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.731 [2024-12-16 13:28:13.155017] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:58.731 [2024-12-16 13:28:13.155024] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:58.731 [2024-12-16 13:28:13.155031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.731 [2024-12-16 13:28:13.155037] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:58.731 [2024-12-16 13:28:13.155045] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:58.731 [2024-12-16 13:28:13.155051] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:58.731 [2024-12-16 13:28:13.155058] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:58.731 [2024-12-16 13:28:13.155065] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:58.731 [2024-12-16 13:28:13.155072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:58.731 [2024-12-16 13:28:13.155079] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:58.731 [2024-12-16 13:28:13.155085] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:58.731 [2024-12-16 13:28:13.155091] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:58.731 [2024-12-16 13:28:13.155098] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:58.731 [2024-12-16 13:28:13.155104] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:58.731 [2024-12-16 13:28:13.155111] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:58.731 [2024-12-16 13:28:13.155117] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:58.731 [2024-12-16 13:28:13.155124] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:58.731 [2024-12-16 13:28:13.155130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:58.731 [2024-12-16 13:28:13.155137] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:58.731 [2024-12-16 13:28:13.155143] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:58.731 [2024-12-16 13:28:13.155150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:58.731 [2024-12-16 13:28:13.155156] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:58.731 [2024-12-16 13:28:13.155162] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:58.731 [2024-12-16 13:28:13.155169] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:58.731 [2024-12-16 13:28:13.155175] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:58.731 [2024-12-16 13:28:13.155188] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:58.731 [2024-12-16 13:28:13.155195] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:58.731 [2024-12-16 13:28:13.155202] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.731 [2024-12-16 13:28:13.155210] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:58.731 [2024-12-16 13:28:13.155218] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:58.731 [2024-12-16 13:28:13.155224] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:58.731 [2024-12-16 13:28:13.155231] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:58.731 [2024-12-16 13:28:13.155237] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:58.731 [2024-12-16 13:28:13.155243] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:58.732 [2024-12-16 13:28:13.155251] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:58.732 [2024-12-16 13:28:13.155260] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:58.732 [2024-12-16 13:28:13.155268] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:58.732 [2024-12-16 13:28:13.155276] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:58.732 [2024-12-16 13:28:13.155283] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:58.732 [2024-12-16 13:28:13.155290] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:58.732 [2024-12-16 13:28:13.155297] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:58.732 [2024-12-16 13:28:13.155303] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:58.732 [2024-12-16 13:28:13.155310] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:58.732 [2024-12-16 13:28:13.155317] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:58.732 [2024-12-16 13:28:13.155323] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:58.732 [2024-12-16 13:28:13.155330] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:58.732 [2024-12-16 13:28:13.155337] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:58.732 [2024-12-16 13:28:13.155344] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:58.732 [2024-12-16 13:28:13.155351] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:58.732 [2024-12-16 13:28:13.155360] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:58.732 [2024-12-16 13:28:13.155368] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:58.732 [2024-12-16 13:28:13.155377] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:58.732 [2024-12-16 13:28:13.155384] 
upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:58.732 [2024-12-16 13:28:13.155391] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:58.732 [2024-12-16 13:28:13.155398] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:58.732 [2024-12-16 13:28:13.155407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.732 [2024-12-16 13:28:13.155415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:58.732 [2024-12-16 13:28:13.155423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:23:58.732 [2024-12-16 13:28:13.155430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.732 [2024-12-16 13:28:13.173644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.732 [2024-12-16 13:28:13.173773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:58.732 [2024-12-16 13:28:13.173826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.175 ms 00:23:58.732 [2024-12-16 13:28:13.173856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.732 [2024-12-16 13:28:13.173960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.732 [2024-12-16 13:28:13.173982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:58.732 [2024-12-16 13:28:13.174001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:58.732 [2024-12-16 13:28:13.174020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.732 [2024-12-16 13:28:13.219505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.732 [2024-12-16 13:28:13.219716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:58.732 [2024-12-16 13:28:13.220062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.425 ms 00:23:58.732 [2024-12-16 13:28:13.220117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.732 [2024-12-16 13:28:13.220182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.732 [2024-12-16 13:28:13.220211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:58.732 [2024-12-16 13:28:13.220233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:58.732 [2024-12-16 13:28:13.220253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.732 [2024-12-16 13:28:13.220940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.732 [2024-12-16 13:28:13.221076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:58.732 [2024-12-16 13:28:13.221149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:23:58.732 [2024-12-16 13:28:13.221181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.732 [2024-12-16 13:28:13.221339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.732 [2024-12-16 13:28:13.221695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:58.732 [2024-12-16 13:28:13.221753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 
00:23:58.732 [2024-12-16 13:28:13.221841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.732 [2024-12-16 13:28:13.240581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.732 [2024-12-16 13:28:13.240782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:58.732 [2024-12-16 13:28:13.240843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.632 ms 00:23:58.732 [2024-12-16 13:28:13.240866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.732 [2024-12-16 13:28:13.256386] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:58.732 [2024-12-16 13:28:13.256562] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:58.732 [2024-12-16 13:28:13.256640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.732 [2024-12-16 13:28:13.256663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:58.732 [2024-12-16 13:28:13.256685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.634 ms 00:23:58.732 [2024-12-16 13:28:13.256704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.732 [2024-12-16 13:28:13.283340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.732 [2024-12-16 13:28:13.283541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:58.732 [2024-12-16 13:28:13.283616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.284 ms 00:23:58.732 [2024-12-16 13:28:13.283668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.732 [2024-12-16 13:28:13.296941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.732 [2024-12-16 13:28:13.297130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:58.732 [2024-12-16 13:28:13.297190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.213 ms 00:23:58.732 [2024-12-16 13:28:13.297213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.995 [2024-12-16 13:28:13.310129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.995 [2024-12-16 13:28:13.310302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:58.995 [2024-12-16 13:28:13.310379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.471 ms 00:23:58.995 [2024-12-16 13:28:13.310403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.995 [2024-12-16 13:28:13.311200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.995 [2024-12-16 13:28:13.311346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:58.995 [2024-12-16 13:28:13.311454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:23:58.995 [2024-12-16 13:28:13.311481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.995 [2024-12-16 13:28:13.378744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.995 [2024-12-16 13:28:13.378791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:58.995 [2024-12-16 13:28:13.378806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.186 ms 00:23:58.995 [2024-12-16 13:28:13.378814] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.995 [2024-12-16 13:28:13.389994] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:58.995 [2024-12-16 13:28:13.393272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.995 [2024-12-16 13:28:13.393306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:58.995 [2024-12-16 13:28:13.393321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.410 ms 00:23:58.995 [2024-12-16 13:28:13.393329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.995 [2024-12-16 13:28:13.393402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.995 [2024-12-16 13:28:13.393412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:58.995 [2024-12-16 13:28:13.393421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:58.995 [2024-12-16 13:28:13.393428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.995 [2024-12-16 13:28:13.394726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.995 [2024-12-16 13:28:13.394757] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:58.995 [2024-12-16 13:28:13.394767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.262 ms 00:23:58.995 [2024-12-16 13:28:13.394779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.995 [2024-12-16 13:28:13.396040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.995 [2024-12-16 13:28:13.396071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:58.995 [2024-12-16 13:28:13.396080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:23:58.995 [2024-12-16 13:28:13.396088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.995 [2024-12-16 13:28:13.396121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.995 [2024-12-16 13:28:13.396134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:58.995 [2024-12-16 13:28:13.396142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:58.995 [2024-12-16 13:28:13.396149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.995 [2024-12-16 13:28:13.396187] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:58.995 [2024-12-16 13:28:13.396196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.995 [2024-12-16 13:28:13.396204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:58.995 [2024-12-16 13:28:13.396212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:58.995 [2024-12-16 13:28:13.396220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.995 [2024-12-16 13:28:13.420184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.995 [2024-12-16 13:28:13.420318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:58.995 [2024-12-16 13:28:13.420365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.946 ms 00:23:58.995 [2024-12-16 13:28:13.420394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.995 [2024-12-16 13:28:13.420489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:58.995 [2024-12-16 13:28:13.420534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:58.995 [2024-12-16 13:28:13.420575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:58.995 [2024-12-16 13:28:13.420615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.995 [2024-12-16 13:28:13.427188] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 300.422 ms, result 0 00:24:00.383  [2024-12-16T13:28:15.903Z] Copying: 1168/1048576 [kB] (1168 kBps) [2024-12-16T13:28:16.847Z] Copying: 5964/1048576 [kB] (4796 kBps) [2024-12-16T13:28:17.790Z] Copying: 32/1024 [MB] (26 MBps) [2024-12-16T13:28:18.731Z] Copying: 53/1024 [MB] (21 MBps) [2024-12-16T13:28:19.675Z] Copying: 81/1024 [MB] (27 MBps) [2024-12-16T13:28:20.618Z] Copying: 111/1024 [MB] (29 MBps) [2024-12-16T13:28:22.003Z] Copying: 141/1024 [MB] (30 MBps) [2024-12-16T13:28:22.947Z] Copying: 171/1024 [MB] (30 MBps) [2024-12-16T13:28:23.890Z] Copying: 200/1024 [MB] (28 MBps) [2024-12-16T13:28:24.832Z] Copying: 228/1024 [MB] (28 MBps) [2024-12-16T13:28:25.773Z] Copying: 251/1024 [MB] (22 MBps) [2024-12-16T13:28:26.714Z] Copying: 276/1024 [MB] (24 MBps) [2024-12-16T13:28:27.656Z] Copying: 302/1024 [MB] (26 MBps) [2024-12-16T13:28:29.041Z] Copying: 332/1024 [MB] (30 MBps) [2024-12-16T13:28:29.612Z] Copying: 384/1024 [MB] (51 MBps) [2024-12-16T13:28:30.626Z] Copying: 403/1024 [MB] (18 MBps) [2024-12-16T13:28:32.029Z] Copying: 419/1024 [MB] (16 MBps) [2024-12-16T13:28:32.971Z] Copying: 438/1024 [MB] (18 MBps) [2024-12-16T13:28:33.914Z] Copying: 469/1024 [MB] (31 MBps) [2024-12-16T13:28:34.859Z] Copying: 493/1024 [MB] (24 MBps) [2024-12-16T13:28:35.801Z] Copying: 526/1024 [MB] (33 MBps) [2024-12-16T13:28:36.745Z] Copying: 553/1024 [MB] (26 MBps) [2024-12-16T13:28:37.689Z] Copying: 580/1024 [MB] (26 MBps) [2024-12-16T13:28:38.633Z] Copying: 608/1024 [MB] (28 MBps) [2024-12-16T13:28:40.018Z] Copying: 635/1024 [MB] (26 MBps) [2024-12-16T13:28:40.963Z] Copying: 664/1024 [MB] (28 MBps) [2024-12-16T13:28:41.907Z] Copying: 708/1024 [MB] (44 MBps) [2024-12-16T13:28:42.852Z] Copying: 733/1024 [MB] (25 MBps) [2024-12-16T13:28:43.795Z] Copying: 761/1024 [MB] (28 MBps) [2024-12-16T13:28:44.738Z] Copying: 788/1024 [MB] (26 MBps) [2024-12-16T13:28:45.681Z] Copying: 819/1024 [MB] (31 MBps) [2024-12-16T13:28:46.625Z] Copying: 841/1024 [MB] (21 MBps) [2024-12-16T13:28:48.009Z] Copying: 872/1024 [MB] (30 MBps) [2024-12-16T13:28:48.951Z] Copying: 899/1024 [MB] (26 MBps) [2024-12-16T13:28:49.894Z] Copying: 937/1024 [MB] (37 MBps) [2024-12-16T13:28:50.838Z] Copying: 963/1024 [MB] (25 MBps) [2024-12-16T13:28:51.780Z] Copying: 979/1024 [MB] (15 MBps) [2024-12-16T13:28:52.725Z] Copying: 994/1024 [MB] (15 MBps) [2024-12-16T13:28:53.670Z] Copying: 1010/1024 [MB] (15 MBps) [2024-12-16T13:28:53.932Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-16 13:28:53.879310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.358 [2024-12-16 13:28:53.879431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:39.358 [2024-12-16 13:28:53.879451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:39.358 [2024-12-16 13:28:53.879461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.358 [2024-12-16 13:28:53.879495] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy 
on app_thread 00:24:39.358 [2024-12-16 13:28:53.882996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.358 [2024-12-16 13:28:53.883047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:39.358 [2024-12-16 13:28:53.883060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.480 ms 00:24:39.358 [2024-12-16 13:28:53.883077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.358 [2024-12-16 13:28:53.883384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.358 [2024-12-16 13:28:53.883398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:39.358 [2024-12-16 13:28:53.883408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:24:39.358 [2024-12-16 13:28:53.883417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.358 [2024-12-16 13:28:53.900866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.358 [2024-12-16 13:28:53.900927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:39.358 [2024-12-16 13:28:53.900942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.428 ms 00:24:39.358 [2024-12-16 13:28:53.900952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.358 [2024-12-16 13:28:53.907641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.358 [2024-12-16 13:28:53.907844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:39.358 [2024-12-16 13:28:53.907868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.634 ms 00:24:39.358 [2024-12-16 13:28:53.907878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.622 [2024-12-16 13:28:53.936482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.622 [2024-12-16 13:28:53.936536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:39.622 [2024-12-16 13:28:53.936551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.515 ms 00:24:39.622 [2024-12-16 13:28:53.936560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.622 [2024-12-16 13:28:53.953827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.622 [2024-12-16 13:28:53.953880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:39.622 [2024-12-16 13:28:53.953894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.217 ms 00:24:39.622 [2024-12-16 13:28:53.953903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.622 [2024-12-16 13:28:53.963141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.622 [2024-12-16 13:28:53.963316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:39.622 [2024-12-16 13:28:53.963347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.181 ms 00:24:39.622 [2024-12-16 13:28:53.963357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.622 [2024-12-16 13:28:53.990375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.622 [2024-12-16 13:28:53.990559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:39.622 [2024-12-16 13:28:53.990582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.990 ms 00:24:39.622 [2024-12-16 
13:28:53.990590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.622 [2024-12-16 13:28:54.016367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.622 [2024-12-16 13:28:54.016417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:39.622 [2024-12-16 13:28:54.016430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.614 ms 00:24:39.622 [2024-12-16 13:28:54.016449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.622 [2024-12-16 13:28:54.041925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.622 [2024-12-16 13:28:54.041975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:39.622 [2024-12-16 13:28:54.041987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.428 ms 00:24:39.622 [2024-12-16 13:28:54.041995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.622 [2024-12-16 13:28:54.066813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.622 [2024-12-16 13:28:54.066863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:39.622 [2024-12-16 13:28:54.066875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.714 ms 00:24:39.622 [2024-12-16 13:28:54.066883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.622 [2024-12-16 13:28:54.066929] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:39.622 [2024-12-16 13:28:54.066948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:39.622 [2024-12-16 13:28:54.066960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:24:39.622 [2024-12-16 13:28:54.066970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.066979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.066988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.066996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: 
free 00:24:39.622 [2024-12-16 13:28:54.067068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 
261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:39.622 [2024-12-16 13:28:54.067382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067699] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:39.623 [2024-12-16 13:28:54.067809] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:39.623 [2024-12-16 13:28:54.067819] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 631ac7f5-b8f3-43af-8b2c-7356abbb6320 00:24:39.623 [2024-12-16 13:28:54.067834] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:24:39.623 [2024-12-16 13:28:54.067842] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 171968 00:24:39.623 [2024-12-16 13:28:54.067851] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 169984 00:24:39.623 [2024-12-16 13:28:54.067860] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0117 00:24:39.623 [2024-12-16 13:28:54.067868] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:39.623 [2024-12-16 13:28:54.067877] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:39.623 [2024-12-16 13:28:54.067886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:39.623 [2024-12-16 13:28:54.067894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:39.623 [2024-12-16 13:28:54.067908] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:39.623 [2024-12-16 13:28:54.067916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.623 [2024-12-16 13:28:54.067924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:39.623 [2024-12-16 13:28:54.067932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.990 ms 00:24:39.623 [2024-12-16 13:28:54.067940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.623 [2024-12-16 13:28:54.082617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.623 [2024-12-16 13:28:54.082670] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:39.623 [2024-12-16 13:28:54.082683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.641 ms 00:24:39.623 [2024-12-16 13:28:54.082703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.623 [2024-12-16 13:28:54.082956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.623 [2024-12-16 13:28:54.082966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:39.623 [2024-12-16 13:28:54.082975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:24:39.623 [2024-12-16 13:28:54.082991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.623 [2024-12-16 13:28:54.125609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.623 [2024-12-16 13:28:54.125672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:39.623 [2024-12-16 13:28:54.125685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.623 [2024-12-16 13:28:54.125694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.623 [2024-12-16 13:28:54.125752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.623 [2024-12-16 13:28:54.125764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:39.623 [2024-12-16 13:28:54.125774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.623 [2024-12-16 13:28:54.125789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.623 [2024-12-16 13:28:54.125867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.623 [2024-12-16 13:28:54.125879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:39.623 [2024-12-16 13:28:54.125888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.623 [2024-12-16 13:28:54.125896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.623 [2024-12-16 13:28:54.125913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.623 [2024-12-16 13:28:54.125923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:39.623 [2024-12-16 13:28:54.125931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.623 [2024-12-16 13:28:54.125940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.885 [2024-12-16 13:28:54.215592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.885 [2024-12-16 13:28:54.215673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:39.885 [2024-12-16 13:28:54.215688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.885 [2024-12-16 13:28:54.215697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.885 [2024-12-16 13:28:54.250592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.885 [2024-12-16 13:28:54.250671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:39.885 [2024-12-16 13:28:54.250684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.885 [2024-12-16 13:28:54.250713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.885 [2024-12-16 13:28:54.250822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:24:39.885 [2024-12-16 13:28:54.250835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:39.885 [2024-12-16 13:28:54.250844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.885 [2024-12-16 13:28:54.250853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.885 [2024-12-16 13:28:54.250898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.885 [2024-12-16 13:28:54.250909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:39.885 [2024-12-16 13:28:54.250918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.885 [2024-12-16 13:28:54.250928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.885 [2024-12-16 13:28:54.251043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.885 [2024-12-16 13:28:54.251055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:39.885 [2024-12-16 13:28:54.251064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.885 [2024-12-16 13:28:54.251073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.885 [2024-12-16 13:28:54.251114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.885 [2024-12-16 13:28:54.251124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:39.885 [2024-12-16 13:28:54.251133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.885 [2024-12-16 13:28:54.251141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.885 [2024-12-16 13:28:54.251198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.885 [2024-12-16 13:28:54.251208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:39.885 [2024-12-16 13:28:54.251217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.885 [2024-12-16 13:28:54.251226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.885 [2024-12-16 13:28:54.251286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.885 [2024-12-16 13:28:54.251296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:39.885 [2024-12-16 13:28:54.251305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.885 [2024-12-16 13:28:54.251314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.885 [2024-12-16 13:28:54.251479] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 372.127 ms, result 0 00:24:40.828 00:24:40.828 00:24:40.828 13:28:55 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:42.742 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:42.742 13:28:56 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:42.742 [2024-12-16 13:28:56.988899] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
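The 'testfile: OK' result above confirms the first 262144-block region survived the restart with its digest intact. The @95 invocation then repeats the read-back for the second half of the written range: following dd conventions, --skip=262144 offsets the read on the input bdev by one region length, so testfile2 receives the second 262144 blocks before its contents are compared against the digest recorded at @90. A plausible shape for that final comparison (a sketch only; the script's own variable names and digest handling are not shown in this log):

  expected=$(md5sum testfile2 | cut -d' ' -f1)   # captured at @90, before the device came back up
  spdk_dd --ib=ftl0 --of=testfile2 --count=262144 --skip=262144 --json=ftl.json
  actual=$(md5sum testfile2 | cut -d' ' -f1)
  [ "$expected" = "$actual" ]                    # a non-zero exit here would fail the test
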
00:24:42.742 [2024-12-16 13:28:56.988993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77541 ] 00:24:42.742 [2024-12-16 13:28:57.133959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.004 [2024-12-16 13:28:57.361900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.264 [2024-12-16 13:28:57.686646] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:43.264 [2024-12-16 13:28:57.686759] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:43.527 [2024-12-16 13:28:57.847925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.527 [2024-12-16 13:28:57.847996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:43.527 [2024-12-16 13:28:57.848013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:43.527 [2024-12-16 13:28:57.848026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.527 [2024-12-16 13:28:57.848089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.527 [2024-12-16 13:28:57.848100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:43.527 [2024-12-16 13:28:57.848110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:43.527 [2024-12-16 13:28:57.848118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.527 [2024-12-16 13:28:57.848140] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:43.527 [2024-12-16 13:28:57.848955] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:43.527 [2024-12-16 13:28:57.848977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.527 [2024-12-16 13:28:57.848986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:43.527 [2024-12-16 13:28:57.848997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.842 ms 00:24:43.527 [2024-12-16 13:28:57.849006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.527 [2024-12-16 13:28:57.851422] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:43.527 [2024-12-16 13:28:57.866742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.527 [2024-12-16 13:28:57.866797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:43.527 [2024-12-16 13:28:57.866813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.325 ms 00:24:43.527 [2024-12-16 13:28:57.866822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.527 [2024-12-16 13:28:57.866912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.527 [2024-12-16 13:28:57.866923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:43.527 [2024-12-16 13:28:57.866933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:43.527 [2024-12-16 13:28:57.866941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.527 [2024-12-16 13:28:57.878947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.527 [2024-12-16 
13:28:57.878997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:43.527 [2024-12-16 13:28:57.879010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.921 ms 00:24:43.527 [2024-12-16 13:28:57.879018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.527 [2024-12-16 13:28:57.879134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.527 [2024-12-16 13:28:57.879145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:43.527 [2024-12-16 13:28:57.879154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:24:43.527 [2024-12-16 13:28:57.879162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.527 [2024-12-16 13:28:57.879228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.528 [2024-12-16 13:28:57.879237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:43.528 [2024-12-16 13:28:57.879246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:43.528 [2024-12-16 13:28:57.879254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.528 [2024-12-16 13:28:57.879290] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:43.528 [2024-12-16 13:28:57.884153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.528 [2024-12-16 13:28:57.884198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:43.528 [2024-12-16 13:28:57.884209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.880 ms 00:24:43.528 [2024-12-16 13:28:57.884217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.528 [2024-12-16 13:28:57.884263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.528 [2024-12-16 13:28:57.884272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:43.528 [2024-12-16 13:28:57.884281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:43.528 [2024-12-16 13:28:57.884294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.528 [2024-12-16 13:28:57.884336] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:43.528 [2024-12-16 13:28:57.884363] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:43.528 [2024-12-16 13:28:57.884402] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:43.528 [2024-12-16 13:28:57.884418] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:43.528 [2024-12-16 13:28:57.884500] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:43.528 [2024-12-16 13:28:57.884511] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:43.528 [2024-12-16 13:28:57.884526] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:43.528 [2024-12-16 13:28:57.884538] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:43.528 [2024-12-16 13:28:57.884547] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:43.528 [2024-12-16 13:28:57.884556] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:43.528 [2024-12-16 13:28:57.884564] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:43.528 [2024-12-16 13:28:57.884572] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:43.528 [2024-12-16 13:28:57.884580] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:43.528 [2024-12-16 13:28:57.884588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.528 [2024-12-16 13:28:57.884596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:43.528 [2024-12-16 13:28:57.884604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:24:43.528 [2024-12-16 13:28:57.884611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.528 [2024-12-16 13:28:57.884706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.528 [2024-12-16 13:28:57.884716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:43.528 [2024-12-16 13:28:57.884724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:24:43.528 [2024-12-16 13:28:57.884732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.528 [2024-12-16 13:28:57.884807] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:43.528 [2024-12-16 13:28:57.884819] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:43.528 [2024-12-16 13:28:57.884828] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:43.528 [2024-12-16 13:28:57.884837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.528 [2024-12-16 13:28:57.884845] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:43.528 [2024-12-16 13:28:57.884852] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:43.528 [2024-12-16 13:28:57.884859] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:43.528 [2024-12-16 13:28:57.884866] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:43.528 [2024-12-16 13:28:57.884874] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:43.528 [2024-12-16 13:28:57.884881] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:43.528 [2024-12-16 13:28:57.884888] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:43.528 [2024-12-16 13:28:57.884895] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:43.528 [2024-12-16 13:28:57.884905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:43.528 [2024-12-16 13:28:57.884914] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:43.528 [2024-12-16 13:28:57.884922] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:43.528 [2024-12-16 13:28:57.884928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.528 [2024-12-16 13:28:57.884944] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:43.528 [2024-12-16 13:28:57.884951] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:43.528 [2024-12-16 13:28:57.884958] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:24:43.528 [2024-12-16 13:28:57.884965] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:43.528 [2024-12-16 13:28:57.884972] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:43.528 [2024-12-16 13:28:57.884979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:43.528 [2024-12-16 13:28:57.884986] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:43.528 [2024-12-16 13:28:57.884993] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:43.528 [2024-12-16 13:28:57.885000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:43.528 [2024-12-16 13:28:57.885007] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:43.528 [2024-12-16 13:28:57.885014] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:43.528 [2024-12-16 13:28:57.885021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:43.528 [2024-12-16 13:28:57.885028] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:43.528 [2024-12-16 13:28:57.885036] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:43.528 [2024-12-16 13:28:57.885043] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:43.528 [2024-12-16 13:28:57.885049] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:43.528 [2024-12-16 13:28:57.885056] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:43.528 [2024-12-16 13:28:57.885062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:43.528 [2024-12-16 13:28:57.885069] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:43.528 [2024-12-16 13:28:57.885075] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:43.528 [2024-12-16 13:28:57.885084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:43.528 [2024-12-16 13:28:57.885092] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:43.528 [2024-12-16 13:28:57.885099] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:43.528 [2024-12-16 13:28:57.885108] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:43.528 [2024-12-16 13:28:57.885114] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:43.528 [2024-12-16 13:28:57.885126] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:43.528 [2024-12-16 13:28:57.885133] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:43.528 [2024-12-16 13:28:57.885141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.528 [2024-12-16 13:28:57.885151] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:43.528 [2024-12-16 13:28:57.885159] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:43.528 [2024-12-16 13:28:57.885165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:43.528 [2024-12-16 13:28:57.885173] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:43.528 [2024-12-16 13:28:57.885180] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:43.528 [2024-12-16 13:28:57.885187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:43.528 [2024-12-16 13:28:57.885196] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:43.528 [2024-12-16 13:28:57.885205] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:43.528 [2024-12-16 13:28:57.885215] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:43.528 [2024-12-16 13:28:57.885222] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:43.528 [2024-12-16 13:28:57.885229] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:43.528 [2024-12-16 13:28:57.885236] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:43.528 [2024-12-16 13:28:57.885243] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:43.528 [2024-12-16 13:28:57.885250] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:43.528 [2024-12-16 13:28:57.885257] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:43.529 [2024-12-16 13:28:57.885264] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:43.529 [2024-12-16 13:28:57.885271] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:43.529 [2024-12-16 13:28:57.885279] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:24:43.529 [2024-12-16 13:28:57.885286] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:43.529 [2024-12-16 13:28:57.885293] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:43.529 [2024-12-16 13:28:57.885301] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:43.529 [2024-12-16 13:28:57.885310] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:43.529 [2024-12-16 13:28:57.885319] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:43.529 [2024-12-16 13:28:57.885328] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:43.529 [2024-12-16 13:28:57.885335] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:43.529 [2024-12-16 13:28:57.885344] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:43.529 [2024-12-16 13:28:57.885352] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:24:43.529 [2024-12-16 13:28:57.885360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.529 [2024-12-16 13:28:57.885368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:43.529 [2024-12-16 13:28:57.885376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:24:43.529 [2024-12-16 13:28:57.885384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.529 [2024-12-16 13:28:57.907764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.529 [2024-12-16 13:28:57.907819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:43.529 [2024-12-16 13:28:57.907832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.332 ms 00:24:43.529 [2024-12-16 13:28:57.907848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.529 [2024-12-16 13:28:57.907953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.529 [2024-12-16 13:28:57.907962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:43.529 [2024-12-16 13:28:57.907972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:24:43.529 [2024-12-16 13:28:57.907981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.529 [2024-12-16 13:28:57.956167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.529 [2024-12-16 13:28:57.956229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:43.529 [2024-12-16 13:28:57.956244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.125 ms 00:24:43.529 [2024-12-16 13:28:57.956253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.529 [2024-12-16 13:28:57.956311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.529 [2024-12-16 13:28:57.956321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:43.529 [2024-12-16 13:28:57.956331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:43.529 [2024-12-16 13:28:57.956347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.529 [2024-12-16 13:28:57.957128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.529 [2024-12-16 13:28:57.957171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:43.529 [2024-12-16 13:28:57.957183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:24:43.529 [2024-12-16 13:28:57.957200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.529 [2024-12-16 13:28:57.957351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.529 [2024-12-16 13:28:57.957362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:43.529 [2024-12-16 13:28:57.957372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:24:43.529 [2024-12-16 13:28:57.957380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.529 [2024-12-16 13:28:57.976944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.529 [2024-12-16 13:28:57.976991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:43.529 [2024-12-16 13:28:57.977003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.541 ms 00:24:43.529 [2024-12-16 
13:28:57.977012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.529 [2024-12-16 13:28:57.992800] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:43.529 [2024-12-16 13:28:57.992852] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:43.529 [2024-12-16 13:28:57.992866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.529 [2024-12-16 13:28:57.992875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:43.529 [2024-12-16 13:28:57.992885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.730 ms 00:24:43.529 [2024-12-16 13:28:57.992894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.529 [2024-12-16 13:28:58.019804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.529 [2024-12-16 13:28:58.019877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:43.529 [2024-12-16 13:28:58.019891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.850 ms 00:24:43.529 [2024-12-16 13:28:58.019901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.529 [2024-12-16 13:28:58.033462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.529 [2024-12-16 13:28:58.033513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:43.529 [2024-12-16 13:28:58.033528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.499 ms 00:24:43.529 [2024-12-16 13:28:58.033537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.529 [2024-12-16 13:28:58.046598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.529 [2024-12-16 13:28:58.046668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:43.529 [2024-12-16 13:28:58.046681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.010 ms 00:24:43.529 [2024-12-16 13:28:58.046690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.529 [2024-12-16 13:28:58.047115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.529 [2024-12-16 13:28:58.047130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:43.529 [2024-12-16 13:28:58.047140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:24:43.529 [2024-12-16 13:28:58.047149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.791 [2024-12-16 13:28:58.120167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.791 [2024-12-16 13:28:58.120247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:43.791 [2024-12-16 13:28:58.120263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.997 ms 00:24:43.791 [2024-12-16 13:28:58.120273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.791 [2024-12-16 13:28:58.132506] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:43.791 [2024-12-16 13:28:58.136584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.791 [2024-12-16 13:28:58.136643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:43.791 [2024-12-16 13:28:58.136655] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.246 ms 00:24:43.791 [2024-12-16 13:28:58.136670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.791 [2024-12-16 13:28:58.136757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.791 [2024-12-16 13:28:58.136769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:43.791 [2024-12-16 13:28:58.136779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:43.791 [2024-12-16 13:28:58.136788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.791 [2024-12-16 13:28:58.137927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.791 [2024-12-16 13:28:58.138130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:43.791 [2024-12-16 13:28:58.138153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.101 ms 00:24:43.791 [2024-12-16 13:28:58.138162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.791 [2024-12-16 13:28:58.139808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.791 [2024-12-16 13:28:58.139854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:43.791 [2024-12-16 13:28:58.139866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.605 ms 00:24:43.791 [2024-12-16 13:28:58.139874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.791 [2024-12-16 13:28:58.139916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.791 [2024-12-16 13:28:58.139925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:43.791 [2024-12-16 13:28:58.139941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:43.791 [2024-12-16 13:28:58.139949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.791 [2024-12-16 13:28:58.139992] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:43.791 [2024-12-16 13:28:58.140003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.791 [2024-12-16 13:28:58.140015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:43.791 [2024-12-16 13:28:58.140024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:43.791 [2024-12-16 13:28:58.140032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.791 [2024-12-16 13:28:58.167484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.791 [2024-12-16 13:28:58.167536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:43.791 [2024-12-16 13:28:58.167551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.431 ms 00:24:43.791 [2024-12-16 13:28:58.167559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.791 [2024-12-16 13:28:58.167681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.791 [2024-12-16 13:28:58.167694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:43.791 [2024-12-16 13:28:58.167704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:24:43.791 [2024-12-16 13:28:58.167713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.791 [2024-12-16 13:28:58.169219] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 320.706 ms, result 0 00:24:45.230  [2024-12-16T13:29:00.379Z] Copying: 20/1024 [MB] (20 MBps) [2024-12-16T13:29:01.767Z] Copying: 37/1024 [MB] (17 MBps) [2024-12-16T13:29:02.712Z] Copying: 57/1024 [MB] (19 MBps) [2024-12-16T13:29:03.656Z] Copying: 68/1024 [MB] (10 MBps) [2024-12-16T13:29:04.600Z] Copying: 86/1024 [MB] (17 MBps) [2024-12-16T13:29:05.550Z] Copying: 106/1024 [MB] (20 MBps) [2024-12-16T13:29:06.493Z] Copying: 127/1024 [MB] (20 MBps) [2024-12-16T13:29:07.436Z] Copying: 140/1024 [MB] (13 MBps) [2024-12-16T13:29:08.380Z] Copying: 155/1024 [MB] (14 MBps) [2024-12-16T13:29:09.768Z] Copying: 170/1024 [MB] (14 MBps) [2024-12-16T13:29:10.711Z] Copying: 195/1024 [MB] (25 MBps) [2024-12-16T13:29:11.657Z] Copying: 213/1024 [MB] (17 MBps) [2024-12-16T13:29:12.603Z] Copying: 228/1024 [MB] (15 MBps) [2024-12-16T13:29:13.549Z] Copying: 239/1024 [MB] (10 MBps) [2024-12-16T13:29:14.493Z] Copying: 252/1024 [MB] (13 MBps) [2024-12-16T13:29:15.437Z] Copying: 273/1024 [MB] (20 MBps) [2024-12-16T13:29:16.379Z] Copying: 293/1024 [MB] (20 MBps) [2024-12-16T13:29:17.768Z] Copying: 313/1024 [MB] (19 MBps) [2024-12-16T13:29:18.712Z] Copying: 326/1024 [MB] (12 MBps) [2024-12-16T13:29:19.657Z] Copying: 337/1024 [MB] (10 MBps) [2024-12-16T13:29:20.602Z] Copying: 354/1024 [MB] (16 MBps) [2024-12-16T13:29:21.547Z] Copying: 375/1024 [MB] (21 MBps) [2024-12-16T13:29:22.492Z] Copying: 390/1024 [MB] (15 MBps) [2024-12-16T13:29:23.435Z] Copying: 408/1024 [MB] (17 MBps) [2024-12-16T13:29:24.380Z] Copying: 421/1024 [MB] (13 MBps) [2024-12-16T13:29:25.768Z] Copying: 435/1024 [MB] (14 MBps) [2024-12-16T13:29:26.713Z] Copying: 449/1024 [MB] (14 MBps) [2024-12-16T13:29:27.658Z] Copying: 471/1024 [MB] (21 MBps) [2024-12-16T13:29:28.692Z] Copying: 491/1024 [MB] (19 MBps) [2024-12-16T13:29:29.639Z] Copying: 507/1024 [MB] (15 MBps) [2024-12-16T13:29:30.585Z] Copying: 528/1024 [MB] (20 MBps) [2024-12-16T13:29:31.530Z] Copying: 550/1024 [MB] (21 MBps) [2024-12-16T13:29:32.475Z] Copying: 578/1024 [MB] (28 MBps) [2024-12-16T13:29:33.420Z] Copying: 606/1024 [MB] (27 MBps) [2024-12-16T13:29:34.365Z] Copying: 629/1024 [MB] (23 MBps) [2024-12-16T13:29:35.759Z] Copying: 652/1024 [MB] (23 MBps) [2024-12-16T13:29:36.705Z] Copying: 675/1024 [MB] (22 MBps) [2024-12-16T13:29:37.649Z] Copying: 687/1024 [MB] (12 MBps) [2024-12-16T13:29:38.594Z] Copying: 706/1024 [MB] (19 MBps) [2024-12-16T13:29:39.538Z] Copying: 721/1024 [MB] (14 MBps) [2024-12-16T13:29:40.481Z] Copying: 738/1024 [MB] (17 MBps) [2024-12-16T13:29:41.426Z] Copying: 751/1024 [MB] (12 MBps) [2024-12-16T13:29:42.372Z] Copying: 766/1024 [MB] (14 MBps) [2024-12-16T13:29:43.759Z] Copying: 776/1024 [MB] (10 MBps) [2024-12-16T13:29:44.703Z] Copying: 787/1024 [MB] (10 MBps) [2024-12-16T13:29:45.649Z] Copying: 797/1024 [MB] (10 MBps) [2024-12-16T13:29:46.606Z] Copying: 808/1024 [MB] (10 MBps) [2024-12-16T13:29:47.549Z] Copying: 819/1024 [MB] (10 MBps) [2024-12-16T13:29:48.492Z] Copying: 830/1024 [MB] (10 MBps) [2024-12-16T13:29:49.437Z] Copying: 841/1024 [MB] (11 MBps) [2024-12-16T13:29:50.380Z] Copying: 852/1024 [MB] (10 MBps) [2024-12-16T13:29:51.768Z] Copying: 862/1024 [MB] (10 MBps) [2024-12-16T13:29:52.711Z] Copying: 874/1024 [MB] (11 MBps) [2024-12-16T13:29:53.655Z] Copying: 885/1024 [MB] (11 MBps) [2024-12-16T13:29:54.599Z] Copying: 902/1024 [MB] (16 MBps) [2024-12-16T13:29:55.545Z] Copying: 915/1024 [MB] (13 MBps) [2024-12-16T13:29:56.487Z] Copying: 936/1024 [MB] (20 MBps) [2024-12-16T13:29:57.467Z] Copying: 947/1024 
[MB] (11 MBps) [2024-12-16T13:29:58.429Z] Copying: 962/1024 [MB] (14 MBps) [2024-12-16T13:29:59.372Z] Copying: 981/1024 [MB] (18 MBps) [2024-12-16T13:30:00.759Z] Copying: 1001/1024 [MB] (19 MBps) [2024-12-16T13:30:00.759Z] Copying: 1021/1024 [MB] (20 MBps) [2024-12-16T13:30:01.020Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-16 13:30:00.901140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.447 [2024-12-16 13:30:00.901257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:46.447 [2024-12-16 13:30:00.901277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:46.447 [2024-12-16 13:30:00.901287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.447 [2024-12-16 13:30:00.901316] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:46.447 [2024-12-16 13:30:00.904696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.447 [2024-12-16 13:30:00.904754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:46.447 [2024-12-16 13:30:00.904766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.361 ms 00:25:46.447 [2024-12-16 13:30:00.904775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.447 [2024-12-16 13:30:00.905895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.447 [2024-12-16 13:30:00.905923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:46.447 [2024-12-16 13:30:00.905937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms 00:25:46.447 [2024-12-16 13:30:00.905947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.447 [2024-12-16 13:30:00.909411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.447 [2024-12-16 13:30:00.909433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:46.447 [2024-12-16 13:30:00.909451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.448 ms 00:25:46.447 [2024-12-16 13:30:00.909460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.447 [2024-12-16 13:30:00.916295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.447 [2024-12-16 13:30:00.916347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:25:46.447 [2024-12-16 13:30:00.916358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.815 ms 00:25:46.447 [2024-12-16 13:30:00.916367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.447 [2024-12-16 13:30:00.948192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.447 [2024-12-16 13:30:00.948254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:46.447 [2024-12-16 13:30:00.948270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.728 ms 00:25:46.447 [2024-12-16 13:30:00.948279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.447 [2024-12-16 13:30:00.966492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.447 [2024-12-16 13:30:00.966551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:46.447 [2024-12-16 13:30:00.966566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.152 ms 00:25:46.447 [2024-12-16 13:30:00.966584] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.447 [2024-12-16 13:30:00.976528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.447 [2024-12-16 13:30:00.976583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:46.447 [2024-12-16 13:30:00.976596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.874 ms 00:25:46.447 [2024-12-16 13:30:00.976605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.447 [2024-12-16 13:30:01.003843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.447 [2024-12-16 13:30:01.003895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:46.447 [2024-12-16 13:30:01.003907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.220 ms 00:25:46.447 [2024-12-16 13:30:01.003916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.708 [2024-12-16 13:30:01.030703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.708 [2024-12-16 13:30:01.030756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:46.708 [2024-12-16 13:30:01.030785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.737 ms 00:25:46.708 [2024-12-16 13:30:01.030793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.708 [2024-12-16 13:30:01.056892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.708 [2024-12-16 13:30:01.056945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:46.708 [2024-12-16 13:30:01.056958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.048 ms 00:25:46.708 [2024-12-16 13:30:01.056966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.708 [2024-12-16 13:30:01.082931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.708 [2024-12-16 13:30:01.082981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:46.708 [2024-12-16 13:30:01.082994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.849 ms 00:25:46.708 [2024-12-16 13:30:01.083002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.708 [2024-12-16 13:30:01.083052] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:46.708 [2024-12-16 13:30:01.083080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:46.708 [2024-12-16 13:30:01.083092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:25:46.708 [2024-12-16 13:30:01.083102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:46.708 [2024-12-16 13:30:01.083110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:46.708 [2024-12-16 13:30:01.083119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:46.708 [2024-12-16 13:30:01.083127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:46.708 [2024-12-16 13:30:01.083134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:46.708 [2024-12-16 13:30:01.083142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
8: 0 / 261120 wr_cnt: 0 state: free 00:25:46.708 [2024-12-16 13:30:01.083150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:46.708 [2024-12-16 13:30:01.083158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:46.708 [2024-12-16 13:30:01.083166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:46.708 [2024-12-16 13:30:01.083174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:46.708 [2024-12-16 13:30:01.083182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:46.708 [2024-12-16 13:30:01.083190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:46.708 [2024-12-16 13:30:01.083198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:46.708 [2024-12-16 13:30:01.083206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:46.708 [2024-12-16 13:30:01.083214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:46.708 [2024-12-16 13:30:01.083222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:46.708 [2024-12-16 13:30:01.083230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:46.708 [2024-12-16 13:30:01.083237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:46.708 [2024-12-16 13:30:01.083245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083344] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083544] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 
13:30:01.083777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:46.709 [2024-12-16 13:30:01.083928] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:46.709 [2024-12-16 13:30:01.083937] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 631ac7f5-b8f3-43af-8b2c-7356abbb6320 00:25:46.709 [2024-12-16 13:30:01.083946] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:25:46.709 [2024-12-16 13:30:01.083955] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:46.709 [2024-12-16 13:30:01.083963] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:46.709 [2024-12-16 13:30:01.083980] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:46.709 [2024-12-16 13:30:01.083989] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:46.709 [2024-12-16 13:30:01.083998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:46.709 [2024-12-16 13:30:01.084006] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:46.709 [2024-12-16 13:30:01.084023] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:46.709 [2024-12-16 13:30:01.084030] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:46.709 [2024-12-16 13:30:01.084038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.709 [2024-12-16 13:30:01.084046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:46.709 [2024-12-16 13:30:01.084060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:25:46.709 [2024-12-16 13:30:01.084068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.709 [2024-12-16 13:30:01.098604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.709 [2024-12-16 13:30:01.098838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:46.709 [2024-12-16 13:30:01.098859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.481 ms 00:25:46.709 [2024-12-16 13:30:01.098867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.709 [2024-12-16 13:30:01.099130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.709 [2024-12-16 13:30:01.099142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:46.709 [2024-12-16 13:30:01.099152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:25:46.709 [2024-12-16 13:30:01.099159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.709 [2024-12-16 13:30:01.141915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.709 [2024-12-16 13:30:01.142133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:46.709 [2024-12-16 13:30:01.142156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.709 [2024-12-16 13:30:01.142166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.709 [2024-12-16 13:30:01.142250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.709 [2024-12-16 13:30:01.142260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:46.709 [2024-12-16 13:30:01.142269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.709 [2024-12-16 13:30:01.142278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.709 [2024-12-16 13:30:01.142366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.709 [2024-12-16 13:30:01.142377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:46.709 [2024-12-16 13:30:01.142386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.709 [2024-12-16 13:30:01.142394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.709 [2024-12-16 13:30:01.142411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.709 [2024-12-16 13:30:01.142425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:46.709 [2024-12-16 13:30:01.142434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.709 [2024-12-16 13:30:01.142442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.709 [2024-12-16 13:30:01.232827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
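Each management step in the trace is reported by trace_step as a four-entry NOTICE group: direction (Action or Rollback), name, duration, and status. When the console output is captured one entry per line, a short awk filter can condense those groups into a name/duration table. This is a hedged helper sketch, not part of the harness; build.log is a hypothetical capture of the output above:

  # Print "step name -> duration" for every trace_step group.
  awk '/trace_step.*name:/     { sub(/.*name: /, "");     name = $0 }
       /trace_step.*duration:/ { sub(/.*duration: /, ""); print name " -> " $0 }' build.log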
00:25:46.709 [2024-12-16 13:30:01.233050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:46.709 [2024-12-16 13:30:01.233076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.709 [2024-12-16 13:30:01.233086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.709 [2024-12-16 13:30:01.268736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.709 [2024-12-16 13:30:01.268802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:46.709 [2024-12-16 13:30:01.268815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.709 [2024-12-16 13:30:01.268824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.709 [2024-12-16 13:30:01.268918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.709 [2024-12-16 13:30:01.268929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:46.709 [2024-12-16 13:30:01.268939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.709 [2024-12-16 13:30:01.268949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.709 [2024-12-16 13:30:01.268996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.709 [2024-12-16 13:30:01.269007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:46.709 [2024-12-16 13:30:01.269021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.709 [2024-12-16 13:30:01.269030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.709 [2024-12-16 13:30:01.269144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.709 [2024-12-16 13:30:01.269155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:46.709 [2024-12-16 13:30:01.269164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.709 [2024-12-16 13:30:01.269172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.709 [2024-12-16 13:30:01.269209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.709 [2024-12-16 13:30:01.269220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:46.710 [2024-12-16 13:30:01.269229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.710 [2024-12-16 13:30:01.269242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.710 [2024-12-16 13:30:01.269294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.710 [2024-12-16 13:30:01.269311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:46.710 [2024-12-16 13:30:01.269321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.710 [2024-12-16 13:30:01.269329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.710 [2024-12-16 13:30:01.269389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.710 [2024-12-16 13:30:01.269400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:46.710 [2024-12-16 13:30:01.269413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.710 [2024-12-16 13:30:01.269422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.710 [2024-12-16 13:30:01.269583] 
mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 368.401 ms, result 0 00:25:48.096 00:25:48.096 00:25:48.096 13:30:02 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:50.007 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:25:50.007 13:30:04 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:25:50.007 13:30:04 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:25:50.007 13:30:04 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:50.007 13:30:04 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:50.007 13:30:04 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:50.007 13:30:04 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:50.007 13:30:04 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:50.007 Process with pid 75651 is not found 00:25:50.007 13:30:04 -- ftl/dirty_shutdown.sh@37 -- # killprocess 75651 00:25:50.007 13:30:04 -- common/autotest_common.sh@936 -- # '[' -z 75651 ']' 00:25:50.007 13:30:04 -- common/autotest_common.sh@940 -- # kill -0 75651 00:25:50.007 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (75651) - No such process 00:25:50.007 13:30:04 -- common/autotest_common.sh@963 -- # echo 'Process with pid 75651 is not found' 00:25:50.007 13:30:04 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:25:50.268 Remove shared memory files 00:25:50.268 13:30:04 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:25:50.268 13:30:04 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:50.268 13:30:04 -- ftl/common.sh@205 -- # rm -f rm -f 00:25:50.268 13:30:04 -- ftl/common.sh@206 -- # rm -f rm -f 00:25:50.268 13:30:04 -- ftl/common.sh@207 -- # rm -f rm -f 00:25:50.268 13:30:04 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:50.268 13:30:04 -- ftl/common.sh@209 -- # rm -f rm -f 00:25:50.268 00:25:50.268 real 4m4.510s 00:25:50.268 user 4m30.889s 00:25:50.268 sys 0m27.821s 00:25:50.268 13:30:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:50.268 13:30:04 -- common/autotest_common.sh@10 -- # set +x 00:25:50.528 ************************************ 00:25:50.528 END TEST ftl_dirty_shutdown 00:25:50.528 ************************************ 00:25:50.528 13:30:04 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:25:50.528 13:30:04 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:25:50.528 13:30:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:50.528 13:30:04 -- common/autotest_common.sh@10 -- # set +x 00:25:50.528 ************************************ 00:25:50.528 START TEST ftl_upgrade_shutdown 00:25:50.528 ************************************ 00:25:50.528 13:30:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:25:50.528 * Looking for test storage... 
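The killprocess trace above shows the standard liveness idiom: kill -0 delivers no signal and only tests whether the pid can be signaled, so the harness probes the process before trying to kill it, and prints the not-found message when the probe fails (as it does here, because pid 75651 exited earlier). A hedged sketch of that idiom, assuming rather than quoting the autotest_common.sh implementation:

  killprocess() {
      local pid=$1
      # Signal 0 probes the pid without delivering anything.
      if kill -0 "$pid" 2>/dev/null; then
          kill "$pid" && wait "$pid"
      else
          echo "Process with pid $pid is not found"
      fi
  }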
00:25:50.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:50.528 13:30:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:50.528 13:30:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:50.528 13:30:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:50.528 13:30:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:50.528 13:30:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:50.528 13:30:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:50.528 13:30:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:50.528 13:30:05 -- scripts/common.sh@335 -- # IFS=.-: 00:25:50.528 13:30:05 -- scripts/common.sh@335 -- # read -ra ver1 00:25:50.528 13:30:05 -- scripts/common.sh@336 -- # IFS=.-: 00:25:50.528 13:30:05 -- scripts/common.sh@336 -- # read -ra ver2 00:25:50.528 13:30:05 -- scripts/common.sh@337 -- # local 'op=<' 00:25:50.528 13:30:05 -- scripts/common.sh@339 -- # ver1_l=2 00:25:50.528 13:30:05 -- scripts/common.sh@340 -- # ver2_l=1 00:25:50.528 13:30:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:50.528 13:30:05 -- scripts/common.sh@343 -- # case "$op" in 00:25:50.528 13:30:05 -- scripts/common.sh@344 -- # : 1 00:25:50.528 13:30:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:50.528 13:30:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:50.529 13:30:05 -- scripts/common.sh@364 -- # decimal 1 00:25:50.529 13:30:05 -- scripts/common.sh@352 -- # local d=1 00:25:50.529 13:30:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:50.529 13:30:05 -- scripts/common.sh@354 -- # echo 1 00:25:50.529 13:30:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:50.529 13:30:05 -- scripts/common.sh@365 -- # decimal 2 00:25:50.529 13:30:05 -- scripts/common.sh@352 -- # local d=2 00:25:50.529 13:30:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:50.529 13:30:05 -- scripts/common.sh@354 -- # echo 2 00:25:50.529 13:30:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:50.529 13:30:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:50.529 13:30:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:50.529 13:30:05 -- scripts/common.sh@367 -- # return 0 00:25:50.529 13:30:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:50.529 13:30:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:50.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.529 --rc genhtml_branch_coverage=1 00:25:50.529 --rc genhtml_function_coverage=1 00:25:50.529 --rc genhtml_legend=1 00:25:50.529 --rc geninfo_all_blocks=1 00:25:50.529 --rc geninfo_unexecuted_blocks=1 00:25:50.529 00:25:50.529 ' 00:25:50.529 13:30:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:50.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.529 --rc genhtml_branch_coverage=1 00:25:50.529 --rc genhtml_function_coverage=1 00:25:50.529 --rc genhtml_legend=1 00:25:50.529 --rc geninfo_all_blocks=1 00:25:50.529 --rc geninfo_unexecuted_blocks=1 00:25:50.529 00:25:50.529 ' 00:25:50.529 13:30:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:50.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.529 --rc genhtml_branch_coverage=1 00:25:50.529 --rc genhtml_function_coverage=1 00:25:50.529 --rc genhtml_legend=1 00:25:50.529 --rc geninfo_all_blocks=1 00:25:50.529 --rc geninfo_unexecuted_blocks=1 00:25:50.529 00:25:50.529 ' 00:25:50.529 13:30:05 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:50.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.529 --rc genhtml_branch_coverage=1 00:25:50.529 --rc genhtml_function_coverage=1 00:25:50.529 --rc genhtml_legend=1 00:25:50.529 --rc geninfo_all_blocks=1 00:25:50.529 --rc geninfo_unexecuted_blocks=1 00:25:50.529 00:25:50.529 ' 00:25:50.529 13:30:05 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:50.529 13:30:05 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:25:50.529 13:30:05 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:50.529 13:30:05 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:50.529 13:30:05 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:50.529 13:30:05 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:50.529 13:30:05 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:50.529 13:30:05 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:50.529 13:30:05 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:50.529 13:30:05 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:50.529 13:30:05 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:50.529 13:30:05 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:50.529 13:30:05 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:50.529 13:30:05 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:50.529 13:30:05 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:50.529 13:30:05 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:50.529 13:30:05 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:50.529 13:30:05 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:50.529 13:30:05 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:50.529 13:30:05 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:50.529 13:30:05 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:50.529 13:30:05 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:50.529 13:30:05 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:50.529 13:30:05 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:50.529 13:30:05 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:50.529 13:30:05 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:50.529 13:30:05 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:50.529 13:30:05 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:50.529 13:30:05 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:50.529 13:30:05 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:50.529 13:30:05 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:25:50.529 13:30:05 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:25:50.529 13:30:05 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:25:50.529 13:30:05 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:25:50.529 13:30:05 -- ftl/upgrade_shutdown.sh@21 -- # export 
FTL_BASE_SIZE=20480 00:25:50.529 13:30:05 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:25:50.529 13:30:05 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:25:50.529 13:30:05 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:25:50.529 13:30:05 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:25:50.529 13:30:05 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:25:50.529 13:30:05 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:25:50.529 13:30:05 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:25:50.529 13:30:05 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:25:50.529 13:30:05 -- ftl/common.sh@81 -- # local base_bdev= 00:25:50.529 13:30:05 -- ftl/common.sh@82 -- # local cache_bdev= 00:25:50.529 13:30:05 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:50.529 13:30:05 -- ftl/common.sh@89 -- # spdk_tgt_pid=78304 00:25:50.529 13:30:05 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:50.529 13:30:05 -- ftl/common.sh@91 -- # waitforlisten 78304 00:25:50.529 13:30:05 -- common/autotest_common.sh@829 -- # '[' -z 78304 ']' 00:25:50.529 13:30:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.529 13:30:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:50.529 13:30:05 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:25:50.529 13:30:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.529 13:30:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:50.529 13:30:05 -- common/autotest_common.sh@10 -- # set +x 00:25:50.789 [2024-12-16 13:30:05.144422] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
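The tcp_target_setup step traced above reduces to launching spdk_tgt pinned to core 0 and then driving it over the default UNIX-domain RPC socket. A condensed sketch, assuming the repo layout from this log and a simple readiness poll in place of the script's waitforlisten helper:

# Start the SPDK target and wait until its RPC socket answers.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
# Attach the base PCIe NVMe controller; its namespace shows up as bdev basen1.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
    -b base -t PCIe -a 0000:00:07.0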
00:25:50.789 [2024-12-16 13:30:05.144852] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78304 ] 00:25:50.789 [2024-12-16 13:30:05.303606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.048 [2024-12-16 13:30:05.572063] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:51.048 [2024-12-16 13:30:05.572332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.433 13:30:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:52.433 13:30:06 -- common/autotest_common.sh@862 -- # return 0 00:25:52.433 13:30:06 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:52.433 13:30:06 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:25:52.433 13:30:06 -- ftl/common.sh@99 -- # local params 00:25:52.433 13:30:06 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:52.433 13:30:06 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:25:52.433 13:30:06 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:52.433 13:30:06 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:25:52.433 13:30:06 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:52.433 13:30:06 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:25:52.433 13:30:06 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:52.433 13:30:06 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:25:52.433 13:30:06 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:52.433 13:30:06 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:25:52.433 13:30:06 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:52.433 13:30:06 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:25:52.433 13:30:06 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:25:52.433 13:30:06 -- ftl/common.sh@54 -- # local name=base 00:25:52.433 13:30:06 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:25:52.433 13:30:06 -- ftl/common.sh@56 -- # local size=20480 00:25:52.433 13:30:06 -- ftl/common.sh@59 -- # local base_bdev 00:25:52.433 13:30:06 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0 00:25:52.433 13:30:06 -- ftl/common.sh@60 -- # base_bdev=basen1 00:25:52.433 13:30:06 -- ftl/common.sh@62 -- # local base_size 00:25:52.434 13:30:06 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:25:52.434 13:30:06 -- common/autotest_common.sh@1367 -- # local bdev_name=basen1 00:25:52.434 13:30:06 -- common/autotest_common.sh@1368 -- # local bdev_info 00:25:52.434 13:30:06 -- common/autotest_common.sh@1369 -- # local bs 00:25:52.434 13:30:06 -- common/autotest_common.sh@1370 -- # local nb 00:25:52.434 13:30:06 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:25:52.695 13:30:07 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:25:52.695 { 00:25:52.695 "name": "basen1", 00:25:52.695 "aliases": [ 00:25:52.695 "b470a0a0-d5fe-4d8e-adb1-5b47d0e90951" 00:25:52.695 ], 00:25:52.695 "product_name": "NVMe disk", 00:25:52.695 "block_size": 4096, 00:25:52.695 "num_blocks": 1310720, 00:25:52.695 "uuid": "b470a0a0-d5fe-4d8e-adb1-5b47d0e90951", 00:25:52.695 "assigned_rate_limits": { 00:25:52.695 "rw_ios_per_sec": 0, 00:25:52.695 
"rw_mbytes_per_sec": 0, 00:25:52.695 "r_mbytes_per_sec": 0, 00:25:52.695 "w_mbytes_per_sec": 0 00:25:52.695 }, 00:25:52.695 "claimed": true, 00:25:52.695 "claim_type": "read_many_write_one", 00:25:52.695 "zoned": false, 00:25:52.695 "supported_io_types": { 00:25:52.695 "read": true, 00:25:52.695 "write": true, 00:25:52.695 "unmap": true, 00:25:52.695 "write_zeroes": true, 00:25:52.695 "flush": true, 00:25:52.695 "reset": true, 00:25:52.695 "compare": true, 00:25:52.695 "compare_and_write": false, 00:25:52.695 "abort": true, 00:25:52.695 "nvme_admin": true, 00:25:52.695 "nvme_io": true 00:25:52.695 }, 00:25:52.695 "driver_specific": { 00:25:52.695 "nvme": [ 00:25:52.695 { 00:25:52.695 "pci_address": "0000:00:07.0", 00:25:52.695 "trid": { 00:25:52.695 "trtype": "PCIe", 00:25:52.695 "traddr": "0000:00:07.0" 00:25:52.695 }, 00:25:52.695 "ctrlr_data": { 00:25:52.695 "cntlid": 0, 00:25:52.695 "vendor_id": "0x1b36", 00:25:52.695 "model_number": "QEMU NVMe Ctrl", 00:25:52.695 "serial_number": "12341", 00:25:52.695 "firmware_revision": "8.0.0", 00:25:52.695 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:52.695 "oacs": { 00:25:52.695 "security": 0, 00:25:52.695 "format": 1, 00:25:52.695 "firmware": 0, 00:25:52.695 "ns_manage": 1 00:25:52.695 }, 00:25:52.695 "multi_ctrlr": false, 00:25:52.695 "ana_reporting": false 00:25:52.695 }, 00:25:52.695 "vs": { 00:25:52.695 "nvme_version": "1.4" 00:25:52.695 }, 00:25:52.695 "ns_data": { 00:25:52.695 "id": 1, 00:25:52.695 "can_share": false 00:25:52.695 } 00:25:52.695 } 00:25:52.695 ], 00:25:52.695 "mp_policy": "active_passive" 00:25:52.695 } 00:25:52.695 } 00:25:52.695 ]' 00:25:52.695 13:30:07 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:25:52.695 13:30:07 -- common/autotest_common.sh@1372 -- # bs=4096 00:25:52.695 13:30:07 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:25:52.695 13:30:07 -- common/autotest_common.sh@1373 -- # nb=1310720 00:25:52.695 13:30:07 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:25:52.695 13:30:07 -- common/autotest_common.sh@1377 -- # echo 5120 00:25:52.695 13:30:07 -- ftl/common.sh@63 -- # base_size=5120 00:25:52.695 13:30:07 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:25:52.695 13:30:07 -- ftl/common.sh@67 -- # clear_lvols 00:25:52.695 13:30:07 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:52.695 13:30:07 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:52.956 13:30:07 -- ftl/common.sh@28 -- # stores=b02f95bf-349a-4d1e-abbf-472708657825 00:25:52.956 13:30:07 -- ftl/common.sh@29 -- # for lvs in $stores 00:25:52.956 13:30:07 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b02f95bf-349a-4d1e-abbf-472708657825 00:25:53.217 13:30:07 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:25:53.217 13:30:07 -- ftl/common.sh@68 -- # lvs=ce810e5c-ce9a-492d-bb44-05c42f5c5193 00:25:53.217 13:30:07 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u ce810e5c-ce9a-492d-bb44-05c42f5c5193 00:25:53.478 13:30:07 -- ftl/common.sh@107 -- # base_bdev=98366fea-ab67-429f-92da-6cd10fdb84a0 00:25:53.478 13:30:07 -- ftl/common.sh@108 -- # [[ -z 98366fea-ab67-429f-92da-6cd10fdb84a0 ]] 00:25:53.478 13:30:07 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 98366fea-ab67-429f-92da-6cd10fdb84a0 5120 00:25:53.478 13:30:07 -- ftl/common.sh@35 -- # local name=cache 00:25:53.478 13:30:07 -- 
ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:25:53.478 13:30:07 -- ftl/common.sh@37 -- # local base_bdev=98366fea-ab67-429f-92da-6cd10fdb84a0 00:25:53.478 13:30:07 -- ftl/common.sh@38 -- # local cache_size=5120 00:25:53.478 13:30:07 -- ftl/common.sh@41 -- # get_bdev_size 98366fea-ab67-429f-92da-6cd10fdb84a0 00:25:53.478 13:30:07 -- common/autotest_common.sh@1367 -- # local bdev_name=98366fea-ab67-429f-92da-6cd10fdb84a0 00:25:53.478 13:30:07 -- common/autotest_common.sh@1368 -- # local bdev_info 00:25:53.478 13:30:07 -- common/autotest_common.sh@1369 -- # local bs 00:25:53.478 13:30:07 -- common/autotest_common.sh@1370 -- # local nb 00:25:53.478 13:30:07 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 98366fea-ab67-429f-92da-6cd10fdb84a0 00:25:53.738 13:30:08 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:25:53.738 { 00:25:53.739 "name": "98366fea-ab67-429f-92da-6cd10fdb84a0", 00:25:53.739 "aliases": [ 00:25:53.739 "lvs/basen1p0" 00:25:53.739 ], 00:25:53.739 "product_name": "Logical Volume", 00:25:53.739 "block_size": 4096, 00:25:53.739 "num_blocks": 5242880, 00:25:53.739 "uuid": "98366fea-ab67-429f-92da-6cd10fdb84a0", 00:25:53.739 "assigned_rate_limits": { 00:25:53.739 "rw_ios_per_sec": 0, 00:25:53.739 "rw_mbytes_per_sec": 0, 00:25:53.739 "r_mbytes_per_sec": 0, 00:25:53.739 "w_mbytes_per_sec": 0 00:25:53.739 }, 00:25:53.739 "claimed": false, 00:25:53.739 "zoned": false, 00:25:53.739 "supported_io_types": { 00:25:53.739 "read": true, 00:25:53.739 "write": true, 00:25:53.739 "unmap": true, 00:25:53.739 "write_zeroes": true, 00:25:53.739 "flush": false, 00:25:53.739 "reset": true, 00:25:53.739 "compare": false, 00:25:53.739 "compare_and_write": false, 00:25:53.739 "abort": false, 00:25:53.739 "nvme_admin": false, 00:25:53.739 "nvme_io": false 00:25:53.739 }, 00:25:53.739 "driver_specific": { 00:25:53.739 "lvol": { 00:25:53.739 "lvol_store_uuid": "ce810e5c-ce9a-492d-bb44-05c42f5c5193", 00:25:53.739 "base_bdev": "basen1", 00:25:53.739 "thin_provision": true, 00:25:53.739 "snapshot": false, 00:25:53.739 "clone": false, 00:25:53.739 "esnap_clone": false 00:25:53.739 } 00:25:53.739 } 00:25:53.739 } 00:25:53.739 ]' 00:25:53.739 13:30:08 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:25:53.739 13:30:08 -- common/autotest_common.sh@1372 -- # bs=4096 00:25:53.739 13:30:08 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:25:53.739 13:30:08 -- common/autotest_common.sh@1373 -- # nb=5242880 00:25:53.739 13:30:08 -- common/autotest_common.sh@1376 -- # bdev_size=20480 00:25:53.739 13:30:08 -- common/autotest_common.sh@1377 -- # echo 20480 00:25:53.739 13:30:08 -- ftl/common.sh@41 -- # local base_size=1024 00:25:53.739 13:30:08 -- ftl/common.sh@44 -- # local nvc_bdev 00:25:53.739 13:30:08 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:25:53.999 13:30:08 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:25:53.999 13:30:08 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:25:53.999 13:30:08 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:25:54.261 13:30:08 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:25:54.261 13:30:08 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:25:54.261 13:30:08 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 98366fea-ab67-429f-92da-6cd10fdb84a0 -c cachen1p0 --l2p_dram_limit 2 00:25:54.261 
[2024-12-16 13:30:08.739730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.261 [2024-12-16 13:30:08.739776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:54.261 [2024-12-16 13:30:08.739790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:54.261 [2024-12-16 13:30:08.739799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.261 [2024-12-16 13:30:08.739846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.261 [2024-12-16 13:30:08.739855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:54.261 [2024-12-16 13:30:08.739863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:25:54.261 [2024-12-16 13:30:08.739869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.261 [2024-12-16 13:30:08.739886] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:54.261 [2024-12-16 13:30:08.740485] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:54.261 [2024-12-16 13:30:08.740501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.261 [2024-12-16 13:30:08.740508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:54.261 [2024-12-16 13:30:08.740517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.616 ms 00:25:54.261 [2024-12-16 13:30:08.740524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.261 [2024-12-16 13:30:08.740553] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 26f5be5f-0c91-41c5-8f7e-90baa067dba4 00:25:54.261 [2024-12-16 13:30:08.741873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.261 [2024-12-16 13:30:08.741994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:25:54.261 [2024-12-16 13:30:08.742009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:25:54.261 [2024-12-16 13:30:08.742017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.261 [2024-12-16 13:30:08.748960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.261 [2024-12-16 13:30:08.749067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:54.261 [2024-12-16 13:30:08.749081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.876 ms 00:25:54.261 [2024-12-16 13:30:08.749089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.261 [2024-12-16 13:30:08.749133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.261 [2024-12-16 13:30:08.749141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:54.261 [2024-12-16 13:30:08.749148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:25:54.262 [2024-12-16 13:30:08.749158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.262 [2024-12-16 13:30:08.749198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.262 [2024-12-16 13:30:08.749210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:54.262 [2024-12-16 13:30:08.749217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:25:54.262 [2024-12-16 13:30:08.749225] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:25:54.262 [2024-12-16 13:30:08.749244] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:54.262 [2024-12-16 13:30:08.752643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.262 [2024-12-16 13:30:08.752669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:54.262 [2024-12-16 13:30:08.752679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.403 ms 00:25:54.262 [2024-12-16 13:30:08.752686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.262 [2024-12-16 13:30:08.752711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.262 [2024-12-16 13:30:08.752718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:54.262 [2024-12-16 13:30:08.752726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:54.262 [2024-12-16 13:30:08.752732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.262 [2024-12-16 13:30:08.752748] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:25:54.262 [2024-12-16 13:30:08.752843] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:25:54.262 [2024-12-16 13:30:08.752857] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:54.262 [2024-12-16 13:30:08.752865] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:25:54.262 [2024-12-16 13:30:08.752875] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:54.262 [2024-12-16 13:30:08.752883] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:54.262 [2024-12-16 13:30:08.752892] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:54.262 [2024-12-16 13:30:08.752899] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:54.262 [2024-12-16 13:30:08.752907] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:25:54.262 [2024-12-16 13:30:08.752913] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:25:54.262 [2024-12-16 13:30:08.752920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.262 [2024-12-16 13:30:08.752931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:54.262 [2024-12-16 13:30:08.752938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.173 ms 00:25:54.262 [2024-12-16 13:30:08.752944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.262 [2024-12-16 13:30:08.752994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.262 [2024-12-16 13:30:08.753000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:54.262 [2024-12-16 13:30:08.753008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:25:54.262 [2024-12-16 13:30:08.753015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.262 [2024-12-16 13:30:08.753073] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:54.262 [2024-12-16 13:30:08.753081] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:54.262 [2024-12-16 
13:30:08.753089] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:54.262 [2024-12-16 13:30:08.753095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:54.262 [2024-12-16 13:30:08.753102] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:54.262 [2024-12-16 13:30:08.753107] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:54.262 [2024-12-16 13:30:08.753114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:54.262 [2024-12-16 13:30:08.753119] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:54.262 [2024-12-16 13:30:08.753126] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:54.262 [2024-12-16 13:30:08.753131] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:54.262 [2024-12-16 13:30:08.753137] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:54.262 [2024-12-16 13:30:08.753143] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:54.262 [2024-12-16 13:30:08.753151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:54.262 [2024-12-16 13:30:08.753156] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:54.262 [2024-12-16 13:30:08.753164] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:25:54.262 [2024-12-16 13:30:08.753170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:54.262 [2024-12-16 13:30:08.753178] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:54.262 [2024-12-16 13:30:08.753184] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:25:54.262 [2024-12-16 13:30:08.753191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:54.262 [2024-12-16 13:30:08.753196] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:25:54.262 [2024-12-16 13:30:08.753202] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:25:54.262 [2024-12-16 13:30:08.753207] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:25:54.262 [2024-12-16 13:30:08.753214] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:54.262 [2024-12-16 13:30:08.753219] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:54.262 [2024-12-16 13:30:08.753225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:54.262 [2024-12-16 13:30:08.753230] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:54.262 [2024-12-16 13:30:08.753236] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:25:54.262 [2024-12-16 13:30:08.753241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:54.262 [2024-12-16 13:30:08.753247] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:54.262 [2024-12-16 13:30:08.753252] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:54.262 [2024-12-16 13:30:08.753258] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:54.262 [2024-12-16 13:30:08.753263] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:54.262 [2024-12-16 13:30:08.753271] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:25:54.262 [2024-12-16 13:30:08.753275] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:54.262 [2024-12-16 
13:30:08.753282] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:54.262 [2024-12-16 13:30:08.753287] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:54.262 [2024-12-16 13:30:08.753293] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:54.262 [2024-12-16 13:30:08.753297] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:54.262 [2024-12-16 13:30:08.753305] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:25:54.262 [2024-12-16 13:30:08.753309] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:54.262 [2024-12-16 13:30:08.753315] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:25:54.262 [2024-12-16 13:30:08.753321] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:54.262 [2024-12-16 13:30:08.753328] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:54.262 [2024-12-16 13:30:08.753334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:54.262 [2024-12-16 13:30:08.753343] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:54.262 [2024-12-16 13:30:08.753348] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:54.262 [2024-12-16 13:30:08.753356] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:54.262 [2024-12-16 13:30:08.753362] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:54.262 [2024-12-16 13:30:08.753370] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:54.262 [2024-12-16 13:30:08.753375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:54.262 [2024-12-16 13:30:08.753383] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:54.262 [2024-12-16 13:30:08.753391] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:54.262 [2024-12-16 13:30:08.753399] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:54.262 [2024-12-16 13:30:08.753404] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:25:54.262 [2024-12-16 13:30:08.753411] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:25:54.262 [2024-12-16 13:30:08.753416] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:25:54.262 [2024-12-16 13:30:08.753423] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:25:54.262 [2024-12-16 13:30:08.753429] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:25:54.262 [2024-12-16 13:30:08.753436] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:25:54.262 [2024-12-16 13:30:08.753442] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:25:54.262 [2024-12-16 13:30:08.753449] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:25:54.262 [2024-12-16 13:30:08.753454] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:25:54.262 [2024-12-16 13:30:08.753461] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:25:54.262 [2024-12-16 13:30:08.753466] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:25:54.262 [2024-12-16 13:30:08.753476] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:25:54.262 [2024-12-16 13:30:08.753481] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:54.262 [2024-12-16 13:30:08.753489] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:54.262 [2024-12-16 13:30:08.753495] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:54.262 [2024-12-16 13:30:08.753502] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:54.262 [2024-12-16 13:30:08.753507] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:54.262 [2024-12-16 13:30:08.753514] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:54.263 [2024-12-16 13:30:08.753520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.263 [2024-12-16 13:30:08.753527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:54.263 [2024-12-16 13:30:08.753533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.484 ms 00:25:54.263 [2024-12-16 13:30:08.753540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.263 [2024-12-16 13:30:08.767210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.263 [2024-12-16 13:30:08.767242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:54.263 [2024-12-16 13:30:08.767252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.627 ms 00:25:54.263 [2024-12-16 13:30:08.767260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.263 [2024-12-16 13:30:08.767297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.263 [2024-12-16 13:30:08.767308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:54.263 [2024-12-16 13:30:08.767318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:25:54.263 [2024-12-16 13:30:08.767326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.263 [2024-12-16 13:30:08.793852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.263 [2024-12-16 13:30:08.793883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:54.263 [2024-12-16 13:30:08.793892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.491 ms 00:25:54.263 [2024-12-16 
13:30:08.793901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.263 [2024-12-16 13:30:08.793927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.263 [2024-12-16 13:30:08.793935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:54.263 [2024-12-16 13:30:08.793942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:54.263 [2024-12-16 13:30:08.793950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.263 [2024-12-16 13:30:08.794350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.263 [2024-12-16 13:30:08.794371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:54.263 [2024-12-16 13:30:08.794379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.363 ms 00:25:54.263 [2024-12-16 13:30:08.794387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.263 [2024-12-16 13:30:08.794423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.263 [2024-12-16 13:30:08.794433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:54.263 [2024-12-16 13:30:08.794439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:25:54.263 [2024-12-16 13:30:08.794446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.263 [2024-12-16 13:30:08.808340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.263 [2024-12-16 13:30:08.808445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:54.263 [2024-12-16 13:30:08.808458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.877 ms 00:25:54.263 [2024-12-16 13:30:08.808466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.263 [2024-12-16 13:30:08.818574] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:54.263 [2024-12-16 13:30:08.819532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.263 [2024-12-16 13:30:08.819556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:54.263 [2024-12-16 13:30:08.819566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.999 ms 00:25:54.263 [2024-12-16 13:30:08.819572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.524 [2024-12-16 13:30:08.844028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.524 [2024-12-16 13:30:08.844058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:25:54.524 [2024-12-16 13:30:08.844070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.433 ms 00:25:54.524 [2024-12-16 13:30:08.844077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.524 [2024-12-16 13:30:08.844112] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 
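Everything in this startup trace, including the 4 GiB NV-cache scrub that continues below, stems from the single bdev_ftl_create RPC issued earlier. Restated on its own, with the UUID and bdev names copied from the log (the -t 60 timeout exists to cover the first-startup scrub):

# Build the FTL bdev: thin-provisioned lvol as base device, cache split as
# NV cache, L2P table capped at 2 MiB of resident DRAM.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create \
    -b ftl \
    -d 98366fea-ab67-429f-92da-6cd10fdb84a0 \
    -c cachen1p0 \
    --l2p_dram_limit 2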
00:25:54.524 [2024-12-16 13:30:08.844121] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:25:58.731 [2024-12-16 13:30:12.491503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:58.731 [2024-12-16 13:30:12.491888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:58.731 [2024-12-16 13:30:12.491927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3647.358 ms 00:25:58.731 [2024-12-16 13:30:12.491937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:58.731 [2024-12-16 13:30:12.492068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:58.731 [2024-12-16 13:30:12.492078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:58.731 [2024-12-16 13:30:12.492094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.076 ms 00:25:58.731 [2024-12-16 13:30:12.492101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:58.731 [2024-12-16 13:30:12.512622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:58.731 [2024-12-16 13:30:12.512676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:25:58.731 [2024-12-16 13:30:12.512691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 20.467 ms 00:25:58.731 [2024-12-16 13:30:12.512699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:58.731 [2024-12-16 13:30:12.531895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:58.731 [2024-12-16 13:30:12.531933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:25:58.731 [2024-12-16 13:30:12.531949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 19.147 ms 00:25:58.731 [2024-12-16 13:30:12.531955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:58.731 [2024-12-16 13:30:12.532249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:58.731 [2024-12-16 13:30:12.532258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:58.731 [2024-12-16 13:30:12.532268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.258 ms 00:25:58.731 [2024-12-16 13:30:12.532275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:58.731 [2024-12-16 13:30:12.587023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:58.731 [2024-12-16 13:30:12.587145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:25:58.731 [2024-12-16 13:30:12.587165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 54.704 ms 00:25:58.731 [2024-12-16 13:30:12.587172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:58.731 [2024-12-16 13:30:12.606675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:58.731 [2024-12-16 13:30:12.606706] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:25:58.731 [2024-12-16 13:30:12.606717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 19.470 ms 00:25:58.731 [2024-12-16 13:30:12.606725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:58.731 [2024-12-16 13:30:12.607842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:58.731 [2024-12-16 13:30:12.607869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:25:58.731 [2024-12-16 13:30:12.607880] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.077 ms 00:25:58.731 [2024-12-16 13:30:12.607886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:58.731 [2024-12-16 13:30:12.626252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:58.731 [2024-12-16 13:30:12.626356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:58.731 [2024-12-16 13:30:12.626372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.335 ms 00:25:58.731 [2024-12-16 13:30:12.626378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:58.731 [2024-12-16 13:30:12.626409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:58.731 [2024-12-16 13:30:12.626417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:58.731 [2024-12-16 13:30:12.626425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:58.731 [2024-12-16 13:30:12.626431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:58.731 [2024-12-16 13:30:12.626510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:58.731 [2024-12-16 13:30:12.626518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:58.731 [2024-12-16 13:30:12.626526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:25:58.731 [2024-12-16 13:30:12.626532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:58.731 [2024-12-16 13:30:12.627402] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3887.284 ms, result 0 00:25:58.731 { 00:25:58.731 "name": "ftl", 00:25:58.731 "uuid": "26f5be5f-0c91-41c5-8f7e-90baa067dba4" 00:25:58.731 } 00:25:58.731 13:30:12 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:25:58.731 [2024-12-16 13:30:12.766717] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.731 13:30:12 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:25:58.731 13:30:12 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:25:58.731 [2024-12-16 13:30:13.143088] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:25:58.731 13:30:13 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:25:58.992 [2024-12-16 13:30:13.331366] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:58.992 13:30:13 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:59.253 Fill FTL, iteration 1 00:25:59.253 13:30:13 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:25:59.253 13:30:13 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:25:59.253 13:30:13 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:25:59.253 13:30:13 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:25:59.253 13:30:13 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:25:59.253 13:30:13 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:25:59.253 13:30:13 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:25:59.253 13:30:13 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:25:59.253 13:30:13 -- ftl/upgrade_shutdown.sh@38 -- # (( 
i = 0 )) 00:25:59.253 13:30:13 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:59.253 13:30:13 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:25:59.253 13:30:13 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:59.253 13:30:13 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:59.253 13:30:13 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:59.253 13:30:13 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:59.253 13:30:13 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:25:59.253 13:30:13 -- ftl/common.sh@163 -- # spdk_ini_pid=78428 00:25:59.253 13:30:13 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:25:59.253 13:30:13 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:25:59.253 13:30:13 -- ftl/common.sh@165 -- # waitforlisten 78428 /var/tmp/spdk.tgt.sock 00:25:59.253 13:30:13 -- common/autotest_common.sh@829 -- # '[' -z 78428 ']' 00:25:59.253 13:30:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:25:59.253 13:30:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:59.253 13:30:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:25:59.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:25:59.253 13:30:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:59.253 13:30:13 -- common/autotest_common.sh@10 -- # set +x 00:25:59.253 [2024-12-16 13:30:13.665074] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
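Before the fill loop starts writing, the freshly created ftl bdev was exported over NVMe/TCP through the nvmf RPCs traced above. Collected into one sequence (subsystem NQN and port copied from the log; the redirect of save_config into tgt.json is an assumption about where the script stores it):

# Export bdev 'ftl' as namespace 1 of an NVMe/TCP subsystem listening on 4420.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport --trtype TCP
$rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
$rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
$rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 \
    -t TCP -f ipv4 -s 4420 -a 127.0.0.1
$rpc save_config > /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json  # assumed destination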
00:25:59.253 [2024-12-16 13:30:13.665373] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78428 ] 00:25:59.253 [2024-12-16 13:30:13.815777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.514 [2024-12-16 13:30:14.021643] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:59.514 [2024-12-16 13:30:14.021951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.897 13:30:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:00.897 13:30:15 -- common/autotest_common.sh@862 -- # return 0 00:26:00.897 13:30:15 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:00.897 ftln1 00:26:00.897 13:30:15 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:00.897 13:30:15 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:01.157 13:30:15 -- ftl/common.sh@173 -- # echo ']}' 00:26:01.157 13:30:15 -- ftl/common.sh@176 -- # killprocess 78428 00:26:01.157 13:30:15 -- common/autotest_common.sh@936 -- # '[' -z 78428 ']' 00:26:01.157 13:30:15 -- common/autotest_common.sh@940 -- # kill -0 78428 00:26:01.157 13:30:15 -- common/autotest_common.sh@941 -- # uname 00:26:01.157 13:30:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:01.157 13:30:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78428 00:26:01.157 killing process with pid 78428 00:26:01.157 13:30:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:01.157 13:30:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:01.157 13:30:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78428' 00:26:01.157 13:30:15 -- common/autotest_common.sh@955 -- # kill 78428 00:26:01.157 13:30:15 -- common/autotest_common.sh@960 -- # wait 78428 00:26:02.542 13:30:16 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:02.542 13:30:16 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:02.542 [2024-12-16 13:30:16.859720] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
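On the initiator side, a second spdk_tgt (core 1, with its own RPC socket) connects to that subsystem, and the remote namespace surfaces as bdev ftln1; the bdev subsystem config is then snapshotted into ini.json so spdk_dd can load it. A sketch of those two steps, assuming the socket path shown in the log:

rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
# Connect over TCP; the subsystem's namespace appears as bdev ftln1.
$rpc bdev_nvme_attach_controller -b ftl -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
# Wrap the bdev config in a subsystems array for spdk_dd's --json option,
# mirroring the echo '{"subsystems": [' ... ']}' lines in the trace.
{
    echo '{"subsystems": ['
    $rpc save_subsystem_config -n bdev
    echo ']}'
} > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json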
00:26:02.542 [2024-12-16 13:30:16.859837] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78477 ] 00:26:02.542 [2024-12-16 13:30:17.009011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.803 [2024-12-16 13:30:17.182387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.188  [2024-12-16T13:30:19.705Z] Copying: 250/1024 [MB] (250 MBps) [2024-12-16T13:30:20.648Z] Copying: 500/1024 [MB] (250 MBps) [2024-12-16T13:30:21.592Z] Copying: 747/1024 [MB] (247 MBps) [2024-12-16T13:30:21.853Z] Copying: 995/1024 [MB] (248 MBps) [2024-12-16T13:30:22.425Z] Copying: 1024/1024 [MB] (average 247 MBps) 00:26:07.851 00:26:07.851 Calculate MD5 checksum, iteration 1 00:26:07.851 13:30:22 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:26:07.851 13:30:22 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:26:07.851 13:30:22 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:07.851 13:30:22 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:07.851 13:30:22 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:07.851 13:30:22 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:07.851 13:30:22 -- ftl/common.sh@154 -- # return 0 00:26:07.851 13:30:22 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:07.851 [2024-12-16 13:30:22.402275] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
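Each fill pass writes 1024 one-MiB blocks (1 GiB) of urandom data into ftln1, and each checksum pass reads the same region back into a scratch file; --seek and --skip are counted in blocks, so advancing both by 1024 per iteration moves the test to the next gigabyte. The two data movers, sketched with the tcp_dd helper expanded to the underlying spdk_dd invocation (seek/skip left as variables):

dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
file=/home/vagrant/spdk_repo/spdk/test/ftl/file
# Fill: 1024 x 1 MiB random blocks at block offset $seek, queue depth 2.
$dd_bin '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json="$cfg" \
    --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek="$seek"
# Readback: pull the same region into the scratch file for checksumming.
$dd_bin '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json="$cfg" \
    --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"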
00:26:07.851 [2024-12-16 13:30:22.402381] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78541 ] 00:26:08.113 [2024-12-16 13:30:22.549943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.374 [2024-12-16 13:30:22.718164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.760  [2024-12-16T13:30:24.596Z] Copying: 670/1024 [MB] (670 MBps) [2024-12-16T13:30:25.540Z] Copying: 1024/1024 [MB] (average 655 MBps) 00:26:10.966 00:26:10.966 13:30:25 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:26:10.966 13:30:25 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:12.966 13:30:27 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:12.966 Fill FTL, iteration 2 00:26:12.966 13:30:27 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=41c14f6f1cd9865238184614ac2cbfdb 00:26:12.966 13:30:27 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:12.966 13:30:27 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:12.966 13:30:27 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:26:12.966 13:30:27 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:12.966 13:30:27 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:12.966 13:30:27 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:12.966 13:30:27 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:12.966 13:30:27 -- ftl/common.sh@154 -- # return 0 00:26:12.966 13:30:27 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:12.966 [2024-12-16 13:30:27.390508] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
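After each readback the digest is captured into an indexed array, as the sums[i]= assignment above shows; presumably the test's later verification stage re-reads the same regions after the shutdown cycle and compares against these saved values. The capture step in isolation:

file=/home/vagrant/spdk_repo/spdk/test/ftl/file
# md5sum prints '<digest>  <path>'; keep only the digest field.
sums[i]=$(md5sum "$file" | cut -f1 -d' ')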
00:26:12.966 [2024-12-16 13:30:27.390619] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78598 ] 00:26:13.228 [2024-12-16 13:30:27.539909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.228 [2024-12-16 13:30:27.747879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.616  [2024-12-16T13:30:30.138Z] Copying: 196/1024 [MB] (196 MBps) [2024-12-16T13:30:31.528Z] Copying: 443/1024 [MB] (247 MBps) [2024-12-16T13:30:32.471Z] Copying: 692/1024 [MB] (249 MBps) [2024-12-16T13:30:32.471Z] Copying: 944/1024 [MB] (252 MBps) [2024-12-16T13:30:33.416Z] Copying: 1024/1024 [MB] (average 236 MBps) 00:26:18.842 00:26:18.842 13:30:33 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:26:18.842 13:30:33 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:26:18.842 Calculate MD5 checksum, iteration 2 00:26:18.842 13:30:33 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:18.842 13:30:33 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:18.842 13:30:33 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:18.842 13:30:33 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:18.842 13:30:33 -- ftl/common.sh@154 -- # return 0 00:26:18.842 13:30:33 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:18.842 [2024-12-16 13:30:33.201167] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:18.842 [2024-12-16 13:30:33.201447] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78656 ] 00:26:18.842 [2024-12-16 13:30:33.353181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.103 [2024-12-16 13:30:33.530252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.492  [2024-12-16T13:30:35.638Z] Copying: 637/1024 [MB] (637 MBps) [2024-12-16T13:30:37.025Z] Copying: 1024/1024 [MB] (average 637 MBps) 00:26:22.451 00:26:22.451 13:30:36 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:26:22.451 13:30:36 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:24.355 13:30:38 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:24.355 13:30:38 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=a79dde2d557a426133949a27678f6ddb 00:26:24.355 13:30:38 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:24.355 13:30:38 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:24.355 13:30:38 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:24.355 [2024-12-16 13:30:38.856812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.355 [2024-12-16 13:30:38.856853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:24.355 [2024-12-16 13:30:38.856864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:24.355 [2024-12-16 13:30:38.856873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.355 [2024-12-16 13:30:38.856893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.355 [2024-12-16 13:30:38.856900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:24.355 [2024-12-16 13:30:38.856906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:24.355 [2024-12-16 13:30:38.856912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.355 [2024-12-16 13:30:38.856927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.355 [2024-12-16 13:30:38.856933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:24.355 [2024-12-16 13:30:38.856945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:24.355 [2024-12-16 13:30:38.856950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.355 [2024-12-16 13:30:38.856999] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.179 ms, result 0 00:26:24.355 true 00:26:24.355 13:30:38 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:24.614 { 00:26:24.614 "name": "ftl", 00:26:24.614 "properties": [ 00:26:24.614 { 00:26:24.614 "name": "superblock_version", 00:26:24.614 "value": 5, 00:26:24.614 "read-only": true 00:26:24.614 }, 00:26:24.614 { 00:26:24.614 "name": "base_device", 00:26:24.614 "bands": [ 00:26:24.614 { 00:26:24.614 "id": 0, 00:26:24.614 "state": "FREE", 00:26:24.614 "validity": 0.0 00:26:24.614 }, 00:26:24.614 { 00:26:24.614 "id": 1, 00:26:24.614 "state": "FREE", 00:26:24.614 "validity": 0.0 00:26:24.614 }, 00:26:24.614 { 00:26:24.615 "id": 2, 00:26:24.615 "state": "FREE", 00:26:24.615 "validity": 0.0 
00:26:24.615 }, 00:26:24.615 { 00:26:24.615 "id": 3, 00:26:24.615 "state": "FREE", 00:26:24.615 "validity": 0.0 00:26:24.615 }, 00:26:24.615 { 00:26:24.615 "id": 4, 00:26:24.615 "state": "FREE", 00:26:24.615 "validity": 0.0 00:26:24.615 }, 00:26:24.615 { 00:26:24.615 "id": 5, 00:26:24.615 "state": "FREE", 00:26:24.615 "validity": 0.0 00:26:24.615 }, 00:26:24.615 { 00:26:24.615 "id": 6, 00:26:24.615 "state": "FREE", 00:26:24.615 "validity": 0.0 00:26:24.615 }, 00:26:24.615 { 00:26:24.615 "id": 7, 00:26:24.615 "state": "FREE", 00:26:24.615 "validity": 0.0 00:26:24.615 }, 00:26:24.615 { 00:26:24.615 "id": 8, 00:26:24.615 "state": "FREE", 00:26:24.615 "validity": 0.0 00:26:24.615 }, 00:26:24.615 { 00:26:24.615 "id": 9, 00:26:24.615 "state": "FREE", 00:26:24.615 "validity": 0.0 00:26:24.615 }, 00:26:24.615 { 00:26:24.615 "id": 10, 00:26:24.615 "state": "FREE", 00:26:24.615 "validity": 0.0 00:26:24.615 }, 00:26:24.615 { 00:26:24.615 "id": 11, 00:26:24.615 "state": "FREE", 00:26:24.615 "validity": 0.0 00:26:24.615 }, 00:26:24.615 { 00:26:24.615 "id": 12, 00:26:24.615 "state": "FREE", 00:26:24.615 "validity": 0.0 00:26:24.615 }, 00:26:24.615 { 00:26:24.615 "id": 13, 00:26:24.615 "state": "FREE", 00:26:24.615 "validity": 0.0 00:26:24.615 }, 00:26:24.615 { 00:26:24.615 "id": 14, 00:26:24.615 "state": "FREE", 00:26:24.615 "validity": 0.0 00:26:24.615 }, 00:26:24.615 { 00:26:24.615 "id": 15, 00:26:24.615 "state": "FREE", 00:26:24.615 "validity": 0.0 00:26:24.615 }, 00:26:24.615 { 00:26:24.615 "id": 16, 00:26:24.615 "state": "FREE", 00:26:24.615 "validity": 0.0 00:26:24.615 }, 00:26:24.615 { 00:26:24.615 "id": 17, 00:26:24.615 "state": "FREE", 00:26:24.615 "validity": 0.0 00:26:24.615 } 00:26:24.615 ], 00:26:24.615 "read-only": true 00:26:24.615 }, 00:26:24.615 { 00:26:24.615 "name": "cache_device", 00:26:24.615 "type": "bdev", 00:26:24.615 "chunks": [ 00:26:24.615 { 00:26:24.615 "id": 0, 00:26:24.615 "state": "CLOSED", 00:26:24.615 "utilization": 1.0 00:26:24.615 }, 00:26:24.615 { 00:26:24.615 "id": 1, 00:26:24.615 "state": "CLOSED", 00:26:24.615 "utilization": 1.0 00:26:24.615 }, 00:26:24.615 { 00:26:24.615 "id": 2, 00:26:24.615 "state": "OPEN", 00:26:24.615 "utilization": 0.001953125 00:26:24.615 }, 00:26:24.615 { 00:26:24.615 "id": 3, 00:26:24.615 "state": "OPEN", 00:26:24.615 "utilization": 0.0 00:26:24.615 } 00:26:24.615 ], 00:26:24.615 "read-only": true 00:26:24.615 }, 00:26:24.615 { 00:26:24.615 "name": "verbose_mode", 00:26:24.615 "value": true, 00:26:24.615 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:24.615 }, 00:26:24.615 { 00:26:24.615 "name": "prep_upgrade_on_shutdown", 00:26:24.615 "value": false, 00:26:24.615 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:24.615 } 00:26:24.615 ] 00:26:24.615 } 00:26:24.615 13:30:39 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:26:24.874 [2024-12-16 13:30:39.245112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.874 [2024-12-16 13:30:39.245143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:24.874 [2024-12-16 13:30:39.245152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:24.874 [2024-12-16 13:30:39.245157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.874 [2024-12-16 13:30:39.245173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:26:24.874 [2024-12-16 13:30:39.245178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:24.874 [2024-12-16 13:30:39.245184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:24.874 [2024-12-16 13:30:39.245190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.874 [2024-12-16 13:30:39.245205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.874 [2024-12-16 13:30:39.245211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:24.874 [2024-12-16 13:30:39.245216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:24.874 [2024-12-16 13:30:39.245221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.874 [2024-12-16 13:30:39.245261] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.141 ms, result 0 00:26:24.874 true 00:26:24.874 13:30:39 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:26:24.874 13:30:39 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:24.874 13:30:39 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:25.133 13:30:39 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:26:25.133 13:30:39 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:26:25.133 13:30:39 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:25.133 [2024-12-16 13:30:39.633424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.133 [2024-12-16 13:30:39.633455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:25.133 [2024-12-16 13:30:39.633463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:25.133 [2024-12-16 13:30:39.633468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.133 [2024-12-16 13:30:39.633485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.133 [2024-12-16 13:30:39.633491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:25.133 [2024-12-16 13:30:39.633496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:25.133 [2024-12-16 13:30:39.633502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.133 [2024-12-16 13:30:39.633516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.133 [2024-12-16 13:30:39.633522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:25.133 [2024-12-16 13:30:39.633527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:25.133 [2024-12-16 13:30:39.633532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.133 [2024-12-16 13:30:39.633572] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.137 ms, result 0 00:26:25.133 true 00:26:25.133 13:30:39 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:25.392 { 00:26:25.392 "name": "ftl", 00:26:25.392 "properties": [ 00:26:25.392 { 00:26:25.392 "name": "superblock_version", 00:26:25.392 "value": 5, 00:26:25.392 "read-only": true 00:26:25.392 }, 00:26:25.392 { 00:26:25.392 
"name": "base_device", 00:26:25.392 "bands": [ 00:26:25.392 { 00:26:25.392 "id": 0, 00:26:25.392 "state": "FREE", 00:26:25.392 "validity": 0.0 00:26:25.392 }, 00:26:25.392 { 00:26:25.392 "id": 1, 00:26:25.392 "state": "FREE", 00:26:25.392 "validity": 0.0 00:26:25.392 }, 00:26:25.392 { 00:26:25.392 "id": 2, 00:26:25.392 "state": "FREE", 00:26:25.392 "validity": 0.0 00:26:25.392 }, 00:26:25.392 { 00:26:25.392 "id": 3, 00:26:25.392 "state": "FREE", 00:26:25.392 "validity": 0.0 00:26:25.392 }, 00:26:25.392 { 00:26:25.392 "id": 4, 00:26:25.392 "state": "FREE", 00:26:25.392 "validity": 0.0 00:26:25.392 }, 00:26:25.392 { 00:26:25.393 "id": 5, 00:26:25.393 "state": "FREE", 00:26:25.393 "validity": 0.0 00:26:25.393 }, 00:26:25.393 { 00:26:25.393 "id": 6, 00:26:25.393 "state": "FREE", 00:26:25.393 "validity": 0.0 00:26:25.393 }, 00:26:25.393 { 00:26:25.393 "id": 7, 00:26:25.393 "state": "FREE", 00:26:25.393 "validity": 0.0 00:26:25.393 }, 00:26:25.393 { 00:26:25.393 "id": 8, 00:26:25.393 "state": "FREE", 00:26:25.393 "validity": 0.0 00:26:25.393 }, 00:26:25.393 { 00:26:25.393 "id": 9, 00:26:25.393 "state": "FREE", 00:26:25.393 "validity": 0.0 00:26:25.393 }, 00:26:25.393 { 00:26:25.393 "id": 10, 00:26:25.393 "state": "FREE", 00:26:25.393 "validity": 0.0 00:26:25.393 }, 00:26:25.393 { 00:26:25.393 "id": 11, 00:26:25.393 "state": "FREE", 00:26:25.393 "validity": 0.0 00:26:25.393 }, 00:26:25.393 { 00:26:25.393 "id": 12, 00:26:25.393 "state": "FREE", 00:26:25.393 "validity": 0.0 00:26:25.393 }, 00:26:25.393 { 00:26:25.393 "id": 13, 00:26:25.393 "state": "FREE", 00:26:25.393 "validity": 0.0 00:26:25.393 }, 00:26:25.393 { 00:26:25.393 "id": 14, 00:26:25.393 "state": "FREE", 00:26:25.393 "validity": 0.0 00:26:25.393 }, 00:26:25.393 { 00:26:25.393 "id": 15, 00:26:25.393 "state": "FREE", 00:26:25.393 "validity": 0.0 00:26:25.393 }, 00:26:25.393 { 00:26:25.393 "id": 16, 00:26:25.393 "state": "FREE", 00:26:25.393 "validity": 0.0 00:26:25.393 }, 00:26:25.393 { 00:26:25.393 "id": 17, 00:26:25.393 "state": "FREE", 00:26:25.393 "validity": 0.0 00:26:25.393 } 00:26:25.393 ], 00:26:25.393 "read-only": true 00:26:25.393 }, 00:26:25.393 { 00:26:25.393 "name": "cache_device", 00:26:25.393 "type": "bdev", 00:26:25.393 "chunks": [ 00:26:25.393 { 00:26:25.393 "id": 0, 00:26:25.393 "state": "CLOSED", 00:26:25.393 "utilization": 1.0 00:26:25.393 }, 00:26:25.393 { 00:26:25.393 "id": 1, 00:26:25.393 "state": "CLOSED", 00:26:25.393 "utilization": 1.0 00:26:25.393 }, 00:26:25.393 { 00:26:25.393 "id": 2, 00:26:25.393 "state": "OPEN", 00:26:25.393 "utilization": 0.001953125 00:26:25.393 }, 00:26:25.393 { 00:26:25.393 "id": 3, 00:26:25.393 "state": "OPEN", 00:26:25.393 "utilization": 0.0 00:26:25.393 } 00:26:25.393 ], 00:26:25.393 "read-only": true 00:26:25.393 }, 00:26:25.393 { 00:26:25.393 "name": "verbose_mode", 00:26:25.393 "value": true, 00:26:25.393 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:25.393 }, 00:26:25.393 { 00:26:25.393 "name": "prep_upgrade_on_shutdown", 00:26:25.393 "value": true, 00:26:25.393 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:25.393 } 00:26:25.393 ] 00:26:25.393 } 00:26:25.393 13:30:39 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:26:25.393 13:30:39 -- ftl/common.sh@130 -- # [[ -n 78304 ]] 00:26:25.393 13:30:39 -- ftl/common.sh@131 -- # killprocess 78304 00:26:25.393 13:30:39 -- common/autotest_common.sh@936 -- # '[' -z 78304 ']' 00:26:25.393 13:30:39 -- 
common/autotest_common.sh@940 -- # kill -0 78304 00:26:25.393 13:30:39 -- common/autotest_common.sh@941 -- # uname 00:26:25.393 13:30:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:25.393 13:30:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78304 00:26:25.393 killing process with pid 78304 00:26:25.393 13:30:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:25.393 13:30:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:25.393 13:30:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78304' 00:26:25.393 13:30:39 -- common/autotest_common.sh@955 -- # kill 78304 00:26:25.393 13:30:39 -- common/autotest_common.sh@960 -- # wait 78304 00:26:25.961 [2024-12-16 13:30:40.383700] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:26:25.961 [2024-12-16 13:30:40.395919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.961 [2024-12-16 13:30:40.395954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:25.961 [2024-12-16 13:30:40.395964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:25.961 [2024-12-16 13:30:40.395970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.961 [2024-12-16 13:30:40.395987] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:25.961 [2024-12-16 13:30:40.398094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.961 [2024-12-16 13:30:40.398120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:25.961 [2024-12-16 13:30:40.398128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.096 ms 00:26:25.961 [2024-12-16 13:30:40.398135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.102 [2024-12-16 13:30:48.531429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.102 [2024-12-16 13:30:48.531475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:34.102 [2024-12-16 13:30:48.531487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8133.246 ms 00:26:34.102 [2024-12-16 13:30:48.531494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.102 [2024-12-16 13:30:48.532644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.102 [2024-12-16 13:30:48.532658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:34.102 [2024-12-16 13:30:48.532665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.134 ms 00:26:34.102 [2024-12-16 13:30:48.532671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.102 [2024-12-16 13:30:48.533521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.102 [2024-12-16 13:30:48.533637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:26:34.102 [2024-12-16 13:30:48.533650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.830 ms 00:26:34.102 [2024-12-16 13:30:48.533657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.102 [2024-12-16 13:30:48.541505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.102 [2024-12-16 13:30:48.541598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:34.102 [2024-12-16 13:30:48.541609] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.812 ms 00:26:34.102 [2024-12-16 13:30:48.541615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.102 [2024-12-16 13:30:48.546896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.102 [2024-12-16 13:30:48.546922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:34.102 [2024-12-16 13:30:48.546931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.247 ms 00:26:34.102 [2024-12-16 13:30:48.546938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.102 [2024-12-16 13:30:48.546991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.102 [2024-12-16 13:30:48.546998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:34.102 [2024-12-16 13:30:48.547004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:26:34.102 [2024-12-16 13:30:48.547021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.102 [2024-12-16 13:30:48.554374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.102 [2024-12-16 13:30:48.554398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:26:34.102 [2024-12-16 13:30:48.554405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.341 ms 00:26:34.102 [2024-12-16 13:30:48.554410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.102 [2024-12-16 13:30:48.561879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.102 [2024-12-16 13:30:48.561902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:26:34.102 [2024-12-16 13:30:48.561908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.445 ms 00:26:34.102 [2024-12-16 13:30:48.561913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.102 [2024-12-16 13:30:48.569292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.102 [2024-12-16 13:30:48.569322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:34.102 [2024-12-16 13:30:48.569328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.355 ms 00:26:34.102 [2024-12-16 13:30:48.569334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.102 [2024-12-16 13:30:48.576607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.103 [2024-12-16 13:30:48.576707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:34.103 [2024-12-16 13:30:48.576719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.222 ms 00:26:34.103 [2024-12-16 13:30:48.576724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.103 [2024-12-16 13:30:48.576744] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:34.103 [2024-12-16 13:30:48.576754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:34.103 [2024-12-16 13:30:48.576762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:34.103 [2024-12-16 13:30:48.576768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:34.103 [2024-12-16 13:30:48.576774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 
wr_cnt: 0 state: free 00:26:34.103 [2024-12-16 13:30:48.576780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:34.103 [2024-12-16 13:30:48.576785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:34.103 [2024-12-16 13:30:48.576791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:34.103 [2024-12-16 13:30:48.576796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:34.103 [2024-12-16 13:30:48.576802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:34.103 [2024-12-16 13:30:48.576808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:34.103 [2024-12-16 13:30:48.576813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:34.103 [2024-12-16 13:30:48.576819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:34.103 [2024-12-16 13:30:48.576824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:34.103 [2024-12-16 13:30:48.576830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:34.103 [2024-12-16 13:30:48.576836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:34.103 [2024-12-16 13:30:48.576848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:34.103 [2024-12-16 13:30:48.576854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:34.103 [2024-12-16 13:30:48.576860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:34.103 [2024-12-16 13:30:48.576867] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:34.103 [2024-12-16 13:30:48.576873] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 26f5be5f-0c91-41c5-8f7e-90baa067dba4 00:26:34.103 [2024-12-16 13:30:48.576879] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:34.103 [2024-12-16 13:30:48.576884] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:26:34.103 [2024-12-16 13:30:48.576890] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:26:34.103 [2024-12-16 13:30:48.576895] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:26:34.103 [2024-12-16 13:30:48.576901] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:34.103 [2024-12-16 13:30:48.576907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:34.103 [2024-12-16 13:30:48.576915] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:34.103 [2024-12-16 13:30:48.576920] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:34.103 [2024-12-16 13:30:48.576925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:34.103 [2024-12-16 13:30:48.576931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.103 [2024-12-16 13:30:48.576938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:34.103 [2024-12-16 13:30:48.576944] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.187 ms 00:26:34.103 [2024-12-16 13:30:48.576950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.103 [2024-12-16 13:30:48.586735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.103 [2024-12-16 13:30:48.586756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:34.103 [2024-12-16 13:30:48.586764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.772 ms 00:26:34.103 [2024-12-16 13:30:48.586770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.103 [2024-12-16 13:30:48.586924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.103 [2024-12-16 13:30:48.586930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:34.103 [2024-12-16 13:30:48.586936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.137 ms 00:26:34.103 [2024-12-16 13:30:48.586941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.103 [2024-12-16 13:30:48.621799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.103 [2024-12-16 13:30:48.621824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:34.103 [2024-12-16 13:30:48.621832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.103 [2024-12-16 13:30:48.621841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.103 [2024-12-16 13:30:48.621864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.103 [2024-12-16 13:30:48.621870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:34.103 [2024-12-16 13:30:48.621875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.103 [2024-12-16 13:30:48.621881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.103 [2024-12-16 13:30:48.621926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.103 [2024-12-16 13:30:48.621933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:34.103 [2024-12-16 13:30:48.621939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.103 [2024-12-16 13:30:48.621944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.103 [2024-12-16 13:30:48.621959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.103 [2024-12-16 13:30:48.621965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:34.103 [2024-12-16 13:30:48.621970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.103 [2024-12-16 13:30:48.621976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.363 [2024-12-16 13:30:48.681175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.363 [2024-12-16 13:30:48.681207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:34.363 [2024-12-16 13:30:48.681217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.363 [2024-12-16 13:30:48.681224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.363 [2024-12-16 13:30:48.703237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.363 [2024-12-16 13:30:48.703261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:34.363 
[2024-12-16 13:30:48.703269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.363 [2024-12-16 13:30:48.703275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.363 [2024-12-16 13:30:48.703315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.363 [2024-12-16 13:30:48.703322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:34.363 [2024-12-16 13:30:48.703329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.363 [2024-12-16 13:30:48.703335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.363 [2024-12-16 13:30:48.703364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.363 [2024-12-16 13:30:48.703374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:34.363 [2024-12-16 13:30:48.703380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.363 [2024-12-16 13:30:48.703386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.363 [2024-12-16 13:30:48.703449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.363 [2024-12-16 13:30:48.703456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:34.363 [2024-12-16 13:30:48.703463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.363 [2024-12-16 13:30:48.703468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.363 [2024-12-16 13:30:48.703490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.363 [2024-12-16 13:30:48.703497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:34.363 [2024-12-16 13:30:48.703505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.363 [2024-12-16 13:30:48.703511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.363 [2024-12-16 13:30:48.703537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.363 [2024-12-16 13:30:48.703543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:34.363 [2024-12-16 13:30:48.703549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.363 [2024-12-16 13:30:48.703555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.363 [2024-12-16 13:30:48.703587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.363 [2024-12-16 13:30:48.703596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:34.363 [2024-12-16 13:30:48.703603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.363 [2024-12-16 13:30:48.703609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.363 [2024-12-16 13:30:48.703714] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8307.748 ms, result 0 00:26:38.569 13:30:52 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:38.569 13:30:52 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:26:38.569 13:30:52 -- ftl/common.sh@81 -- # local base_bdev= 00:26:38.569 13:30:52 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:38.569 13:30:52 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:38.569 13:30:52 -- ftl/common.sh@89 -- # spdk_tgt_pid=78872 00:26:38.569 13:30:52 -- 
ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:38.569 13:30:52 -- ftl/common.sh@91 -- # waitforlisten 78872 00:26:38.569 13:30:52 -- common/autotest_common.sh@829 -- # '[' -z 78872 ']' 00:26:38.569 13:30:52 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:38.569 13:30:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:38.569 13:30:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:38.569 13:30:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:38.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:38.569 13:30:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:38.569 13:30:52 -- common/autotest_common.sh@10 -- # set +x 00:26:38.569 [2024-12-16 13:30:52.804739] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:38.569 [2024-12-16 13:30:52.804901] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78872 ] 00:26:38.569 [2024-12-16 13:30:52.958710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.830 [2024-12-16 13:30:53.148663] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:38.830 [2024-12-16 13:30:53.148877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.403 [2024-12-16 13:30:53.726487] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:39.403 [2024-12-16 13:30:53.726548] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:39.403 [2024-12-16 13:30:53.866823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.403 [2024-12-16 13:30:53.866859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:39.403 [2024-12-16 13:30:53.866870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:39.403 [2024-12-16 13:30:53.866877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.403 [2024-12-16 13:30:53.866917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.403 [2024-12-16 13:30:53.866927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:39.403 [2024-12-16 13:30:53.866935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:26:39.403 [2024-12-16 13:30:53.866941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.403 [2024-12-16 13:30:53.866957] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:39.403 [2024-12-16 13:30:53.867510] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:39.403 [2024-12-16 13:30:53.867522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.403 [2024-12-16 13:30:53.867529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:39.403 [2024-12-16 13:30:53.867537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.569 ms 00:26:39.403 [2024-12-16 13:30:53.867543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:39.403 [2024-12-16 13:30:53.868807] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:39.403 [2024-12-16 13:30:53.878698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.403 [2024-12-16 13:30:53.878835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:39.404 [2024-12-16 13:30:53.878850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.892 ms 00:26:39.404 [2024-12-16 13:30:53.878857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.404 [2024-12-16 13:30:53.878905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.404 [2024-12-16 13:30:53.878913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:39.404 [2024-12-16 13:30:53.878920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:26:39.404 [2024-12-16 13:30:53.878926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.404 [2024-12-16 13:30:53.885106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.404 [2024-12-16 13:30:53.885130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:39.404 [2024-12-16 13:30:53.885138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.130 ms 00:26:39.404 [2024-12-16 13:30:53.885148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.404 [2024-12-16 13:30:53.885179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.404 [2024-12-16 13:30:53.885187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:39.404 [2024-12-16 13:30:53.885194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:39.404 [2024-12-16 13:30:53.885201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.404 [2024-12-16 13:30:53.885237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.404 [2024-12-16 13:30:53.885244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:39.404 [2024-12-16 13:30:53.885251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:39.404 [2024-12-16 13:30:53.885257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.404 [2024-12-16 13:30:53.885281] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:39.404 [2024-12-16 13:30:53.888377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.404 [2024-12-16 13:30:53.888401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:39.404 [2024-12-16 13:30:53.888411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.105 ms 00:26:39.404 [2024-12-16 13:30:53.888418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.404 [2024-12-16 13:30:53.888445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.404 [2024-12-16 13:30:53.888452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:39.404 [2024-12-16 13:30:53.888460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:39.404 [2024-12-16 13:30:53.888466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.404 [2024-12-16 13:30:53.888483] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 
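At this point the test has crossed its central boundary: with prep_upgrade_on_shutdown set to true, the original target (pid 78304) was killed, FTL persisted its L2P, NV-cache, band, trim and superblock metadata on the way down ('FTL shutdown', duration = 8307.748 ms, result 0), and a fresh target (pid 78872) was launched from the saved tgt.json. The trace here is that new process reopening the base bdev and the cachen1p0 write-buffer cache, loading the persisted superblock, and rebuilding the layout. A hedged sketch of the relaunch-and-wait step, with the polling loop standing in for the real waitforlisten helper from common/autotest_common.sh:

    # Restart the SPDK target from the config saved before shutdown
    # (flags as in the xtrace above); then poll the default RPC socket
    # until it answers, as waitforlisten does, before issuing bdev_ftl_* RPCs.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" '--cpumask=[0]' \
        --config="$SPDK/test/ftl/config/tgt.json" &
    spdk_tgt_pid=$!
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done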
00:26:39.404 [2024-12-16 13:30:53.888500] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:26:39.404 [2024-12-16 13:30:53.888527] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:39.404 [2024-12-16 13:30:53.888541] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:26:39.404 [2024-12-16 13:30:53.888601] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:39.404 [2024-12-16 13:30:53.888609] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:39.404 [2024-12-16 13:30:53.888618] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:26:39.404 [2024-12-16 13:30:53.888639] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:39.404 [2024-12-16 13:30:53.888647] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:39.404 [2024-12-16 13:30:53.888654] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:39.404 [2024-12-16 13:30:53.888664] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:39.404 [2024-12-16 13:30:53.888671] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:39.404 [2024-12-16 13:30:53.888679] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:39.404 [2024-12-16 13:30:53.888685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.404 [2024-12-16 13:30:53.888691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:39.404 [2024-12-16 13:30:53.888698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.205 ms 00:26:39.404 [2024-12-16 13:30:53.888705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.404 [2024-12-16 13:30:53.888753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.404 [2024-12-16 13:30:53.888760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:39.404 [2024-12-16 13:30:53.888766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:26:39.404 [2024-12-16 13:30:53.888772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.404 [2024-12-16 13:30:53.888832] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:39.404 [2024-12-16 13:30:53.888840] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:39.404 [2024-12-16 13:30:53.888849] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:39.404 [2024-12-16 13:30:53.888856] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:39.404 [2024-12-16 13:30:53.888863] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:39.404 [2024-12-16 13:30:53.888869] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:39.404 [2024-12-16 13:30:53.888875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:39.404 [2024-12-16 13:30:53.888880] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:39.404 [2024-12-16 13:30:53.888886] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 
MiB 00:26:39.404 [2024-12-16 13:30:53.888892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:39.404 [2024-12-16 13:30:53.888897] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:39.404 [2024-12-16 13:30:53.888903] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:39.404 [2024-12-16 13:30:53.888908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:39.404 [2024-12-16 13:30:53.888914] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:39.404 [2024-12-16 13:30:53.888919] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:39.404 [2024-12-16 13:30:53.888925] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:39.404 [2024-12-16 13:30:53.888931] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:39.404 [2024-12-16 13:30:53.888936] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:39.404 [2024-12-16 13:30:53.888947] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:39.404 [2024-12-16 13:30:53.888953] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:39.404 [2024-12-16 13:30:53.888960] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:39.404 [2024-12-16 13:30:53.888966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:39.404 [2024-12-16 13:30:53.888972] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:39.404 [2024-12-16 13:30:53.888977] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:39.404 [2024-12-16 13:30:53.888983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:39.404 [2024-12-16 13:30:53.888988] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:39.404 [2024-12-16 13:30:53.888994] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:39.404 [2024-12-16 13:30:53.888999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:39.404 [2024-12-16 13:30:53.889005] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:39.404 [2024-12-16 13:30:53.889010] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:39.404 [2024-12-16 13:30:53.889016] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:39.404 [2024-12-16 13:30:53.889022] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:39.404 [2024-12-16 13:30:53.889027] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:39.404 [2024-12-16 13:30:53.889033] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:39.404 [2024-12-16 13:30:53.889038] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:39.404 [2024-12-16 13:30:53.889044] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:39.404 [2024-12-16 13:30:53.889049] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:39.404 [2024-12-16 13:30:53.889055] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:39.404 [2024-12-16 13:30:53.889060] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:39.404 [2024-12-16 13:30:53.889065] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:39.404 [2024-12-16 13:30:53.889070] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:26:39.404 [2024-12-16 13:30:53.889077] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:39.404 [2024-12-16 13:30:53.889084] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:39.404 [2024-12-16 13:30:53.889090] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:39.404 [2024-12-16 13:30:53.889096] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:39.404 [2024-12-16 13:30:53.889102] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:39.404 [2024-12-16 13:30:53.889108] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:39.404 [2024-12-16 13:30:53.889114] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:39.404 [2024-12-16 13:30:53.889120] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:39.404 [2024-12-16 13:30:53.889125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:39.404 [2024-12-16 13:30:53.889133] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:39.404 [2024-12-16 13:30:53.889141] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:39.404 [2024-12-16 13:30:53.889151] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:39.404 [2024-12-16 13:30:53.889157] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:39.404 [2024-12-16 13:30:53.889163] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:39.404 [2024-12-16 13:30:53.889168] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:39.404 [2024-12-16 13:30:53.889174] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:39.404 [2024-12-16 13:30:53.889184] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:39.405 [2024-12-16 13:30:53.889190] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:39.405 [2024-12-16 13:30:53.889196] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:39.405 [2024-12-16 13:30:53.889202] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:39.405 [2024-12-16 13:30:53.889208] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:39.405 [2024-12-16 13:30:53.889214] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:39.405 [2024-12-16 13:30:53.889220] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:39.405 [2024-12-16 13:30:53.889226] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 
blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:39.405 [2024-12-16 13:30:53.889232] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:39.405 [2024-12-16 13:30:53.889239] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:39.405 [2024-12-16 13:30:53.889245] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:39.405 [2024-12-16 13:30:53.889251] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:39.405 [2024-12-16 13:30:53.889257] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:39.405 [2024-12-16 13:30:53.889263] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:39.405 [2024-12-16 13:30:53.889269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.405 [2024-12-16 13:30:53.889276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:39.405 [2024-12-16 13:30:53.889282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.471 ms 00:26:39.405 [2024-12-16 13:30:53.889289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.405 [2024-12-16 13:30:53.903006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.405 [2024-12-16 13:30:53.903043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:39.405 [2024-12-16 13:30:53.903053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.669 ms 00:26:39.405 [2024-12-16 13:30:53.903060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.405 [2024-12-16 13:30:53.903094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.405 [2024-12-16 13:30:53.903101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:39.405 [2024-12-16 13:30:53.903109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:39.405 [2024-12-16 13:30:53.903117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.405 [2024-12-16 13:30:53.929434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.405 [2024-12-16 13:30:53.929461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:39.405 [2024-12-16 13:30:53.929470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.276 ms 00:26:39.405 [2024-12-16 13:30:53.929477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.405 [2024-12-16 13:30:53.929500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.405 [2024-12-16 13:30:53.929507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:39.405 [2024-12-16 13:30:53.929514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:39.405 [2024-12-16 13:30:53.929521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.405 [2024-12-16 13:30:53.929954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.405 [2024-12-16 13:30:53.929970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:39.405 [2024-12-16 
13:30:53.929978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.394 ms 00:26:39.405 [2024-12-16 13:30:53.929985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.405 [2024-12-16 13:30:53.930017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.405 [2024-12-16 13:30:53.930024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:39.405 [2024-12-16 13:30:53.930031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:26:39.405 [2024-12-16 13:30:53.930038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.405 [2024-12-16 13:30:53.943785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.405 [2024-12-16 13:30:53.943809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:39.405 [2024-12-16 13:30:53.943817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.729 ms 00:26:39.405 [2024-12-16 13:30:53.943824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.405 [2024-12-16 13:30:53.954033] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:39.405 [2024-12-16 13:30:53.954059] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:39.405 [2024-12-16 13:30:53.954069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.405 [2024-12-16 13:30:53.954076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:26:39.405 [2024-12-16 13:30:53.954084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.164 ms 00:26:39.405 [2024-12-16 13:30:53.954095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.405 [2024-12-16 13:30:53.964596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.405 [2024-12-16 13:30:53.964620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:26:39.405 [2024-12-16 13:30:53.964638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.470 ms 00:26:39.405 [2024-12-16 13:30:53.964645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.405 [2024-12-16 13:30:53.973738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.405 [2024-12-16 13:30:53.973762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:26:39.405 [2024-12-16 13:30:53.973769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.050 ms 00:26:39.405 [2024-12-16 13:30:53.973775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.667 [2024-12-16 13:30:53.982333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.667 [2024-12-16 13:30:53.982357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:26:39.667 [2024-12-16 13:30:53.982364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.531 ms 00:26:39.667 [2024-12-16 13:30:53.982370] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.667 [2024-12-16 13:30:53.982662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.667 [2024-12-16 13:30:53.982672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:39.667 [2024-12-16 13:30:53.982680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.230 ms 
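Once the restore steps above finish, startup completes and the script returns to the same property RPCs it used before the shutdown: re-enable verbose_mode, then dump the FTL properties to check what survived the restart. A hedged recap of that inspection pattern; the two rpc.py calls and the jq filter counting non-empty cache chunks appear verbatim earlier in this log, while wiring them together this way is an inference:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # verbose_mode exposes the advanced properties (bands, cache chunks, ...)
    $RPC bdev_ftl_set_property -b ftl -p verbose_mode -v true
    # count cache chunks with any utilization; before the shutdown this was 3
    used=$($RPC bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -eq 0 ]] || echo "cache still holds data in $used chunk(s)"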
00:26:39.667 [2024-12-16 13:30:53.982687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.667 [2024-12-16 13:30:54.031085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.667 [2024-12-16 13:30:54.031206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:39.667 [2024-12-16 13:30:54.031220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 48.381 ms 00:26:39.667 [2024-12-16 13:30:54.031227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.667 [2024-12-16 13:30:54.039089] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:39.667 [2024-12-16 13:30:54.039667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.667 [2024-12-16 13:30:54.039684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:39.667 [2024-12-16 13:30:54.039692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.409 ms 00:26:39.667 [2024-12-16 13:30:54.039702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.667 [2024-12-16 13:30:54.039750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.667 [2024-12-16 13:30:54.039759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:26:39.667 [2024-12-16 13:30:54.039766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:39.667 [2024-12-16 13:30:54.039772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.667 [2024-12-16 13:30:54.039808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.667 [2024-12-16 13:30:54.039816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:39.667 [2024-12-16 13:30:54.039824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:39.667 [2024-12-16 13:30:54.039830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.667 [2024-12-16 13:30:54.040888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.667 [2024-12-16 13:30:54.040903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:39.667 [2024-12-16 13:30:54.040910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.040 ms 00:26:39.667 [2024-12-16 13:30:54.040917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.667 [2024-12-16 13:30:54.040943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.667 [2024-12-16 13:30:54.040950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:39.667 [2024-12-16 13:30:54.040956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:39.667 [2024-12-16 13:30:54.040963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.667 [2024-12-16 13:30:54.040993] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:39.667 [2024-12-16 13:30:54.041001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.667 [2024-12-16 13:30:54.041010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:39.667 [2024-12-16 13:30:54.041016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:39.667 [2024-12-16 13:30:54.041022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.667 [2024-12-16 13:30:54.059187] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.667 [2024-12-16 13:30:54.059213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:39.667 [2024-12-16 13:30:54.059222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.148 ms 00:26:39.667 [2024-12-16 13:30:54.059229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.667 [2024-12-16 13:30:54.059288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.667 [2024-12-16 13:30:54.059296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:39.667 [2024-12-16 13:30:54.059304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:26:39.667 [2024-12-16 13:30:54.059310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.667 [2024-12-16 13:30:54.060181] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 192.999 ms, result 0 00:26:39.667 [2024-12-16 13:30:54.075456] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:39.667 [2024-12-16 13:30:54.091463] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:39.667 [2024-12-16 13:30:54.099585] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:39.929 13:30:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:39.929 13:30:54 -- common/autotest_common.sh@862 -- # return 0 00:26:39.929 13:30:54 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:39.929 13:30:54 -- ftl/common.sh@95 -- # return 0 00:26:39.929 13:30:54 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:39.929 [2024-12-16 13:30:54.472380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.929 [2024-12-16 13:30:54.472420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:39.929 [2024-12-16 13:30:54.472432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:39.929 [2024-12-16 13:30:54.472439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.929 [2024-12-16 13:30:54.472458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.929 [2024-12-16 13:30:54.472465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:39.929 [2024-12-16 13:30:54.472473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:39.929 [2024-12-16 13:30:54.472483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.929 [2024-12-16 13:30:54.472499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.929 [2024-12-16 13:30:54.472506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:39.929 [2024-12-16 13:30:54.472513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:39.929 [2024-12-16 13:30:54.472519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.929 [2024-12-16 13:30:54.472566] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.181 ms, result 0 00:26:39.929 true 00:26:39.929 13:30:54 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b 
ftl
00:26:40.191 {
00:26:40.191   "name": "ftl",
00:26:40.191   "properties": [
00:26:40.191     {
00:26:40.191       "name": "superblock_version",
00:26:40.191       "value": 5,
00:26:40.191       "read-only": true
00:26:40.191     },
00:26:40.191     {
00:26:40.191       "name": "base_device",
00:26:40.191       "bands": [
00:26:40.191         { "id": 0, "state": "CLOSED", "validity": 1.0 },
00:26:40.191         { "id": 1, "state": "CLOSED", "validity": 1.0 },
00:26:40.191         { "id": 2, "state": "CLOSED", "validity": 0.007843137254901933 },
00:26:40.191         { "id": 3, "state": "FREE", "validity": 0.0 },
00:26:40.191         { "id": 4, "state": "FREE", "validity": 0.0 },
00:26:40.191         { "id": 5, "state": "FREE", "validity": 0.0 },
00:26:40.191         { "id": 6, "state": "FREE", "validity": 0.0 },
00:26:40.191         { "id": 7, "state": "FREE", "validity": 0.0 },
00:26:40.191         { "id": 8, "state": "FREE", "validity": 0.0 },
00:26:40.191         { "id": 9, "state": "FREE", "validity": 0.0 },
00:26:40.191         { "id": 10, "state": "FREE", "validity": 0.0 },
00:26:40.191         { "id": 11, "state": "FREE", "validity": 0.0 },
00:26:40.191         { "id": 12, "state": "FREE", "validity": 0.0 },
00:26:40.191         { "id": 13, "state": "FREE", "validity": 0.0 },
00:26:40.191         { "id": 14, "state": "FREE", "validity": 0.0 },
00:26:40.191         { "id": 15, "state": "FREE", "validity": 0.0 },
00:26:40.191         { "id": 16, "state": "FREE", "validity": 0.0 },
00:26:40.191         { "id": 17, "state": "FREE", "validity": 0.0 }
00:26:40.191       ],
00:26:40.191       "read-only": true
00:26:40.191     },
00:26:40.191     {
00:26:40.191       "name": "cache_device",
00:26:40.191       "type": "bdev",
00:26:40.191       "chunks": [
00:26:40.191         { "id": 0, "state": "OPEN", "utilization": 0.0 },
00:26:40.191         { "id": 1, "state": "OPEN", "utilization": 0.0 },
00:26:40.191         { "id": 2, "state": "FREE", "utilization": 0.0 },
00:26:40.191         { "id": 3, "state": "FREE", "utilization": 0.0 }
00:26:40.191       ],
00:26:40.191       "read-only": true
00:26:40.191     },
00:26:40.191     {
00:26:40.191       "name": "verbose_mode",
00:26:40.191       "value": true,
00:26:40.191       "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:26:40.191     },
00:26:40.191     {
00:26:40.191       "name": "prep_upgrade_on_shutdown",
00:26:40.191       "value": false,
00:26:40.191       "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:26:40.191     }
00:26:40.191   ]
00:26:40.191 }
00:26:40.191 13:30:54 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
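The block above is the raw JSON returned by the bdev_ftl_get_properties RPC: three closed bands on the base device (two fully valid, one almost empty) and an empty four-chunk NV cache. A minimal way to re-fetch and inspect the same dump by hand, assuming the target's default RPC socket as used by the rpc.py calls traced here (python3 -m json.tool is only for pretty-printing):

    # Fetch the FTL property tree for bdev 'ftl' and pretty-print it.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | python3 -m json.tool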
00:26:40.191 13:30:54 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:40.191 13:30:54 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:40.452 13:30:54 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:26:40.452 13:30:54 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:26:40.452 13:30:54 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:26:40.452 13:30:54 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:26:40.452 13:30:54 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:40.715 Validate MD5 checksum, iteration 1 00:26:40.715 13:30:55 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:26:40.715 13:30:55 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:26:40.715 13:30:55 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:26:40.715 13:30:55 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:40.715 13:30:55 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:40.715 13:30:55 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:40.715 13:30:55 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:40.715 13:30:55 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:40.715 13:30:55 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:40.715 13:30:55 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:40.715 13:30:55 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:40.715 13:30:55 -- ftl/common.sh@154 -- # return 0 00:26:40.715 13:30:55 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:40.715 [2024-12-16 13:30:55.121323] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
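The tcp_dd helper traced above reduces to a single spdk_dd invocation against the initiator config; reproduced here as an annotated sketch, every flag being the one from this run's trace:

    # Read 1024 MiB from ftln1 (the FTL namespace attached over NVMe/TCP via
    # ini.json) into a scratch file: 1 MiB blocks, queue depth 2, offset 0.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0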
00:26:40.715 [2024-12-16 13:30:55.121594] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78917 ] 00:26:40.715 [2024-12-16 13:30:55.272292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.977 [2024-12-16 13:30:55.472862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.892  [2024-12-16T13:30:57.726Z] Copying: 610/1024 [MB] (610 MBps) [2024-12-16T13:31:00.273Z] Copying: 1024/1024 [MB] (average 615 MBps) 00:26:45.699 00:26:45.699 13:30:59 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:45.699 13:30:59 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:47.608 13:31:01 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:47.608 13:31:01 -- ftl/upgrade_shutdown.sh@103 -- # sum=41c14f6f1cd9865238184614ac2cbfdb 00:26:47.608 Validate MD5 checksum, iteration 2 00:26:47.608 13:31:01 -- ftl/upgrade_shutdown.sh@105 -- # [[ 41c14f6f1cd9865238184614ac2cbfdb != \4\1\c\1\4\f\6\f\1\c\d\9\8\6\5\2\3\8\1\8\4\6\1\4\a\c\2\c\b\f\d\b ]] 00:26:47.608 13:31:01 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:47.608 13:31:01 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:47.608 13:31:01 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:47.608 13:31:01 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:47.608 13:31:01 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:47.608 13:31:01 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:47.608 13:31:01 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:47.608 13:31:01 -- ftl/common.sh@154 -- # return 0 00:26:47.609 13:31:01 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:47.609 [2024-12-16 13:31:01.968989] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
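The digest gate in the trace above is a plain two-step check: hash the window just read, then compare it to the recorded value. A condensed sketch (the digest is this run's iteration-1 value; the backslash-riddled right-hand side in the xtrace output is just bash escaping the pattern so [[ ]] compares it literally):

    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    sum=$(md5sum "$file" | cut -f1 -d' ')
    # Abort the test on any mismatch; 41c14f6f... is iteration 1's digest.
    [[ $sum != 41c14f6f1cd9865238184614ac2cbfdb ]] && exit 1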
00:26:47.609 [2024-12-16 13:31:01.969102] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78991 ] 00:26:47.609 [2024-12-16 13:31:02.119608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.868 [2024-12-16 13:31:02.319586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.301  [2024-12-16T13:31:04.447Z] Copying: 693/1024 [MB] (693 MBps) [2024-12-16T13:31:06.356Z] Copying: 1024/1024 [MB] (average 697 MBps) 00:26:51.782 00:26:51.782 13:31:06 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:51.782 13:31:06 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:53.697 13:31:08 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:53.697 13:31:08 -- ftl/upgrade_shutdown.sh@103 -- # sum=a79dde2d557a426133949a27678f6ddb 00:26:53.697 13:31:08 -- ftl/upgrade_shutdown.sh@105 -- # [[ a79dde2d557a426133949a27678f6ddb != \a\7\9\d\d\e\2\d\5\5\7\a\4\2\6\1\3\3\9\4\9\a\2\7\6\7\8\f\6\d\d\b ]] 00:26:53.697 13:31:08 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:53.697 13:31:08 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:53.697 13:31:08 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:26:53.697 13:31:08 -- ftl/common.sh@137 -- # [[ -n 78872 ]] 00:26:53.697 13:31:08 -- ftl/common.sh@138 -- # kill -9 78872 00:26:53.697 13:31:08 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:26:53.697 13:31:08 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:26:53.697 13:31:08 -- ftl/common.sh@81 -- # local base_bdev= 00:26:53.697 13:31:08 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:53.697 13:31:08 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:53.697 13:31:08 -- ftl/common.sh@89 -- # spdk_tgt_pid=79064 00:26:53.697 13:31:08 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:53.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.697 13:31:08 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:53.697 13:31:08 -- ftl/common.sh@91 -- # waitforlisten 79064 00:26:53.697 13:31:08 -- common/autotest_common.sh@829 -- # '[' -z 79064 ']' 00:26:53.697 13:31:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.697 13:31:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:53.697 13:31:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.697 13:31:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:53.697 13:31:08 -- common/autotest_common.sh@10 -- # set +x 00:26:53.697 [2024-12-16 13:31:08.174440] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
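Here the test switches from validation to fault injection: tcp_target_shutdown_dirty SIGKILLs the live target (pid 78872 above) so FTL never gets a chance to persist its clean-shutdown state, and tcp_target_setup immediately boots a replacement from the saved tgt.json. The sequence as a sketch, using this run's paths (waitforlisten is the autotest_common.sh helper traced above):

    kill -9 78872    # dirty shutdown: the FTL device stays marked dirty
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"    # block until /var/tmp/spdk.sock answers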
00:26:53.697 [2024-12-16 13:31:08.174530] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79064 ] 00:26:53.958 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 78872 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:26:53.958 [2024-12-16 13:31:08.314793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.958 [2024-12-16 13:31:08.483513] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:53.958 [2024-12-16 13:31:08.483724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.531 [2024-12-16 13:31:09.063476] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:54.531 [2024-12-16 13:31:09.063536] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:54.793 [2024-12-16 13:31:09.199592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.793 [2024-12-16 13:31:09.199729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:54.793 [2024-12-16 13:31:09.199746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:54.793 [2024-12-16 13:31:09.199753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.793 [2024-12-16 13:31:09.199804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.794 [2024-12-16 13:31:09.199814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:54.794 [2024-12-16 13:31:09.199821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:26:54.794 [2024-12-16 13:31:09.199827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.794 [2024-12-16 13:31:09.199844] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:54.794 [2024-12-16 13:31:09.200352] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:54.794 [2024-12-16 13:31:09.200364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.794 [2024-12-16 13:31:09.200370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:54.794 [2024-12-16 13:31:09.200378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.524 ms 00:26:54.794 [2024-12-16 13:31:09.200383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.794 [2024-12-16 13:31:09.200599] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:54.794 [2024-12-16 13:31:09.213964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.794 [2024-12-16 13:31:09.213994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:54.794 [2024-12-16 13:31:09.214003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.364 ms 00:26:54.794 [2024-12-16 13:31:09.214010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.794 [2024-12-16 13:31:09.221071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.794 [2024-12-16 13:31:09.221099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:54.794 [2024-12-16 13:31:09.221106] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:26:54.794 [2024-12-16 13:31:09.221112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.794 [2024-12-16 13:31:09.221358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.794 [2024-12-16 13:31:09.221366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:54.794 [2024-12-16 13:31:09.221374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.189 ms 00:26:54.794 [2024-12-16 13:31:09.221380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.794 [2024-12-16 13:31:09.221406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.794 [2024-12-16 13:31:09.221412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:54.794 [2024-12-16 13:31:09.221419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:54.794 [2024-12-16 13:31:09.221426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.794 [2024-12-16 13:31:09.221444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.794 [2024-12-16 13:31:09.221450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:54.794 [2024-12-16 13:31:09.221456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:54.794 [2024-12-16 13:31:09.221462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.794 [2024-12-16 13:31:09.221482] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:54.794 [2024-12-16 13:31:09.223902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.794 [2024-12-16 13:31:09.223925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:54.794 [2024-12-16 13:31:09.223932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.428 ms 00:26:54.794 [2024-12-16 13:31:09.223938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.794 [2024-12-16 13:31:09.223959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.794 [2024-12-16 13:31:09.223966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:54.794 [2024-12-16 13:31:09.223974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:54.794 [2024-12-16 13:31:09.223980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.794 [2024-12-16 13:31:09.223996] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:54.794 [2024-12-16 13:31:09.224013] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:26:54.794 [2024-12-16 13:31:09.224039] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:54.794 [2024-12-16 13:31:09.224050] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:26:54.794 [2024-12-16 13:31:09.224107] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:54.794 [2024-12-16 13:31:09.224116] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:54.794 [2024-12-16 13:31:09.224126] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes 00:26:54.794 [2024-12-16 13:31:09.224135] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:54.794 [2024-12-16 13:31:09.224141] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:54.794 [2024-12-16 13:31:09.224147] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:54.794 [2024-12-16 13:31:09.224153] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:54.794 [2024-12-16 13:31:09.224158] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:54.794 [2024-12-16 13:31:09.224164] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:54.794 [2024-12-16 13:31:09.224170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.794 [2024-12-16 13:31:09.224175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:54.794 [2024-12-16 13:31:09.224181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.175 ms 00:26:54.794 [2024-12-16 13:31:09.224188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.794 [2024-12-16 13:31:09.224236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.794 [2024-12-16 13:31:09.224242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:54.794 [2024-12-16 13:31:09.224248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:26:54.794 [2024-12-16 13:31:09.224254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.794 [2024-12-16 13:31:09.224310] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:54.794 [2024-12-16 13:31:09.224318] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:54.794 [2024-12-16 13:31:09.224324] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:54.794 [2024-12-16 13:31:09.224330] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:54.794 [2024-12-16 13:31:09.224338] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:54.794 [2024-12-16 13:31:09.224343] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:54.794 [2024-12-16 13:31:09.224348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:54.794 [2024-12-16 13:31:09.224353] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:54.794 [2024-12-16 13:31:09.224359] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:54.794 [2024-12-16 13:31:09.224364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:54.794 [2024-12-16 13:31:09.224369] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:54.794 [2024-12-16 13:31:09.224374] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:54.794 [2024-12-16 13:31:09.224379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:54.794 [2024-12-16 13:31:09.224385] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:54.794 [2024-12-16 13:31:09.224391] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:54.794 [2024-12-16 13:31:09.224396] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:54.794 [2024-12-16 13:31:09.224401] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
nvc_md_mirror 00:26:54.794 [2024-12-16 13:31:09.224406] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:54.794 [2024-12-16 13:31:09.224411] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:54.794 [2024-12-16 13:31:09.224416] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:54.794 [2024-12-16 13:31:09.224421] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:54.794 [2024-12-16 13:31:09.224426] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:54.794 [2024-12-16 13:31:09.224431] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:54.794 [2024-12-16 13:31:09.224436] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:54.794 [2024-12-16 13:31:09.224442] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:54.794 [2024-12-16 13:31:09.224447] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:54.794 [2024-12-16 13:31:09.224452] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:54.794 [2024-12-16 13:31:09.224457] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:54.794 [2024-12-16 13:31:09.224461] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:54.794 [2024-12-16 13:31:09.224466] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:54.794 [2024-12-16 13:31:09.224471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:54.794 [2024-12-16 13:31:09.224476] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:54.794 [2024-12-16 13:31:09.224481] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:54.794 [2024-12-16 13:31:09.224486] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:54.794 [2024-12-16 13:31:09.224492] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:54.794 [2024-12-16 13:31:09.224496] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:54.794 [2024-12-16 13:31:09.224501] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:54.794 [2024-12-16 13:31:09.224506] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:54.794 [2024-12-16 13:31:09.224511] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:54.794 [2024-12-16 13:31:09.224516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:54.794 [2024-12-16 13:31:09.224520] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:54.794 [2024-12-16 13:31:09.224526] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:54.794 [2024-12-16 13:31:09.224532] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:54.794 [2024-12-16 13:31:09.224537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:54.794 [2024-12-16 13:31:09.224543] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:54.794 [2024-12-16 13:31:09.224552] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:54.794 [2024-12-16 13:31:09.224557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:54.794 [2024-12-16 13:31:09.224562] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:54.794 [2024-12-16 13:31:09.224567] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 
0.25 MiB 00:26:54.795 [2024-12-16 13:31:09.224572] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:54.795 [2024-12-16 13:31:09.224578] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:54.795 [2024-12-16 13:31:09.224585] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:54.795 [2024-12-16 13:31:09.224591] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:54.795 [2024-12-16 13:31:09.224597] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:54.795 [2024-12-16 13:31:09.224602] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:54.795 [2024-12-16 13:31:09.224613] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:54.795 [2024-12-16 13:31:09.224618] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:54.795 [2024-12-16 13:31:09.224624] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:54.795 [2024-12-16 13:31:09.224644] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:54.795 [2024-12-16 13:31:09.224650] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:54.795 [2024-12-16 13:31:09.224655] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:54.795 [2024-12-16 13:31:09.224660] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:54.795 [2024-12-16 13:31:09.224665] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:54.795 [2024-12-16 13:31:09.224671] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:54.795 [2024-12-16 13:31:09.224677] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:54.795 [2024-12-16 13:31:09.224682] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:54.795 [2024-12-16 13:31:09.224689] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:54.795 [2024-12-16 13:31:09.224695] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:54.795 [2024-12-16 13:31:09.224701] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:54.795 [2024-12-16 13:31:09.224706] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:54.795 
[2024-12-16 13:31:09.224712] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:54.795 [2024-12-16 13:31:09.224724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.795 [2024-12-16 13:31:09.224729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:54.795 [2024-12-16 13:31:09.224735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.447 ms 00:26:54.795 [2024-12-16 13:31:09.224742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.795 [2024-12-16 13:31:09.236649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.795 [2024-12-16 13:31:09.236673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:54.795 [2024-12-16 13:31:09.236683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.873 ms 00:26:54.795 [2024-12-16 13:31:09.236689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.795 [2024-12-16 13:31:09.236719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.795 [2024-12-16 13:31:09.236726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:54.795 [2024-12-16 13:31:09.236732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:54.795 [2024-12-16 13:31:09.236737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.795 [2024-12-16 13:31:09.263115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.795 [2024-12-16 13:31:09.263223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:54.795 [2024-12-16 13:31:09.263236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.342 ms 00:26:54.795 [2024-12-16 13:31:09.263243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.795 [2024-12-16 13:31:09.263268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.795 [2024-12-16 13:31:09.263275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:54.795 [2024-12-16 13:31:09.263282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:54.795 [2024-12-16 13:31:09.263289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.795 [2024-12-16 13:31:09.263361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.795 [2024-12-16 13:31:09.263369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:54.795 [2024-12-16 13:31:09.263376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:26:54.795 [2024-12-16 13:31:09.263382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.795 [2024-12-16 13:31:09.263411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.795 [2024-12-16 13:31:09.263420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:54.795 [2024-12-16 13:31:09.263426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:54.795 [2024-12-16 13:31:09.263432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.795 [2024-12-16 13:31:09.277223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.795 [2024-12-16 13:31:09.277250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:54.795 [2024-12-16 
13:31:09.277258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.774 ms 00:26:54.795 [2024-12-16 13:31:09.277264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.795 [2024-12-16 13:31:09.277338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.795 [2024-12-16 13:31:09.277346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:26:54.795 [2024-12-16 13:31:09.277353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:54.795 [2024-12-16 13:31:09.277359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.795 [2024-12-16 13:31:09.291056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.795 [2024-12-16 13:31:09.291097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:26:54.795 [2024-12-16 13:31:09.291105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.683 ms 00:26:54.795 [2024-12-16 13:31:09.291111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.795 [2024-12-16 13:31:09.298262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.795 [2024-12-16 13:31:09.298358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:54.795 [2024-12-16 13:31:09.298372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.209 ms 00:26:54.795 [2024-12-16 13:31:09.298378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.795 [2024-12-16 13:31:09.346174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.795 [2024-12-16 13:31:09.346209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:54.795 [2024-12-16 13:31:09.346218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 47.756 ms 00:26:54.795 [2024-12-16 13:31:09.346225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.795 [2024-12-16 13:31:09.346295] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:26:54.795 [2024-12-16 13:31:09.346329] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:26:54.795 [2024-12-16 13:31:09.346360] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:26:54.795 [2024-12-16 13:31:09.346391] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:26:54.795 [2024-12-16 13:31:09.346397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.795 [2024-12-16 13:31:09.346403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:26:54.795 [2024-12-16 13:31:09.346413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.136 ms 00:26:54.795 [2024-12-16 13:31:09.346420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.795 [2024-12-16 13:31:09.346462] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:26:54.795 [2024-12-16 13:31:09.346471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.795 [2024-12-16 13:31:09.346476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:26:54.795 [2024-12-16 13:31:09.346483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:54.795 [2024-12-16 
13:31:09.346488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.795 [2024-12-16 13:31:09.358078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.795 [2024-12-16 13:31:09.358106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:26:54.795 [2024-12-16 13:31:09.358114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.572 ms 00:26:54.795 [2024-12-16 13:31:09.358121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.795 [2024-12-16 13:31:09.364673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.056 [2024-12-16 13:31:09.364814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:26:55.056 [2024-12-16 13:31:09.364830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:55.056 [2024-12-16 13:31:09.364837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.056 [2024-12-16 13:31:09.364887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.056 [2024-12-16 13:31:09.364895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:26:55.056 [2024-12-16 13:31:09.364901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:55.056 [2024-12-16 13:31:09.364907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.056 [2024-12-16 13:31:09.365064] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:26:55.318 [2024-12-16 13:31:09.847959] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:26:55.318 [2024-12-16 13:31:09.848284] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:26:56.261 [2024-12-16 13:31:10.547026] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:26:56.261 [2024-12-16 13:31:10.547201] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:56.261 [2024-12-16 13:31:10.547218] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:56.261 [2024-12-16 13:31:10.547232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.261 [2024-12-16 13:31:10.547243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:26:56.261 [2024-12-16 13:31:10.547261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1182.302 ms 00:26:56.261 [2024-12-16 13:31:10.547270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.261 [2024-12-16 13:31:10.547323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.261 [2024-12-16 13:31:10.547333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:26:56.261 [2024-12-16 13:31:10.547344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:56.261 [2024-12-16 13:31:10.547353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.261 [2024-12-16 13:31:10.561382] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:56.261 [2024-12-16 13:31:10.561542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.261 [2024-12-16 13:31:10.561556] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:56.261 [2024-12-16 13:31:10.561569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.171 ms 00:26:56.261 [2024-12-16 13:31:10.561578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.261 [2024-12-16 13:31:10.562359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.261 [2024-12-16 13:31:10.562388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:26:56.261 [2024-12-16 13:31:10.562398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.661 ms 00:26:56.261 [2024-12-16 13:31:10.562407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.261 [2024-12-16 13:31:10.564699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.261 [2024-12-16 13:31:10.564726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:26:56.261 [2024-12-16 13:31:10.564738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.273 ms 00:26:56.261 [2024-12-16 13:31:10.564746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.261 [2024-12-16 13:31:10.592997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.261 [2024-12-16 13:31:10.593203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:26:56.261 [2024-12-16 13:31:10.593227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 28.222 ms 00:26:56.261 [2024-12-16 13:31:10.593236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.261 [2024-12-16 13:31:10.593480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.261 [2024-12-16 13:31:10.593506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:56.261 [2024-12-16 13:31:10.593516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:26:56.261 [2024-12-16 13:31:10.593525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.261 [2024-12-16 13:31:10.595186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.261 [2024-12-16 13:31:10.595233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:56.261 [2024-12-16 13:31:10.595244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.642 ms 00:26:56.261 [2024-12-16 13:31:10.595251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.261 [2024-12-16 13:31:10.595289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.261 [2024-12-16 13:31:10.595298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:56.261 [2024-12-16 13:31:10.595308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:56.261 [2024-12-16 13:31:10.595315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.261 [2024-12-16 13:31:10.595355] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:56.261 [2024-12-16 13:31:10.595365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.261 [2024-12-16 13:31:10.595374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:56.261 [2024-12-16 13:31:10.595385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:56.261 [2024-12-16 13:31:10.595393] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:26:56.261 [2024-12-16 13:31:10.595459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.262 [2024-12-16 13:31:10.595469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:56.262 [2024-12-16 13:31:10.595477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:26:56.262 [2024-12-16 13:31:10.595486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.262 [2024-12-16 13:31:10.596940] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1396.728 ms, result 0 00:26:56.262 [2024-12-16 13:31:10.609669] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.262 [2024-12-16 13:31:10.625682] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:56.262 [2024-12-16 13:31:10.633881] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:56.521 Validate MD5 checksum, iteration 1 00:26:56.521 13:31:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:56.521 13:31:11 -- common/autotest_common.sh@862 -- # return 0 00:26:56.521 13:31:11 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:56.521 13:31:11 -- ftl/common.sh@95 -- # return 0 00:26:56.521 13:31:11 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:26:56.521 13:31:11 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:56.521 13:31:11 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:56.521 13:31:11 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:56.521 13:31:11 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:56.521 13:31:11 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:56.521 13:31:11 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:56.521 13:31:11 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:56.521 13:31:11 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:56.521 13:31:11 -- ftl/common.sh@154 -- # return 0 00:26:56.521 13:31:11 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:56.781 [2024-12-16 13:31:11.128527] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
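With recovery finished ('FTL startup' now took 1396.728 ms against 192.999 ms for the clean first boot), the target is listening again and the same validation pass re-runs from skip=0. Pieced together from the @96-@100 trace lines, test_validate_checksum has roughly this shape; a sketch assuming iterations=2 as seen in this run, with the suite's tcp_dd helper and FTL_TEST_FILE as a stand-in name for the scratch file path:

    skip=0
    for (( i = 0; i < 2; i++ )); do    # iterations=2 in this run
        echo "Validate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$FTL_TEST_FILE" \
            --bs=1048576 --count=1024 --qd=2 --skip=$skip
        md5sum "$FTL_TEST_FILE"        # compared against the pre-kill digest
        skip=$((skip + 1024))          # advance one 1 GiB window
    done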
00:26:56.781 [2024-12-16 13:31:11.129170] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79105 ] 00:26:56.781 [2024-12-16 13:31:11.280457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.042 [2024-12-16 13:31:11.481173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.956  [2024-12-16T13:31:13.791Z] Copying: 617/1024 [MB] (617 MBps) [2024-12-16T13:31:15.174Z] Copying: 1024/1024 [MB] (average 636 MBps) 00:27:00.600 00:27:00.600 13:31:14 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:00.600 13:31:14 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:02.509 13:31:16 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:02.509 13:31:16 -- ftl/upgrade_shutdown.sh@103 -- # sum=41c14f6f1cd9865238184614ac2cbfdb 00:27:02.509 13:31:16 -- ftl/upgrade_shutdown.sh@105 -- # [[ 41c14f6f1cd9865238184614ac2cbfdb != \4\1\c\1\4\f\6\f\1\c\d\9\8\6\5\2\3\8\1\8\4\6\1\4\a\c\2\c\b\f\d\b ]] 00:27:02.509 13:31:16 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:02.509 13:31:16 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:02.509 13:31:16 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:02.509 Validate MD5 checksum, iteration 2 00:27:02.509 13:31:16 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:02.509 13:31:16 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:02.509 13:31:16 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:02.509 13:31:16 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:02.509 13:31:16 -- ftl/common.sh@154 -- # return 0 00:27:02.509 13:31:16 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:02.509 [2024-12-16 13:31:16.928379] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
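Iteration 1 after the crash hashes to 41c14f6f1cd9865238184614ac2cbfdb, identical to the digest taken before the kill -9, which is exactly what the dirty-shutdown recovery above was meant to guarantee: no acknowledged data lost. Asserting it directly would look like this sketch (both variable names are hypothetical; the digest values are the ones in this log):

    pre_kill_sum=41c14f6f1cd9865238184614ac2cbfdb       # first pass, before kill -9
    post_recovery_sum=41c14f6f1cd9865238184614ac2cbfdb  # same window, after recovery
    [[ $post_recovery_sum == "$pre_kill_sum" ]] || exit 1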
00:27:02.509 [2024-12-16 13:31:16.928635] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79172 ] 00:27:02.509 [2024-12-16 13:31:17.076947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.770 [2024-12-16 13:31:17.251404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.153  [2024-12-16T13:31:19.298Z] Copying: 687/1024 [MB] (687 MBps) [2024-12-16T13:31:20.239Z] Copying: 1024/1024 [MB] (average 671 MBps) 00:27:05.665 00:27:05.665 13:31:20 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:05.665 13:31:20 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:08.209 13:31:22 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:08.209 13:31:22 -- ftl/upgrade_shutdown.sh@103 -- # sum=a79dde2d557a426133949a27678f6ddb 00:27:08.209 13:31:22 -- ftl/upgrade_shutdown.sh@105 -- # [[ a79dde2d557a426133949a27678f6ddb != \a\7\9\d\d\e\2\d\5\5\7\a\4\2\6\1\3\3\9\4\9\a\2\7\6\7\8\f\6\d\d\b ]] 00:27:08.209 13:31:22 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:08.209 13:31:22 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:08.209 13:31:22 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:08.209 13:31:22 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:27:08.209 13:31:22 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:27:08.209 13:31:22 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:08.209 13:31:22 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:27:08.209 13:31:22 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:27:08.209 13:31:22 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:27:08.209 13:31:22 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:27:08.209 13:31:22 -- ftl/common.sh@130 -- # [[ -n 79064 ]] 00:27:08.209 13:31:22 -- ftl/common.sh@131 -- # killprocess 79064 00:27:08.209 13:31:22 -- common/autotest_common.sh@936 -- # '[' -z 79064 ']' 00:27:08.209 13:31:22 -- common/autotest_common.sh@940 -- # kill -0 79064 00:27:08.209 13:31:22 -- common/autotest_common.sh@941 -- # uname 00:27:08.209 13:31:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:08.209 13:31:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79064 00:27:08.209 killing process with pid 79064 00:27:08.209 13:31:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:08.209 13:31:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:08.209 13:31:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79064' 00:27:08.209 13:31:22 -- common/autotest_common.sh@955 -- # kill 79064 00:27:08.209 13:31:22 -- common/autotest_common.sh@960 -- # wait 79064 00:27:08.780 [2024-12-16 13:31:23.055714] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:27:08.780 [2024-12-16 13:31:23.067971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.780 [2024-12-16 13:31:23.068009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:08.780 [2024-12-16 13:31:23.068021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:08.780 [2024-12-16 13:31:23.068028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.780 
[2024-12-16 13:31:23.068047] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:08.780 [2024-12-16 13:31:23.070295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.780 [2024-12-16 13:31:23.070324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:08.780 [2024-12-16 13:31:23.070332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.236 ms 00:27:08.780 [2024-12-16 13:31:23.070339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.780 [2024-12-16 13:31:23.070530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.780 [2024-12-16 13:31:23.070541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:08.780 [2024-12-16 13:31:23.070548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.173 ms 00:27:08.780 [2024-12-16 13:31:23.070554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.780 [2024-12-16 13:31:23.071759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.780 [2024-12-16 13:31:23.071780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:08.780 [2024-12-16 13:31:23.071788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.192 ms 00:27:08.780 [2024-12-16 13:31:23.071794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.780 [2024-12-16 13:31:23.072650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.780 [2024-12-16 13:31:23.072670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:27:08.780 [2024-12-16 13:31:23.072678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.834 ms 00:27:08.780 [2024-12-16 13:31:23.072684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.780 [2024-12-16 13:31:23.081168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.780 [2024-12-16 13:31:23.081196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:08.780 [2024-12-16 13:31:23.081205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.458 ms 00:27:08.780 [2024-12-16 13:31:23.081212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.780 [2024-12-16 13:31:23.085665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.780 [2024-12-16 13:31:23.085693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:08.780 [2024-12-16 13:31:23.085702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.426 ms 00:27:08.780 [2024-12-16 13:31:23.085708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.780 [2024-12-16 13:31:23.085771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.780 [2024-12-16 13:31:23.085779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:08.780 [2024-12-16 13:31:23.085786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:27:08.780 [2024-12-16 13:31:23.085792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.780 [2024-12-16 13:31:23.093039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.780 [2024-12-16 13:31:23.093061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:08.780 [2024-12-16 13:31:23.093068] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.234 ms 00:27:08.780 [2024-12-16 13:31:23.093074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.780 [2024-12-16 13:31:23.100663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.780 [2024-12-16 13:31:23.100687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:08.780 [2024-12-16 13:31:23.100694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.565 ms 00:27:08.780 [2024-12-16 13:31:23.100700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.780 [2024-12-16 13:31:23.108152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.780 [2024-12-16 13:31:23.108176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:08.780 [2024-12-16 13:31:23.108183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.428 ms 00:27:08.780 [2024-12-16 13:31:23.108189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.780 [2024-12-16 13:31:23.115528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.780 [2024-12-16 13:31:23.115640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:08.780 [2024-12-16 13:31:23.115652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.293 ms 00:27:08.780 [2024-12-16 13:31:23.115657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.780 [2024-12-16 13:31:23.115682] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:08.780 [2024-12-16 13:31:23.115695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:08.780 [2024-12-16 13:31:23.115707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:08.780 [2024-12-16 13:31:23.115713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:08.780 [2024-12-16 13:31:23.115720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:08.780 [2024-12-16 13:31:23.115726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:08.780 [2024-12-16 13:31:23.115732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:08.780 [2024-12-16 13:31:23.115738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:08.780 [2024-12-16 13:31:23.115744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:08.780 [2024-12-16 13:31:23.115750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:08.780 [2024-12-16 13:31:23.115756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:08.780 [2024-12-16 13:31:23.115762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:08.780 [2024-12-16 13:31:23.115768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:08.780 [2024-12-16 13:31:23.115774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:08.780 [2024-12-16 13:31:23.115780] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:08.780 [2024-12-16 13:31:23.115792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:08.780 [2024-12-16 13:31:23.115798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:08.780 [2024-12-16 13:31:23.115804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:08.780 [2024-12-16 13:31:23.115810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:08.780 [2024-12-16 13:31:23.115817] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:08.780 [2024-12-16 13:31:23.115824] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 26f5be5f-0c91-41c5-8f7e-90baa067dba4 00:27:08.780 [2024-12-16 13:31:23.115830] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:08.780 [2024-12-16 13:31:23.115836] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:27:08.780 [2024-12-16 13:31:23.115842] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:27:08.780 [2024-12-16 13:31:23.115849] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:27:08.780 [2024-12-16 13:31:23.115854] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:08.780 [2024-12-16 13:31:23.115860] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:08.780 [2024-12-16 13:31:23.115867] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:08.780 [2024-12-16 13:31:23.115872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:08.780 [2024-12-16 13:31:23.115877] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:08.780 [2024-12-16 13:31:23.115883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.781 [2024-12-16 13:31:23.115890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:08.781 [2024-12-16 13:31:23.115897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.202 ms 00:27:08.781 [2024-12-16 13:31:23.115904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.781 [2024-12-16 13:31:23.126289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.781 [2024-12-16 13:31:23.126313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:08.781 [2024-12-16 13:31:23.126321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.365 ms 00:27:08.781 [2024-12-16 13:31:23.126327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.781 [2024-12-16 13:31:23.126488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.781 [2024-12-16 13:31:23.126495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:08.781 [2024-12-16 13:31:23.126506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.147 ms 00:27:08.781 [2024-12-16 13:31:23.126511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.781 [2024-12-16 13:31:23.163712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:08.781 [2024-12-16 13:31:23.163741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:08.781 [2024-12-16 13:31:23.163749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.000 ms 00:27:08.781 [2024-12-16 13:31:23.163756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.781 [2024-12-16 13:31:23.163782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:08.781 [2024-12-16 13:31:23.163789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:08.781 [2024-12-16 13:31:23.163799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:08.781 [2024-12-16 13:31:23.163804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.781 [2024-12-16 13:31:23.163859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:08.781 [2024-12-16 13:31:23.163866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:08.781 [2024-12-16 13:31:23.163873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:08.781 [2024-12-16 13:31:23.163878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.781 [2024-12-16 13:31:23.163892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:08.781 [2024-12-16 13:31:23.163899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:08.781 [2024-12-16 13:31:23.163905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:08.781 [2024-12-16 13:31:23.163913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.781 [2024-12-16 13:31:23.225752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:08.781 [2024-12-16 13:31:23.225789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:08.781 [2024-12-16 13:31:23.225799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:08.781 [2024-12-16 13:31:23.225806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.781 [2024-12-16 13:31:23.249811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:08.781 [2024-12-16 13:31:23.249839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:08.781 [2024-12-16 13:31:23.249852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:08.781 [2024-12-16 13:31:23.249858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.781 [2024-12-16 13:31:23.249911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:08.781 [2024-12-16 13:31:23.249920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:08.781 [2024-12-16 13:31:23.249927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:08.781 [2024-12-16 13:31:23.249933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.781 [2024-12-16 13:31:23.249967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:08.781 [2024-12-16 13:31:23.249974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:08.781 [2024-12-16 13:31:23.249980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:08.781 [2024-12-16 13:31:23.249986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.781 [2024-12-16 13:31:23.250065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:08.781 [2024-12-16 13:31:23.250073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:08.781 [2024-12-16 13:31:23.250080] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:08.781 [2024-12-16 13:31:23.250086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.781 [2024-12-16 13:31:23.250114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:08.781 [2024-12-16 13:31:23.250121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:08.781 [2024-12-16 13:31:23.250127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:08.781 [2024-12-16 13:31:23.250134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.781 [2024-12-16 13:31:23.250169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:08.781 [2024-12-16 13:31:23.250176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:08.781 [2024-12-16 13:31:23.250182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:08.781 [2024-12-16 13:31:23.250188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.781 [2024-12-16 13:31:23.250227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:08.781 [2024-12-16 13:31:23.250235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:08.781 [2024-12-16 13:31:23.250241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:08.781 [2024-12-16 13:31:23.250247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.781 [2024-12-16 13:31:23.250360] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 182.361 ms, result 0 00:27:09.723 13:31:23 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:09.723 13:31:23 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:09.723 13:31:23 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:27:09.723 13:31:23 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:27:09.723 13:31:23 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:27:09.723 13:31:23 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:09.723 Remove shared memory files 00:27:09.723 13:31:23 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:27:09.723 13:31:23 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:09.723 13:31:23 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:09.723 13:31:23 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:09.723 13:31:23 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid78872 00:27:09.723 13:31:23 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:09.723 13:31:23 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:09.723 ************************************ 00:27:09.723 END TEST ftl_upgrade_shutdown 00:27:09.723 ************************************ 00:27:09.723 00:27:09.723 real 1m19.064s 00:27:09.723 user 1m52.041s 00:27:09.723 sys 0m18.465s 00:27:09.723 13:31:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:09.723 13:31:23 -- common/autotest_common.sh@10 -- # set +x 00:27:09.723 13:31:23 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:27:09.723 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:27:09.723 13:31:23 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:27:09.723 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:27:09.723 Process with pid 70327 is not found 00:27:09.723 13:31:23 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:27:09.723 13:31:23 -- ftl/ftl.sh@14 
-- # killprocess 70327 00:27:09.723 13:31:23 -- common/autotest_common.sh@936 -- # '[' -z 70327 ']' 00:27:09.723 13:31:23 -- common/autotest_common.sh@940 -- # kill -0 70327 00:27:09.723 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70327) - No such process 00:27:09.723 13:31:23 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70327 is not found' 00:27:09.723 13:31:23 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]] 00:27:09.723 13:31:23 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=79280 00:27:09.723 13:31:23 -- ftl/ftl.sh@20 -- # waitforlisten 79280 00:27:09.723 13:31:23 -- common/autotest_common.sh@829 -- # '[' -z 79280 ']' 00:27:09.723 13:31:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.723 13:31:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:09.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.723 13:31:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.723 13:31:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:09.723 13:31:23 -- common/autotest_common.sh@10 -- # set +x 00:27:09.723 13:31:23 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:09.723 [2024-12-16 13:31:24.072564] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:09.723 [2024-12-16 13:31:24.072692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79280 ] 00:27:09.723 [2024-12-16 13:31:24.219905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.985 [2024-12-16 13:31:24.481243] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:09.985 [2024-12-16 13:31:24.481487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.369 13:31:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:11.369 13:31:25 -- common/autotest_common.sh@862 -- # return 0 00:27:11.369 13:31:25 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:27:11.369 nvme0n1 00:27:11.369 13:31:25 -- ftl/ftl.sh@22 -- # clear_lvols 00:27:11.369 13:31:25 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:11.369 13:31:25 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:11.630 13:31:26 -- ftl/common.sh@28 -- # stores=ce810e5c-ce9a-492d-bb44-05c42f5c5193 00:27:11.630 13:31:26 -- ftl/common.sh@29 -- # for lvs in $stores 00:27:11.630 13:31:26 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ce810e5c-ce9a-492d-bb44-05c42f5c5193 00:27:11.891 13:31:26 -- ftl/ftl.sh@23 -- # killprocess 79280 00:27:11.891 13:31:26 -- common/autotest_common.sh@936 -- # '[' -z 79280 ']' 00:27:11.891 13:31:26 -- common/autotest_common.sh@940 -- # kill -0 79280 00:27:11.891 13:31:26 -- common/autotest_common.sh@941 -- # uname 00:27:11.891 13:31:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:11.891 13:31:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79280 00:27:11.891 killing process with pid 79280 00:27:11.891 13:31:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:11.891 13:31:26 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:11.891 13:31:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79280' 00:27:11.891 13:31:26 -- common/autotest_common.sh@955 -- # kill 79280 00:27:11.891 13:31:26 -- common/autotest_common.sh@960 -- # wait 79280 00:27:13.276 13:31:27 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:13.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:13.276 Waiting for block devices as requested 00:27:13.276 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:27:13.537 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:27:13.537 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:13.537 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:27:18.822 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:27:18.822 Remove shared memory files 00:27:18.822 13:31:33 -- ftl/ftl.sh@28 -- # remove_shm 00:27:18.822 13:31:33 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:18.822 13:31:33 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:18.822 13:31:33 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:18.822 13:31:33 -- ftl/common.sh@207 -- # rm -f rm -f 00:27:18.822 13:31:33 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:18.822 13:31:33 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:18.822 ************************************ 00:27:18.822 END TEST ftl 00:27:18.822 ************************************ 00:27:18.822 00:27:18.822 real 13m0.559s 00:27:18.822 user 15m20.597s 00:27:18.822 sys 1m20.205s 00:27:18.822 13:31:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:18.822 13:31:33 -- common/autotest_common.sh@10 -- # set +x 00:27:18.822 13:31:33 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:18.822 13:31:33 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:27:18.822 13:31:33 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:18.822 13:31:33 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:18.822 13:31:33 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:27:18.822 13:31:33 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:27:18.822 13:31:33 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:27:18.822 13:31:33 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:27:18.822 13:31:33 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:27:18.822 13:31:33 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:27:18.822 13:31:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:18.822 13:31:33 -- common/autotest_common.sh@10 -- # set +x 00:27:18.822 13:31:33 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:27:18.822 13:31:33 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:27:18.822 13:31:33 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:27:18.822 13:31:33 -- common/autotest_common.sh@10 -- # set +x 00:27:20.208 INFO: APP EXITING 00:27:20.208 INFO: killing all VMs 00:27:20.208 INFO: killing vhost app 00:27:20.208 INFO: EXIT DONE 00:27:20.469 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:20.775 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:27:20.775 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:27:20.775 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:20.775 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:21.388 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
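The two '/home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82/89: [: -eq: unary operator expected' errors above come from a test whose left operand expanded to nothing, so '[' received only '-eq 1'. A minimal reproduction and the usual guard; VAR is illustrative, not the variable ftl.sh actually tests:

    # Reproduction of the ftl.sh@82/@89 failure: an unset, unquoted variable
    # leaves '[' with no left operand for -eq.
    unset VAR
    [ $VAR -eq 1 ] && echo enabled    # -> "[: -eq: unary operator expected"

    # Guard: quote the expansion and supply a default so the numeric test
    # always has two operands.
    [ "${VAR:-0}" -eq 1 ] && echo enabled   # safe; 0 -eq 1 is simply false

'[' exits with status 2 on this error, so the guarded command simply does not run and the script proceeds, which is why the run above still reaches cleanup.
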
00:27:21.388 Cleaning 00:27:21.388 Removing: /var/run/dpdk/spdk0/config 00:27:21.388 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:21.388 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:21.388 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:21.388 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:21.388 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:21.388 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:21.388 Removing: /var/run/dpdk/spdk0 00:27:21.388 Removing: /var/run/dpdk/spdk_pid55983 00:27:21.388 Removing: /var/run/dpdk/spdk_pid56179 00:27:21.388 Removing: /var/run/dpdk/spdk_pid56473 00:27:21.388 Removing: /var/run/dpdk/spdk_pid56578 00:27:21.388 Removing: /var/run/dpdk/spdk_pid56662 00:27:21.388 Removing: /var/run/dpdk/spdk_pid56771 00:27:21.388 Removing: /var/run/dpdk/spdk_pid56857 00:27:21.388 Removing: /var/run/dpdk/spdk_pid56891 00:27:21.388 Removing: /var/run/dpdk/spdk_pid56933 00:27:21.388 Removing: /var/run/dpdk/spdk_pid57008 00:27:21.388 Removing: /var/run/dpdk/spdk_pid57092 00:27:21.388 Removing: /var/run/dpdk/spdk_pid57516 00:27:21.388 Removing: /var/run/dpdk/spdk_pid57582 00:27:21.388 Removing: /var/run/dpdk/spdk_pid57634 00:27:21.388 Removing: /var/run/dpdk/spdk_pid57650 00:27:21.388 Removing: /var/run/dpdk/spdk_pid57749 00:27:21.388 Removing: /var/run/dpdk/spdk_pid57765 00:27:21.388 Removing: /var/run/dpdk/spdk_pid57862 00:27:21.388 Removing: /var/run/dpdk/spdk_pid57874 00:27:21.388 Removing: /var/run/dpdk/spdk_pid57927 00:27:21.388 Removing: /var/run/dpdk/spdk_pid57945 00:27:21.388 Removing: /var/run/dpdk/spdk_pid57998 00:27:21.388 Removing: /var/run/dpdk/spdk_pid58016 00:27:21.388 Removing: /var/run/dpdk/spdk_pid58166 00:27:21.388 Removing: /var/run/dpdk/spdk_pid58208 00:27:21.388 Removing: /var/run/dpdk/spdk_pid58289 00:27:21.388 Removing: /var/run/dpdk/spdk_pid58349 00:27:21.388 Removing: /var/run/dpdk/spdk_pid58380 00:27:21.388 Removing: /var/run/dpdk/spdk_pid58453 00:27:21.388 Removing: /var/run/dpdk/spdk_pid58472 00:27:21.388 Removing: /var/run/dpdk/spdk_pid58509 00:27:21.388 Removing: /var/run/dpdk/spdk_pid58535 00:27:21.388 Removing: /var/run/dpdk/spdk_pid58576 00:27:21.388 Removing: /var/run/dpdk/spdk_pid58602 00:27:21.388 Removing: /var/run/dpdk/spdk_pid58643 00:27:21.388 Removing: /var/run/dpdk/spdk_pid58669 00:27:21.388 Removing: /var/run/dpdk/spdk_pid58710 00:27:21.388 Removing: /var/run/dpdk/spdk_pid58736 00:27:21.650 Removing: /var/run/dpdk/spdk_pid58772 00:27:21.650 Removing: /var/run/dpdk/spdk_pid58798 00:27:21.650 Removing: /var/run/dpdk/spdk_pid58839 00:27:21.650 Removing: /var/run/dpdk/spdk_pid58859 00:27:21.650 Removing: /var/run/dpdk/spdk_pid58900 00:27:21.650 Removing: /var/run/dpdk/spdk_pid58926 00:27:21.650 Removing: /var/run/dpdk/spdk_pid58967 00:27:21.650 Removing: /var/run/dpdk/spdk_pid58995 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59031 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59058 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59099 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59127 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59168 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59188 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59229 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59252 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59291 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59317 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59358 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59384 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59425 00:27:21.650 Removing: 
/var/run/dpdk/spdk_pid59451 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59492 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59521 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59565 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59594 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59638 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59659 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59700 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59726 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59768 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59851 00:27:21.650 Removing: /var/run/dpdk/spdk_pid59963 00:27:21.650 Removing: /var/run/dpdk/spdk_pid60136 00:27:21.650 Removing: /var/run/dpdk/spdk_pid60228 00:27:21.650 Removing: /var/run/dpdk/spdk_pid60270 00:27:21.650 Removing: /var/run/dpdk/spdk_pid60707 00:27:21.650 Removing: /var/run/dpdk/spdk_pid60875 00:27:21.650 Removing: /var/run/dpdk/spdk_pid60995 00:27:21.650 Removing: /var/run/dpdk/spdk_pid61043 00:27:21.650 Removing: /var/run/dpdk/spdk_pid61074 00:27:21.650 Removing: /var/run/dpdk/spdk_pid61151 00:27:21.650 Removing: /var/run/dpdk/spdk_pid61805 00:27:21.650 Removing: /var/run/dpdk/spdk_pid61842 00:27:21.650 Removing: /var/run/dpdk/spdk_pid62332 00:27:21.650 Removing: /var/run/dpdk/spdk_pid62454 00:27:21.650 Removing: /var/run/dpdk/spdk_pid62569 00:27:21.650 Removing: /var/run/dpdk/spdk_pid62622 00:27:21.650 Removing: /var/run/dpdk/spdk_pid62647 00:27:21.650 Removing: /var/run/dpdk/spdk_pid62678 00:27:21.650 Removing: /var/run/dpdk/spdk_pid64606 00:27:21.650 Removing: /var/run/dpdk/spdk_pid64745 00:27:21.650 Removing: /var/run/dpdk/spdk_pid64750 00:27:21.650 Removing: /var/run/dpdk/spdk_pid64762 00:27:21.650 Removing: /var/run/dpdk/spdk_pid64811 00:27:21.650 Removing: /var/run/dpdk/spdk_pid64819 00:27:21.650 Removing: /var/run/dpdk/spdk_pid64832 00:27:21.650 Removing: /var/run/dpdk/spdk_pid64887 00:27:21.650 Removing: /var/run/dpdk/spdk_pid64891 00:27:21.650 Removing: /var/run/dpdk/spdk_pid64908 00:27:21.650 Removing: /var/run/dpdk/spdk_pid64953 00:27:21.650 Removing: /var/run/dpdk/spdk_pid64957 00:27:21.650 Removing: /var/run/dpdk/spdk_pid64969 00:27:21.650 Removing: /var/run/dpdk/spdk_pid66396 00:27:21.650 Removing: /var/run/dpdk/spdk_pid66494 00:27:21.650 Removing: /var/run/dpdk/spdk_pid66623 00:27:21.650 Removing: /var/run/dpdk/spdk_pid66716 00:27:21.650 Removing: /var/run/dpdk/spdk_pid66800 00:27:21.650 Removing: /var/run/dpdk/spdk_pid66877 00:27:21.650 Removing: /var/run/dpdk/spdk_pid66971 00:27:21.650 Removing: /var/run/dpdk/spdk_pid67045 00:27:21.650 Removing: /var/run/dpdk/spdk_pid67186 00:27:21.650 Removing: /var/run/dpdk/spdk_pid67580 00:27:21.650 Removing: /var/run/dpdk/spdk_pid67617 00:27:21.650 Removing: /var/run/dpdk/spdk_pid68055 00:27:21.650 Removing: /var/run/dpdk/spdk_pid68235 00:27:21.650 Removing: /var/run/dpdk/spdk_pid68339 00:27:21.650 Removing: /var/run/dpdk/spdk_pid68450 00:27:21.650 Removing: /var/run/dpdk/spdk_pid68503 00:27:21.650 Removing: /var/run/dpdk/spdk_pid68534 00:27:21.650 Removing: /var/run/dpdk/spdk_pid68837 00:27:21.650 Removing: /var/run/dpdk/spdk_pid68899 00:27:21.650 Removing: /var/run/dpdk/spdk_pid68981 00:27:21.650 Removing: /var/run/dpdk/spdk_pid69372 00:27:21.650 Removing: /var/run/dpdk/spdk_pid69516 00:27:21.650 Removing: /var/run/dpdk/spdk_pid70327 00:27:21.650 Removing: /var/run/dpdk/spdk_pid70465 00:27:21.650 Removing: /var/run/dpdk/spdk_pid70630 00:27:21.650 Removing: /var/run/dpdk/spdk_pid70733 00:27:21.650 Removing: /var/run/dpdk/spdk_pid71079 00:27:21.650 Removing: /var/run/dpdk/spdk_pid71333 
00:27:21.650 Removing: /var/run/dpdk/spdk_pid71694 00:27:21.650 Removing: /var/run/dpdk/spdk_pid71897 00:27:21.650 Removing: /var/run/dpdk/spdk_pid72051 00:27:21.650 Removing: /var/run/dpdk/spdk_pid72108 00:27:21.650 Removing: /var/run/dpdk/spdk_pid72295 00:27:21.650 Removing: /var/run/dpdk/spdk_pid72320 00:27:21.650 Removing: /var/run/dpdk/spdk_pid72380 00:27:21.650 Removing: /var/run/dpdk/spdk_pid72616 00:27:21.650 Removing: /var/run/dpdk/spdk_pid72874 00:27:21.650 Removing: /var/run/dpdk/spdk_pid73501 00:27:21.650 Removing: /var/run/dpdk/spdk_pid74191 00:27:21.650 Removing: /var/run/dpdk/spdk_pid74784 00:27:21.650 Removing: /var/run/dpdk/spdk_pid75651 00:27:21.650 Removing: /var/run/dpdk/spdk_pid75806 00:27:21.650 Removing: /var/run/dpdk/spdk_pid75882 00:27:21.650 Removing: /var/run/dpdk/spdk_pid76426 00:27:21.650 Removing: /var/run/dpdk/spdk_pid76480 00:27:21.650 Removing: /var/run/dpdk/spdk_pid77083 00:27:21.650 Removing: /var/run/dpdk/spdk_pid77541 00:27:21.911 Removing: /var/run/dpdk/spdk_pid78304 00:27:21.911 Removing: /var/run/dpdk/spdk_pid78428 00:27:21.911 Removing: /var/run/dpdk/spdk_pid78477 00:27:21.911 Removing: /var/run/dpdk/spdk_pid78541 00:27:21.911 Removing: /var/run/dpdk/spdk_pid78598 00:27:21.911 Removing: /var/run/dpdk/spdk_pid78656 00:27:21.911 Removing: /var/run/dpdk/spdk_pid78872 00:27:21.911 Removing: /var/run/dpdk/spdk_pid78917 00:27:21.911 Removing: /var/run/dpdk/spdk_pid78991 00:27:21.911 Removing: /var/run/dpdk/spdk_pid79064 00:27:21.911 Removing: /var/run/dpdk/spdk_pid79105 00:27:21.911 Removing: /var/run/dpdk/spdk_pid79172 00:27:21.911 Removing: /var/run/dpdk/spdk_pid79280 00:27:21.911 Clean 00:27:21.911 killing process with pid 48176 00:27:21.911 killing process with pid 48180 00:27:21.911 13:31:36 -- common/autotest_common.sh@1446 -- # return 0 00:27:21.911 13:31:36 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:27:21.911 13:31:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:21.911 13:31:36 -- common/autotest_common.sh@10 -- # set +x 00:27:21.911 13:31:36 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:27:21.911 13:31:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:21.911 13:31:36 -- common/autotest_common.sh@10 -- # set +x 00:27:21.911 13:31:36 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:21.911 13:31:36 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:21.911 13:31:36 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:21.911 13:31:36 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:27:21.911 13:31:36 -- spdk/autotest.sh@383 -- # hostname 00:27:21.911 13:31:36 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:22.172 geninfo: WARNING: invalid characters removed from testname! 
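The coverage steps traced here, from the hostname-tagged capture above through the merges and filters that follow, form a capture / merge / strip pipeline. A condensed sketch using the paths from the log; the long repeated --rc coverage flags and the --ignore-errors option used for the '/usr/*' pass are elided for readability:

    # Condensed form of the lcov pipeline in this log.
    SRC=/home/vagrant/spdk_repo/spdk
    OUT=$SRC/../output

    # 1. Capture counters from the test run, tagged with the VM hostname.
    lcov -q -c --no-external -d "$SRC" -t "$(hostname)" -o "$OUT/cov_test.info"

    # 2. Add the pre-test baseline and the test capture together.
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

    # 3. Strip vendored code, system headers, and tool directories.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
               '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
    done
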
00:27:48.757 13:31:59 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:48.757 13:32:01 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:49.700 13:32:04 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:51.629 13:32:05 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:53.545 13:32:08 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:55.460 13:32:09 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:57.378 13:32:11 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:57.378 13:32:11 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:27:57.378 13:32:11 -- common/autotest_common.sh@1690 -- $ lcov --version 00:27:57.378 13:32:11 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:27:57.378 13:32:11 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:27:57.378 13:32:11 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:27:57.378 13:32:11 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:27:57.378 13:32:11 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:27:57.378 13:32:11 -- scripts/common.sh@335 -- $ IFS=.-: 00:27:57.378 13:32:11 -- scripts/common.sh@335 -- $ read -ra ver1 00:27:57.378 13:32:11 -- scripts/common.sh@336 -- $ IFS=.-: 00:27:57.378 13:32:11 -- scripts/common.sh@336 -- $ read -ra ver2 00:27:57.378 13:32:11 -- scripts/common.sh@337 -- $ local 'op=<' 00:27:57.378 13:32:11 -- scripts/common.sh@339 -- $ ver1_l=2 00:27:57.378 13:32:11 -- scripts/common.sh@340 -- $ ver2_l=1 00:27:57.378 13:32:11 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 
v 00:27:57.378 13:32:11 -- scripts/common.sh@343 -- $ case "$op" in 00:27:57.378 13:32:11 -- scripts/common.sh@344 -- $ : 1 00:27:57.378 13:32:11 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:27:57.378 13:32:11 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:57.378 13:32:11 -- scripts/common.sh@364 -- $ decimal 1 00:27:57.378 13:32:11 -- scripts/common.sh@352 -- $ local d=1 00:27:57.378 13:32:11 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:27:57.378 13:32:11 -- scripts/common.sh@354 -- $ echo 1 00:27:57.378 13:32:11 -- scripts/common.sh@364 -- $ ver1[v]=1 00:27:57.378 13:32:11 -- scripts/common.sh@365 -- $ decimal 2 00:27:57.378 13:32:11 -- scripts/common.sh@352 -- $ local d=2 00:27:57.378 13:32:11 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:27:57.378 13:32:11 -- scripts/common.sh@354 -- $ echo 2 00:27:57.378 13:32:11 -- scripts/common.sh@365 -- $ ver2[v]=2 00:27:57.378 13:32:11 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:27:57.378 13:32:11 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:27:57.378 13:32:11 -- scripts/common.sh@367 -- $ return 0 00:27:57.378 13:32:11 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:57.378 13:32:11 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:27:57.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.378 --rc genhtml_branch_coverage=1 00:27:57.378 --rc genhtml_function_coverage=1 00:27:57.378 --rc genhtml_legend=1 00:27:57.378 --rc geninfo_all_blocks=1 00:27:57.378 --rc geninfo_unexecuted_blocks=1 00:27:57.378 00:27:57.378 ' 00:27:57.378 13:32:11 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:27:57.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.378 --rc genhtml_branch_coverage=1 00:27:57.378 --rc genhtml_function_coverage=1 00:27:57.378 --rc genhtml_legend=1 00:27:57.378 --rc geninfo_all_blocks=1 00:27:57.378 --rc geninfo_unexecuted_blocks=1 00:27:57.378 00:27:57.378 ' 00:27:57.378 13:32:11 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:27:57.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.378 --rc genhtml_branch_coverage=1 00:27:57.378 --rc genhtml_function_coverage=1 00:27:57.378 --rc genhtml_legend=1 00:27:57.378 --rc geninfo_all_blocks=1 00:27:57.378 --rc geninfo_unexecuted_blocks=1 00:27:57.378 00:27:57.378 ' 00:27:57.378 13:32:11 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:27:57.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.378 --rc genhtml_branch_coverage=1 00:27:57.378 --rc genhtml_function_coverage=1 00:27:57.378 --rc genhtml_legend=1 00:27:57.378 --rc geninfo_all_blocks=1 00:27:57.378 --rc geninfo_unexecuted_blocks=1 00:27:57.378 00:27:57.378 ' 00:27:57.378 13:32:11 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:57.378 13:32:11 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:27:57.378 13:32:11 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:57.378 13:32:11 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:57.378 13:32:11 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.378 13:32:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.378 13:32:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.378 13:32:11 -- paths/export.sh@5 -- $ export PATH 00:27:57.378 13:32:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.378 13:32:11 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:57.378 13:32:11 -- common/autobuild_common.sh@440 -- $ date +%s 00:27:57.378 13:32:11 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734355931.XXXXXX 00:27:57.378 13:32:11 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734355931.bmWkHf 00:27:57.378 13:32:11 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:27:57.378 13:32:11 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:27:57.378 13:32:11 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:27:57.378 13:32:11 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:57.378 13:32:11 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:57.378 13:32:11 -- common/autobuild_common.sh@456 -- $ get_config_params 00:27:57.378 13:32:11 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:27:57.378 13:32:11 -- common/autotest_common.sh@10 -- $ set +x 00:27:57.378 13:32:11 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:27:57.378 13:32:11 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:27:57.378 13:32:11 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:57.378 13:32:11 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:57.378 13:32:11 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:27:57.378 13:32:11 -- 
spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:57.378 13:32:11 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:57.378 13:32:11 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:57.378 13:32:11 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:57.378 13:32:11 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:57.378 13:32:11 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:57.378 + [[ -n 4991 ]] 00:27:57.378 + sudo kill 4991 00:27:57.390 [Pipeline] } 00:27:57.405 [Pipeline] // timeout 00:27:57.411 [Pipeline] } 00:27:57.425 [Pipeline] // stage 00:27:57.430 [Pipeline] } 00:27:57.446 [Pipeline] // catchError 00:27:57.455 [Pipeline] stage 00:27:57.457 [Pipeline] { (Stop VM) 00:27:57.470 [Pipeline] sh 00:27:57.752 + vagrant halt 00:28:00.295 ==> default: Halting domain... 00:28:06.889 [Pipeline] sh 00:28:07.173 + vagrant destroy -f 00:28:09.711 ==> default: Removing domain... 00:28:10.296 [Pipeline] sh 00:28:10.580 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:28:10.591 [Pipeline] } 00:28:10.609 [Pipeline] // stage 00:28:10.622 [Pipeline] } 00:28:10.665 [Pipeline] // dir 00:28:10.676 [Pipeline] } 00:28:10.689 [Pipeline] // wrap 00:28:10.693 [Pipeline] } 00:28:10.700 [Pipeline] // catchError 00:28:10.705 [Pipeline] stage 00:28:10.707 [Pipeline] { (Epilogue) 00:28:10.716 [Pipeline] sh 00:28:11.049 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:16.333 [Pipeline] catchError 00:28:16.335 [Pipeline] { 00:28:16.346 [Pipeline] sh 00:28:16.627 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:16.627 Artifacts sizes are good 00:28:16.637 [Pipeline] } 00:28:16.650 [Pipeline] // catchError 00:28:16.660 [Pipeline] archiveArtifacts 00:28:16.667 Archiving artifacts 00:28:16.759 [Pipeline] cleanWs 00:28:16.770 [WS-CLEANUP] Deleting project workspace... 00:28:16.770 [WS-CLEANUP] Deferred wipeout is used... 00:28:16.777 [WS-CLEANUP] done 00:28:16.778 [Pipeline] } 00:28:16.793 [Pipeline] // stage 00:28:16.798 [Pipeline] } 00:28:16.811 [Pipeline] // node 00:28:16.817 [Pipeline] End of Pipeline 00:28:16.858 Finished: SUCCESS
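For reference, the timing_finish step traced near the end of the run (autopackage.sh@19, common/autotest_common.sh@734-737) renders the per-step timing log as a flamegraph when the renderer is installed. A sketch of that step; the output redirection is an assumption, since the trace does not show where the SVG is written:

    # Sketch of timing_finish: render timing.txt into a flamegraph
    # if the FlameGraph renderer is present on the build VM.
    flamegraph=/usr/local/FlameGraph/flamegraph.pl
    timing=/home/vagrant/spdk_repo/spdk/../output/timing.txt
    if [ -x "$flamegraph" ]; then
        # flamegraph.pl writes SVG to stdout; the destination file is assumed.
        "$flamegraph" --title 'Build Timing' --nametype Step: \
            --countname seconds "$timing" > timing.svg
    fi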